%0 Journal Article
%T A Comparison between Position-Based and Image-Based Dynamic Visual Servoings in the Control of a Translating Parallel Manipulator
%A G. Palmieri
%A M. Palpacelli
%A M. Battistelli
%A M. Callegari
%J Journal of Robotics
%D 2012
%I Hindawi Publishing Corporation
%R 10.1155/2012/103954
%X Two different visual servoing controls have been developed to govern a translating parallel manipulator with an eye-in-hand configuration: a position-based and an image-based controller. The robot must be able to reach and grasp a target randomly positioned in the workspace; the control must be adaptive to compensate for motions of the target in 3D space. The trajectory planning strategy ensures the continuity of the velocity vector for both PBVS and IBVS controls whenever a replanning event is needed. A comparison between the two approaches is given in terms of accuracy, speed, and stability in relation to the robot's peculiar characteristics.

1. Introduction

Visual servoing is the use of computer vision to control the motion of a robot; two basic approaches can be identified [1–4]: position-based visual servo (PBVS), in which vision data are used to reconstruct the 3D pose of the robot, and a kinematic error is generated in Cartesian space and mapped to actuator commands [5–7]; and image-based visual servo (IBVS), in which the error is generated directly from image-plane features [8–15]. Recently, a new family of hybrid or partitioned methods has been growing, with the aim of combining the advantages of PBVS and IBVS while avoiding their shortcomings [16, 17]. The principal advantage of position-based control is the possibility of defining tasks in a standard Cartesian frame. On the other hand, the control law strongly depends on the optical parameters of the vision system and can become highly sensitive to calibration errors.
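The PBVS idea described above (a kinematic error formed in Cartesian space from the vision-reconstructed pose) can be sketched minimally. Since the manipulator in this paper is purely translational, the pose error reduces to a position error; the function name and the proportional gain below are illustrative, not taken from the paper:

```python
import numpy as np

def pbvs_velocity(t_current, t_desired, gain=0.5):
    """Proportional PBVS law for a purely translational robot.

    The vision system reconstructs the 3D position of the end effector
    (or target) in the Cartesian frame; the commanded Cartesian velocity
    is v = -gain * (t - t*), driving the position error to zero.
    """
    e = np.asarray(t_current, dtype=float) - np.asarray(t_desired, dtype=float)
    return -gain * e

# The Cartesian velocity would then be mapped to actuator commands
# through the (inverse) Jacobian of the manipulator.
v = pbvs_velocity([0.10, 0.00, 0.30], [0.00, 0.00, 0.25], gain=1.0)
```

Because the error lives in the Cartesian frame, the task is easy to specify; the trade-off noted in the text is that the reconstructed pose, and hence this error, inherits any camera calibration error.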
On the contrary, image-based control is less sensitive to calibration errors; however, it requires the online computation of the image Jacobian, a quantity that depends on the distance between the target and the camera, which is difficult to evaluate. A control law defined in the image plane is also strongly nonlinear and coupled when mapped onto the joint space of the robot, and it may cause problems when crossing points that are singular for the kinematics of the manipulator [1]. Visual servo systems can also be classified, on the basis of their architecture, into the following two categories [1]: the vision system provides an external input to the joint closed-loop control of the robot, which stabilizes the mechanism (dynamic look and move); or the vision system is used directly in the control loop to compute joint inputs, thus stabilizing the robot autonomously (direct visual servo). In general, most applications are of the dynamic look and move type; one of the reasons is the difference between vision systems and
%U http://www.hindawi.com/journals/jr/2012/103954/
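A minimal sketch of the IBVS quantities mentioned above, assuming point features expressed in normalized image coordinates. The interaction matrix used here is the standard one for a point feature; the depth Z appearing in it is precisely the camera-to-target distance the text flags as difficult to evaluate. Function names and the gain are illustrative:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian (interaction matrix) of one point feature (x, y)
    at depth Z, relating the 6-DOF camera velocity screw to the
    feature velocity in the image plane."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,         -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y**2,    -x * y,        -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law: v = -gain * L^+ * e, where e is the stacked
    image-plane feature error and L^+ is the pseudoinverse of the
    stacked interaction matrix."""
    features = np.asarray(features, dtype=float)
    desired = np.asarray(desired, dtype=float)
    e = (features - desired).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ e
```

The error is driven to zero directly in the image plane, which is why calibration errors matter less; but L must be rebuilt online as Z changes, and mapping the resulting camera velocity onto the joint space introduces the nonlinearity and coupling discussed in the text.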