Two different visual servoing controls have been developed to govern a translating parallel manipulator with an eye-in-hand configuration: a position-based and an image-based controller. The robot must be able to reach and grasp a target randomly positioned in the workspace; the control must be adaptive in order to compensate for motions of the target in 3D space. The trajectory planning strategy ensures the continuity of the velocity vector for both the PBVS and IBVS controls whenever a replanning event is needed. A comparison between the two approaches is given in terms of accuracy, speed, and stability in relation to the peculiar characteristics of the robot.

1. Introduction

Visual servoing is the use of computer vision to control the motion of a robot. Two basic approaches can be identified [1–4]: position-based visual servo (PBVS), in which vision data are used to reconstruct the 3D pose of the robot and a kinematic error is generated in the Cartesian space and mapped to actuator commands [5–7], and image-based visual servo (IBVS), in which the error is generated directly from image-plane features [8–15]. Recently, a new family of hybrid or partitioned methods has been growing, with the aim of combining the advantages of PBVS and IBVS while avoiding their shortcomings [16, 17].

The principal advantage of position-based control is the possibility of defining tasks in a standard Cartesian frame. On the other hand, the control law strongly depends on the optical parameters of the vision system and can become highly sensitive to calibration errors. Image-based control, on the contrary, is less sensitive to calibration errors; however, it requires the online computation of the image Jacobian, a quantity that depends on the distance between the target and the camera, which is difficult to evaluate (a minimal sketch of this computation is given at the end of this section). A control designed in the image plane also turns out to be strongly nonlinear and coupled when mapped onto the joint space of the robot and may cause problems when crossing points that are singular for the kinematics of the manipulator [1].

Visual servo systems can also be classified on the basis of their architecture into the following two categories [1]: dynamic look and move, in which the vision system provides an external input to the joint closed-loop control of the robot, which stabilizes the mechanism, and direct visual servo, in which the vision system is used directly in the control loop to compute the joint inputs, thus stabilizing the robot autonomously. In general, most applications are of the dynamic look and move type; one of the reasons is the difference between the relatively low sampling rate of vision systems and the much higher rate required by the joint control loops of the robot.
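To make the role of the image Jacobian concrete, the following minimal Python sketch implements the classic IBVS law v_c = -λ L̂⁺ (s - s*) for point features, using the standard interaction matrix of a normalized image point as given by Chaumette and Hutchinson [3]. It is an illustrative sketch, not the controllers developed in this work: the function names and the constant gain are assumptions, and the depth estimates Z it takes as input are exactly the hard-to-measure quantities discussed above.

import numpy as np

def interaction_matrix(x, y, Z):
    # 2x6 interaction matrix of a normalized image point (x, y) at depth Z;
    # it maps the camera twist (vx, vy, vz, wx, wy, wz) to the image-plane
    # velocity (x_dot, y_dot), following the classic formulation in [3].
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_camera_twist(features, desired, depths, gain=0.5):
    # One step of the standard IBVS law v_c = -gain * pinv(L) * (s - s*).
    # 'features' and 'desired' are lists of (x, y) normalized image points;
    # 'depths' holds the estimated point depths Z, whose uncertainty is the
    # main practical difficulty mentioned in the text.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    # The pseudo-inverse handles the stacked, generally nonsquare Jacobian.
    return -gain * np.linalg.pinv(L) @ e  # 6-component camera twist

# Hypothetical usage: four point features near their desired positions.
s = [(0.12, 0.05), (-0.10, 0.06), (-0.11, -0.04), (0.09, -0.05)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v_c = ibvs_camera_twist(s, s_star, depths=[0.5] * 4)

With at least three noncollinear points the stacked Jacobian has full column rank away from degenerate configurations, so the pseudo-inverse yields a unique least-squares camera twist; in PBVS the analogous step would instead map a Cartesian pose error, reconstructed from the vision data, to the twist.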
References
[1] S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
[2] P. I. Corke and S. A. Hutchinson, “Real-time vision, tracking and control,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 622–629, April 2000.
[3] F. Chaumette and S. Hutchinson, “Visual servo control. I. Basic approaches,” IEEE Robotics and Automation Magazine, vol. 13, no. 4, pp. 82–90, 2006.
[4] M. Staniak and C. Zieliński, “Structures of visual servos,” Robotics and Autonomous Systems, 2010.
[5] P. Martinet, J. Gallice, and D. Khadraoui, “Vision based control law using 3D visual features,” in World Automation Congress, Robotics and Manufacturing Systems (WAC '96), vol. 3, pp. 497–502, 1996.
[6] W. J. Wilson, C. C. W. Hulls, and G. S. Bell, “Relative end-effector control using Cartesian position based visual servoing,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 684–696, 1996.
[7] N. R. Gans, A. P. Dani, and W. E. Dixon, “Visual servoing to an arbitrary pose with respect to an object given a single known length,” in Proceedings of the American Control Conference (ACC '08), pp. 1261–1267, June 2008.
[8] L. E. Weiss, A. C. Sanderson, and C. P. Neuman, “Dynamic visual servo control of robots: an adaptive image-based approach,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 662–668, 1985.
[9] J. T. Feddema and O. R. Mitchell, “Vision-guided servoing with feature-based trajectory generation,” IEEE Transactions on Robotics and Automation, vol. 5, no. 5, pp. 691–700, 1989.
[10] K. Hashimoto, T. Kimoto, T. Ebine, and H. Kimura, “Manipulator control with image-based visual servo,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2267–2271, April 1991.
[11] Z. Qi and J. E. McInroy, “Improved image based visual servoing with parallel robot,” Journal of Intelligent and Robotic Systems, vol. 53, no. 4, pp. 359–379, 2008.
[12] O. Bourquardez, R. Mahony, N. Guenard, F. Chaumette, T. Hamel, and L. Eck, “Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle,” IEEE Transactions on Robotics, vol. 25, no. 3, pp. 743–749, 2009.
[13] U. Khan, I. Jan, N. Iqbal, and J. Dai, “Uncalibrated eye-in-hand visual servoing: an LMI approach,” Industrial Robot, vol. 38, no. 2, pp. 130–138, 2011.
[14] D. Fioravanti, B. Allotta, and A. Rindi, “Image based visual servoing for robot positioning tasks,” Meccanica, vol. 43, no. 3, pp. 291–305, 2008.
[15] A. De Luca, G. Oriolo, and P. R. Giordano, “Image-based visual servoing schemes for nonholonomic mobile manipulators,” Robotica, vol. 25, no. 2, pp. 131–145, 2007.
[16] F. Chaumette and S. Hutchinson, “Visual servo control. II. Advanced approaches [Tutorial],” IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 109–118, 2007.
[17] N. R. Gans, S. A. Hutchinson, and P. I. Corke, “Performance tests for visual servo control systems with application to partitioned approaches to visual servo control,” International Journal of Robotics Research, vol. 22, no. 10-11, pp. 955–981, 2003.
[18] T. W. Sederberg, “Computer aided geometric design,” BYU, Computer Aided Geometric Design Course Notes, 2011.
[19] J. W. Choi, R. Curry, and G. Elkaim, “Path planning based on Bézier curve for autonomous ground vehicles,” in Advances in Electrical and Electronics Engineering—IAENG Special Edition of the World Congress on Engineering and Computer Science (WCECS '08), pp. 158–166, October 2008.
[20] M. Callegari and M. C. Palpacelli, “Prototype design of a translating parallel robot,” Meccanica, vol. 43, no. 2, pp. 133–151, 2008.
[21] D. C. Brown, “Decentering distortion of lenses,” Photogrammetric Engineering, vol. 32, no. 3, pp. 444–462, 1966.
[22] J. Heikkilä and O. Silvén, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '97), pp. 1106–1112, June 1997.
[23] R. Y. Tsai, “A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Journal of Robotics and Automation, vol. 3, no. 4, pp. 323–344, 1987.
[24] B. Bishop, S. Hutchinson, and M. Spong, “Camera modelling for visual servo control applications,” Mathematical and Computer Modelling, vol. 24, no. 5-6, pp. 79–102, 1996.
[25] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
[26] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, Cambridge, Mass, USA, 1993.
[27] B. Siciliano and O. Khatib, Eds., Springer Handbook of Robotics, Springer, 2008.
[28] M. Callegari, “Design and prototyping of a spherical parallel machine based on 3-CPU kinematics,” in Parallel Manipulators: New Developments, J.-H. Ryu, Ed., pp. 171–198, I-Tech, 2008.