This paper presents a visual/motor behavior learning approach based on neural networks. We propose the Behavior Chain Model (BCM) as a way of learning behavior. The evolution task of our behavior-based system is a mobile robot detecting a target and driving toward it to act on it. First, the mapping relations between the image feature domain of the object and the robot action domain are derived. Second, a multilayer neural network is used for offline learning of these mapping relations. Through the training process, this learning structure connects the visual perceptions to the motor sequence of actions required to grip a target. Last, using behavior learning over the observed action chain, we can predict the mobile robot's behavior for a variety of similar tasks in similar environments. Prediction results suggest that the methodology is adequate and could serve as a basis for designing various kinds of mobile robot behavior assistance.

1. Introduction

Robotics research covers a wide range of application scenarios, from industrial and service robots to robotic assistance for disabled or elderly people. Robots in industry, mining, agriculture, space exploration, and the health sciences are just a few examples of challenging applications where human attributes such as cognition, perception, and intelligence can play an important role. Instilling perception and cognition, and hence intelligence, into robotic machines is the main aim in constructing a robot able to “think” and operate in uncertain and unstructured conditions. To successfully realize an instructed capability (e.g., object manipulation, haptically guided teleoperation, or robotic surgery manipulation), a robot must extract the relevant input/output control signals from the manipulation task in order to learn the control sequences necessary for task execution [1].

The concept of the visual-motor mapping, which describes the relationship between visually perceived features and the motor signals necessary to act on them, is well established in robotics [2]. Many visual-motor mappings can be defined between cameras and a robot. Since the large variation of visual inputs makes it nearly impossible to represent the sequence of actions explicitly, such knowledge must be obtained from a set of examples using machine learning techniques [3]. A robot fulfills its purpose using its learning and prediction skills. Predictive strategies in robotics may be implemented in the following ways [4, 5]:

(i) Model-based reinforcement learning. The environment model is learnt in order to predict the outcomes of candidate actions before they are executed.
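To make the offline-learned visual-motor mapping concrete, the sketch below trains a small multilayer network that maps visual features of a detected target (horizontal image offset and apparent size) to wheel-velocity commands. This is a minimal illustration under stated assumptions, not the authors' BCM implementation: the feature set, the network size, and the steering rule used to synthesize the training pairs are all choices made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set of perception-action pairs (assumed, for illustration):
# features are the target's horizontal offset in the image and its normalized area.
offset = rng.uniform(-1.0, 1.0, size=(500, 1))     # <0: target left, >0: right
area = rng.uniform(0.0, 1.0, size=(500, 1))        # larger area = closer target
X = np.hstack([offset, area])

# Illustrative steering rule used only to generate targets: turn toward the
# target and slow down as it appears larger (differential drive).
turn = 0.5 * offset
speed = 1.0 - area
Y = np.hstack([speed + turn, speed - turn])        # (v_left, v_right)

# Two-layer perceptron with tanh hidden units, trained offline by
# full-batch gradient descent on the squared error.
W1 = rng.normal(0.0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 2)); b2 = np.zeros(2)
lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)                       # hidden activations
    P = H @ W2 + b2                                # predicted motor commands
    err = P - Y                                    # gradient of squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)             # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# After offline training, the network is queried online with fresh
# visual features to produce the next motor command.
features = np.array([0.4, 0.2])                    # target right of center, far
v_left, v_right = np.tanh(features @ W1 + b1) @ W2 + b2
print(f"wheel commands: {v_left:.2f}, {v_right:.2f}")

Plain NumPy and full-batch gradient descent keep the sketch self-contained; any standard multilayer-network training setup would serve the same role of learning the perception-to-action mapping from recorded examples.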
References
[1] A. M. Howard and C. H. Park, “Haptically guided teleoperation for learning manipulation tasks,” in Robotics: Science and Systems: Workshop on Robot Manipulation, Atlanta, Ga, USA, June 2007.
[2] G. Taylor and L. Kleeman, Visual Perception and Robotic Manipulation: 3D Object Recognition, Tracking and Hand-Eye Coordination, Springer, 2006.
[3] Y. Wu, Vision and Learning for Intelligent Human-Computer Interaction [Ph.D. thesis], University of Illinois, 2001.
[4] M. V. Butz, O. Sigaud, and P. Gerard, “Internal models and anticipations in adaptive learning systems,” in Proceedings of the 1st Workshop on Anticipatory Behavior in Adaptive Learning Systems (ABiALS '06), 2006.
[5] A. Barrera, “Anticipatory mechanisms of human sensory-motor coordination inspire control of adaptive robots: a brief review,” in Robot Learning, S. Jabin, Ed., InTech, 2010.
[6] L. Rozo, P. Jimenez, and C. Torras, “Robot learning of container-emptying skills through haptic demonstration,” Tech. Rep. IRI-TR-09-05, Institut de Robòtica i Informàtica Industrial, CSIC-UPC, 2009.
[7] C. Gaskett, L. Fletcher, and A. Zelinsky, “Reinforcement learning for visual servoing of a mobile robot,” in Proceedings of the Australian Conference on Robotics and Automation (ACRA '00), Melbourne, Australia, August 2000.
[8] M. Asada, T. Nakamura, and K. Hosoda, “Behavior acquisition via visual-based robot learning,” in Proceedings of the 7th International Symposium on Robotics Research, 1996.
[9] A. Morales, E. Chinellato, A. H. Fagg, and A. P. del Pobil, “Experimental prediction of the performance of grasp tasks from visual features,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3423–3428, Las Vegas, Nev, USA, October 2003.
[10] H. Hoffmann, “Perception through visuomotor anticipation in a mobile robot,” Neural Networks, vol. 20, no. 1, pp. 22–33, 2007.
[11] E. Datteri, G. Teti, C. Laschi, G. Tamburrini, P. Dario, and E. Guglielmelli, “Expected perception: an anticipation-based perception-action scheme in robots,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 934–939, October 2003.
[12] D. A. Pomerleau, Neural Network Perception for Mobile Robot Guidance, Kluwer, Dordrecht, The Netherlands, 1993.
[13] L. A. Meeden, G. McGraw, and D. Blank, “Emergence of control and planning in an autonomous vehicle,” in Proceedings of the 15th Annual Conference of the Cognitive Science Society, p. 735, Lawrence Erlbaum Associates, Hillsdale, NJ, USA, 1993.
[14] T. Ziemke, “Remembering how to behave: recurrent neural networks for adaptive robot behavior,” in Recurrent Neural Networks: Design and Applications, L. R. Medsker and L. C. Jain, Eds., CRC Press, 2001.
[15] J. Tani, R. Nishimoto, J. Namikawa, and M. Ito, “Codevelopmental learning between human and humanoid robot using a dynamic neural-network model,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 1, pp. 43–59, 2008.
[16] M. Peniak, D. Marocco, J. Tani, Y. Yamashita, K. Fischer, and A. Cangelosi, “Multiple time scales recurrent neural network for complex action acquisition,” in Proceedings of the International Joint Conference on Development and Learning (ICDL) and Epigenetic Robotics (ICDL-EPIROB '11), Frankfurt, Germany, August 2011.
[17] I. Fehervari and W. Elmenreich, “Evolving neural network controllers for a team of self-organizing robots,” Journal of Robotics, vol. 2010, Article ID 841286, 10 pages, 2010.
[18] R. C. Arkin, Behavior-Based Robotics, The MIT Press, Cambridge, Mass, USA, 1998.
[19] M. Mayer, B. Odenthal, and M. Grandt, “Task-oriented process planning for cognitive production systems using MTM,” in Proceedings of the 2nd International Conference on Applied Human Factors and Ergonomics, USA Pub, 2008.