%0 Journal Article %T Neural Behavior Chain Learning of Mobile Robot Actions %A Lejla Banjanovic-Mehmedovic %A Dzenisan Golic %A Fahrudin Mehmedovic %A Jasna Havic %J Applied Computational Intelligence and Soft Computing %D 2012 %I Hindawi Publishing Corporation %R 10.1155/2012/382782 %X This paper presents a visual/motor behavior learning approach based on neural networks. We propose the Behavior Chain Model (BCM) as a framework for behavior learning. The task of our behavior-based system is a mobile robot detecting a target and driving/acting toward it. First, the mapping relations between the image feature domain of the object and the robot action domain are derived. Second, a multilayer neural network is used for offline learning of these mapping relations. Through the network training process, this learning structure forms a connection between visual perceptions and the motor sequence of actions needed to grip a target. Last, using behavior learning through an observed action chain, we can predict mobile robot behavior for a variety of similar tasks in similar environments. Prediction results suggest that the methodology is adequate and could serve as a basis for designing various kinds of mobile robot behavior assistance. 1. Introduction. Robotics research covers a wide range of application scenarios, from industrial or service robots to robotic assistance for disabled or elderly people. Robots in industry, mining, agriculture, space exploration, and the health sciences are just a few examples of challenging applications where human attributes such as cognition, perception, and intelligence can play an important role. Inducing perception, cognition, and hence intelligence in robotic machines is the main aim in constructing a robot able to "think" and operate under uncertain and unstructured conditions.
To successfully realize instruction capability (e.g., object manipulation, haptically guided teleoperation, robotic surgery manipulation), a robot must extract the relevant input/output control signals from the manipulation task in order to learn the control sequences necessary for task execution [1]. The concept of visual-motor mapping, which describes the relationship between visually perceived features and the motor signals necessary to act, is very popular in robotics [2]. Many visual-motor mappings are defined between cameras and a robot. Since the large variation of visual inputs makes it nearly impossible to represent the sequence of actions explicitly, such knowledge must be obtained from a set of examples using machine learning techniques [3]. A robot fulfils its purposes using its learning and prediction skills. A predictive strategy in robotics may be implemented in the following ways [4, 5]. (i) Model-based reinforcement learning. The environment model is learnt, in
%U http://www.hindawi.com/journals/acisc/2012/382782/
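The offline visual-motor mapping described in the abstract — a multilayer neural network trained on example pairs of image features and motor commands — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the choice of three image features (horizontal offset, vertical offset, apparent size of the target), two outputs (left/right wheel speeds), the hidden-layer size, and the synthetic training data are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training pairs standing in for recorded robot runs:
# each input is (horizontal offset, vertical offset, apparent size)
# of the target in the image; each output is (left, right) wheel speed.
X = rng.uniform(-1.0, 1.0, size=(200, 3))
Y = np.stack([0.5 - 0.4 * X[:, 0],        # steer left wheel by offset
              0.5 + 0.4 * X[:, 0]],       # steer right wheel by offset
             axis=1)

# One hidden layer of tanh units, linear output layer.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

# Plain batch gradient descent on mean-squared error (offline learning).
lr = 0.05
for _ in range(500):
    h, out = forward(X)
    err = out - Y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[1] - Y) ** 2))
```

After training, `forward` maps a newly perceived feature vector to a motor command, which is the sense in which the learned structure connects visual perception to action.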