Search Results: 1 - 10 of 5471 matches for "active vision"
All listed articles are free for downloading (OA Articles)
Recovering stimulus locations using populations of eye-position modulated neurons in dorsal and ventral visual streams of non-human primates
Anne B. Sereno, Margaret E. Sereno, Sidney R. Lehky
Frontiers in Integrative Neuroscience , 2014, DOI: 10.3389/fnint.2014.00028
Abstract: We recorded visual responses while monkeys fixated the same target at different gaze angles, both dorsally (lateral intraparietal cortex, LIP) and ventrally (anterior inferotemporal cortex, AIT). While eye-position modulations occurred in both areas, they were both more frequent and stronger in LIP neurons. We used an intrinsic population decoding technique, multidimensional scaling (MDS), to recover eye positions, equivalent to recovering fixated target locations. We report that eye-position-based visual space in LIP was more accurate (i.e., metric). Nevertheless, the AIT spatial representation remained largely topologically correct, perhaps indicative of a categorical spatial representation (i.e., a qualitative description such as “left of” or “above” as opposed to a quantitative, metrically precise description). Additionally, we developed a simple neural model of eye position signals and illustrate that differences in single-cell characteristics can influence the ability to recover target position in a population of cells. We demonstrate for the first time that the ventral stream contains sufficient information for constructing an eye-position-based spatial representation. Furthermore, we demonstrate, in dorsal and ventral streams as well as modeling, that target locations can be extracted directly from eye position signals in cortical visual responses without computing coordinate transforms of visual space.
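The intrinsic population-decoding step can be illustrated with a small simulation. The sketch below is a rough approximation, not the paper's fitted model: it assumes a planar gain-field form of eye-position modulation and applies classical MDS to pairwise distances between population response vectors, recovering the fixated target locations up to rotation and scale.

```python
# A minimal sketch of intrinsic population decoding via classical MDS.
# The planar gain-field model (baseline + linear eye-position gain) is an
# assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 100
baseline = rng.uniform(5.0, 20.0, n_neurons)
gain = rng.normal(0.0, 0.5, (n_neurons, 2))   # sensitivity to (x, y) eye position

# Eye positions on a 5x5 grid of gaze angles (degrees).
grid = np.linspace(-20, 20, 5)
eye_positions = np.array([(x, y) for x in grid for y in grid])

# Population response matrix: one response vector per eye position.
responses = baseline + eye_positions @ gain.T
responses += rng.normal(0.0, 0.5, responses.shape)   # trial noise

# Classical MDS: double-center the squared-distance matrix and eigendecompose.
d2 = np.square(np.linalg.norm(responses[:, None] - responses[None, :], axis=-1))
n = d2.shape[0]
j = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * j @ d2 @ j
eigval, eigvec = np.linalg.eigh(b)
order = np.argsort(eigval)[::-1][:2]          # keep the top two dimensions
recovered = eigvec[:, order] * np.sqrt(eigval[order])

# The recovered configuration matches the true grid up to rotation and scale;
# a Procrustes fit would quantify the metric accuracy discussed above.
print(recovered.round(1))
```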
Coding of saliency by ensemble bursting in the amygdala of primates
S. L. Gonzalez Andino
Frontiers in Behavioral Neuroscience , 2012, DOI: 10.3389/fnbeh.2012.00038
Abstract: Salient parts of a visual scene attract longer and earlier fixations of the eyes. Saliency is driven by bottom-up (image-dependent) factors and top-down factors such as behavioral relevance, goals, and expertise. It is currently assumed that a saliency map defining eye fixation priorities is stored in neural structures that remain to be determined. Lesion studies support a role for the amygdala in detecting saliency. Here we show that neurons in the amygdala of primates fire differentially when the eyes approach or fixate behaviorally relevant parts of visual scenes. Ensemble bursting in the amygdala accurately predicts the main fixations during free viewing of natural images. However, fixation prediction is significantly better for faces, where a bottom-up computational saliency model fails, than for unfamiliar objects and landscapes. On this basis we propose the amygdala as a locus for a saliency map and ensemble bursting as a saliency coding mechanism.
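For reference, the kind of bottom-up computational saliency model the abstract contrasts with amygdala responses can be sketched in a few lines. The following is a much-reduced, Itti-Koch-style illustration, assuming a single intensity channel and hand-picked center/surround scales; it is not the specific model used in the paper.

```python
# Center-surround intensity contrast as a crude bottom-up saliency map.
import cv2
import numpy as np

def intensity_saliency(bgr: np.ndarray) -> np.ndarray:
    """Return a [0, 1] saliency map from center-surround intensity contrast."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    saliency = np.zeros_like(gray)
    # "Center" and "surround" are approximated by Gaussian blurs whose
    # kernel sizes differ by a factor of ~4 at each scale (assumed values).
    for center, surround in [(3, 11), (7, 27), (15, 59)]:
        c = cv2.GaussianBlur(gray, (center, center), 0)
        s = cv2.GaussianBlur(gray, (surround, surround), 0)
        saliency += np.abs(c - s)
    saliency -= saliency.min()
    return saliency / (saliency.max() + 1e-9)

# Fixation priorities would be read off the map's peaks, which is exactly
# where the paper reports such models failing for faces.
```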
Saccadic Motion Control for Monocular Fixation in a Robotic Vision Head: A Comparative Study
Jacques Waldmann, Edvaldo Marques Bispo
Journal of the Brazilian Computer Society , 1998, DOI: 10.1590/S0104-65001998000100008
Abstract: A comparative evaluation of two methods for visual tracking by saccade control of an active vision head with anthropomorphic characteristics, conducted at the ITA/INPE active computer vision and perception laboratory, is presented. The first method accomplishes fixation by detecting motion and controlling gaze direction based on gray-level segmentation. The second method aligns images of different viewpoints in order to apply static-camera motion detection. Morphological opening is then employed to compensate for image alignment errors. Results from experiments in a controlled environment show that both approaches are capable of dealing with non-rigid forms and scenes with limited dynamics by operating at about 1 Hz. However, the comparative evaluation shows that image alignment improves tracking robustness to variations in lighting conditions and background texture. The results obtained so far encourage further applications in autonomous robotics and vision-aided robotic rotorcraft navigation.
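A minimal sketch of the second method's pipeline as described above: align the previous frame to the current one, difference the aligned frames as if the camera were static, and apply morphological opening to suppress residual alignment errors. The threshold and kernel size are assumptions, and OpenCV's ECC alignment stands in for whatever alignment procedure the original system used.

```python
# Image alignment + frame differencing + morphological opening.
import cv2
import numpy as np

def detect_motion(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Return a binary mask of moving regions between two grayscale frames."""
    # Estimate a Euclidean warp aligning the previous frame to the current one.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
    _, warp = cv2.findTransformECC(curr_gray, prev_gray, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    aligned = cv2.warpAffine(prev_gray, warp, curr_gray.shape[::-1],
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    # With the frames aligned, simple differencing approximates a static camera.
    diff = cv2.absdiff(curr_gray, aligned)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Morphological opening removes thin artifacts left by imperfect alignment.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```

A saccade command would then re-center the gaze on the centroid of the resulting mask.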
Measurements using three-dimensional product imaging
A. Sioma
Archives of Foundry Engineering , 2010,
Abstract: This article discusses a method of creating a three-dimensional cast model using vision systems and how that model can be used in the quality assessment process carried out directly on the assembly line. The technology of active vision, consisting in illuminating the object with a laser beam, was used to create the model. Appropriate configuration of the camera position geometry and laser light allows the collection of height profiles and the construction of a 3D model of the product on their basis. The article discusses problems connected with the resolution of the vision system, the resolution of the laser beam analysis, and the resolution connected with the application of successive height profiles on sample cast planes. On the basis of the model, measurements allowing assessment of dimension parameters and surface defects of a given cast are presented. On the basis of tests and analyses of such a three-dimensional cast model, a range of checks which are possible to conduct using 3D vision systems is indicated. Testing casts using this technology allows rapid assessment of selected parameters. Construction of the product’s model and dimensional assessment take a few seconds, which significantly reduces the duration of checks in the technological process. Depending on the product, several checks may be carried out simultaneously on the product’s model. The possibility of controlling all outgoing products, and of creating and modifying the product parameter control program, makes the solution highly flexible, which is confirmed by pilot industrial implementations. The technology will be developed further in terms of detection and identification of surface defects. This is important due to the possibility of using such information to select technological process parameters and to observe the effect of changes in selected parameters on the cast parameter controlled in a vision system.
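The profile-collection step can be sketched as follows. Assuming a calibrated sheet-of-light setup, each column of a laser-line image yields one height sample: the line's sub-pixel row is found by an intensity-weighted centroid, and the offset from the zero-height reference row is scaled to millimeters. The calibration constants below are hypothetical.

```python
# Sheet-of-light laser triangulation: one height profile per camera frame.
import numpy as np

MM_PER_PIXEL = 0.08    # assumed triangulation scale from calibration
REFERENCE_ROW = 400.0  # assumed laser-line row for the zero-height plane

def height_profile(gray: np.ndarray) -> np.ndarray:
    """Extract one height profile (mm) from a single laser-line image."""
    rows = np.arange(gray.shape[0], dtype=np.float64)[:, None]
    weights = gray.astype(np.float64)
    # Sub-pixel peak per column via the intensity-weighted centroid of the
    # line; columns without laser signal should be masked in practice.
    line_row = (rows * weights).sum(axis=0) / (weights.sum(axis=0) + 1e-9)
    return (REFERENCE_ROW - line_row) * MM_PER_PIXEL

# Stacking profiles captured while the cast moves under the laser yields the
# 3D model; the profile spacing sets the resolution along the motion axis.
```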
A General Cognitive System Architecture Based on Dynamic Vision for Motion Control
Ernst D. Dickmanns
Journal of Systemics, Cybernetics and Informatics , 2003,
Abstract: Animation of spatio-temporal generic models for 3-D shape and motion of objects and subjects, based on feature sets evaluated in parallel from several image streams, is considered to be the core of dynamic vision. Subjects are a special kind of object capable of sensing environmental parameters and of initiating their own actions in combination with stored knowledge. Object/subject recognition and scene understanding are achieved on different levels and scales. Multiple objects are tracked individually in the image streams for perceiving their actual state ('here and now'). By analyzing the motion of all relevant objects/subjects over a larger time scale on the level of state variables in the 'scene tree representation' known from computer graphics, the situation with respect to decision making is assessed. Behavioral capabilities of subjects are represented explicitly on an abstract level for characterizing their potential behaviors. These are generated by stereotypical feed-forward and feedback control applications on a separate systems-dynamics level with corresponding methods close to the actuator hardware. This dual representation on an abstract level (for decision making) and on the implementation level allows for flexibility and easy adaptation or extension. Results are shown for road vehicle guidance based on three cameras on a gaze control platform.
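The 'scene tree representation' mentioned above can be illustrated with a minimal sketch: each node stores a homogeneous transform relative to its parent, and an object's world pose is obtained by composing transforms along the path to the root. The node structure below is an illustrative assumption, not the system's actual code.

```python
# A toy scene tree with homogeneous transforms.
import numpy as np

class SceneNode:
    def __init__(self, name: str, local: np.ndarray, parent: "SceneNode" = None):
        self.name = name
        self.local = local          # 4x4 transform relative to the parent
        self.parent = parent

    def world_transform(self) -> np.ndarray:
        if self.parent is None:
            return self.local
        return self.parent.world_transform() @ self.local

def translation(x: float, y: float, z: float) -> np.ndarray:
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

# Example: a camera on a gaze platform on the ego vehicle; any object tracked
# relative to the camera gets a world pose by composing up the tree.
world = SceneNode("world", np.eye(4))
ego = SceneNode("ego_vehicle", translation(100.0, 0.0, 0.0), world)
camera = SceneNode("camera", translation(1.5, 0.0, 1.2), ego)
print(camera.world_transform()[:3, 3])   # camera position in world coordinates
```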
Vector Disparity Sensor with Vergence Control for Active Vision Systems
Francisco Barranco, Javier Diaz, Agostino Gibaldi, Silvio P. Sabatini, Eduardo Ros
Sensors , 2012, DOI: 10.3390/s120201771
Abstract: This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system.
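Because disparity becomes a 2-D vector field once the cameras verge, a dense optical-flow estimator applied between the left and right images recovers it. The sketch below uses OpenCV's multiscale Farneback flow as a software stand-in for the paper's FPGA gradient-based engine; all parameter values are assumptions.

```python
# Vector disparity as dense 2-D flow between a verged stereo pair.
import cv2
import numpy as np

def vector_disparity(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Return an HxWx2 array of (dx, dy) disparities from left to right."""
    return cv2.calcOpticalFlowFarneback(
        left_gray, right_gray, None,
        pyr_scale=0.5, levels=4, winsize=21,   # multiscale, coarse-to-fine
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)

# For vergence control, the mean disparity vector in a small window around the
# image center indicates how to rotate the cameras to null the disparity at
# the fixation point.
```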
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Julio Vega, Eduardo Perdices, José M. Cañas
Sensors , 2013, DOI: 10.3390/s130101268
Abstract: Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are extracting useful information from captured images and managing the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, together with an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is also useful in localization tasks, as it provides more information about the robot’s surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people’s homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios.
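The time-sharing behavior of the attention module can be sketched as a simple scoring rule: each memorized object accumulates "need to reobserve" as time since its last observation grows, while exploration candidates compete with a constant urgency. The weights below are illustrative assumptions, not the paper's tuned values.

```python
# Attention scoring: reobservation need vs. exploration urgency.
import time
from dataclasses import dataclass, field

@dataclass
class MemoryObject:
    name: str
    pan_tilt: tuple            # gaze direction that reobserves the object
    last_seen: float = field(default_factory=time.time)

def next_gaze_target(objects: list, explore_targets: list,
                     reobserve_gain: float = 1.0, explore_urgency: float = 5.0):
    """Pick the pan/tilt with the highest attention score."""
    now = time.time()
    candidates = [(reobserve_gain * (now - o.last_seen), o.pan_tilt)
                  for o in objects]
    # Unexplored directions carry a constant urgency, so they win whenever
    # every memory object has been refreshed recently.
    candidates += [(explore_urgency, pt) for pt in explore_targets]
    return max(candidates)[1]
```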
Visual Sensor Technology for Advanced Surveillance Systems: Historical View, Technological Aspects and Research Activities in Italy
Gian Luca Foresti, Christian Micheloni, Claudio Piciarelli, Lauro Snidaro
Sensors , 2009, DOI: 10.3390/s90402252
Abstract: The paper is a survey of the main technological aspects of advanced visual-based surveillance systems. A brief historical view of such systems, from their origins to the present day, is given, together with a short description of the main research projects in Italy on surveillance applications over the last twenty years. The paper then describes the main characteristics of an advanced visual sensor network that (a) directly processes locally acquired digital data, (b) automatically modifies intrinsic (focus, iris) and extrinsic (pan, tilt, zoom) parameters to increase the quality of acquired data, and (c) automatically selects the best subset of sensors in order to monitor a given moving object in the observed environment.
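Point (c), selecting the best subset of sensors for a moving object, can be illustrated with a greedy sketch: score each camera by whether the object lies in its field of view and by its distance, then keep the top k. The planar geometry and the scoring rule are assumptions for illustration.

```python
# Greedy sensor-subset selection for monitoring one tracked object.
import math

def camera_score(cam_pos, cam_heading_deg, fov_deg, obj_pos):
    """Score a camera for observing obj_pos; None if outside its field of view."""
    dx, dy = obj_pos[0] - cam_pos[0], obj_pos[1] - cam_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    off_axis = abs((bearing - cam_heading_deg + 180) % 360 - 180)
    if off_axis > fov_deg / 2:
        return None
    return 1.0 / (1.0 + math.hypot(dx, dy))   # closer cameras score higher

def best_sensors(cameras, obj_pos, k=2):
    """Return the k best cameras as (id, score) pairs."""
    scored = [(cid, camera_score(pos, heading, fov, obj_pos))
              for cid, (pos, heading, fov) in cameras.items()]
    scored = [(cid, s) for cid, s in scored if s is not None]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Example: two cameras at known positions/headings (meters, degrees).
cams = {"cam1": ((0.0, 0.0), 45.0, 60.0), "cam2": ((10.0, 0.0), 135.0, 60.0)}
print(best_sensors(cams, obj_pos=(5.0, 5.0)))
```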
Local robot navigation based on an active visual short-term memory
Julio Vega
Journal of Physical Agents , 2012,
Abstract: Vision devices are today among the most frequently used sensory elements in autonomous robots. Their main difficulties are extracting useful information from the captured images and coping with the small visual field of regular cameras. Visual attention systems and active vision may help to overcome them. This work proposes a dynamic visual memory to store the information gathered from a continuously moving camera onboard the robot, and an attention system to choose where to look with such a mobile camera. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the instantaneous field of view of the camera. The attention system takes into account the need to reobserve objects in the visual memory, to explore new areas, and to test cognitive hypotheses about object existence in the robot’s surroundings. The system has been programmed and validated on a real Pioneer robot that uses the information in the visual memory for navigation tasks.
Background Subtraction Based on Color and Depth Using Active Sensors
Enrique J. Fernandez-Sanchez, Javier Diaz, Eduardo Ros
Sensors , 2013, DOI: 10.3390/s130708895
Abstract: Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage.
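One way to realize the color/depth fusion described above can be sketched under assumptions (this is not the paper's specific fusion rule): run a standard color background subtractor, compare depth against a depth background model, and let valid depth override color, since depth is immune to shadows and camouflage while color covers pixels where the range sensor fails.

```python
# Fusing color and depth cues for background subtraction.
import cv2
import numpy as np

color_bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def fused_foreground(bgr, depth_mm, depth_background_mm, depth_thresh=60):
    """Return a binary foreground mask fusing color and depth evidence."""
    color_mask = color_bg.apply(bgr)
    color_fg = (color_mask == 255)            # MOG2 marks shadows as 127

    valid = depth_mm > 0                      # Kinect reports 0 where depth fails
    depth_fg = valid & (np.abs(depth_mm.astype(np.int32)
                               - depth_background_mm.astype(np.int32))
                        > depth_thresh)

    # Where depth is valid it decides; elsewhere fall back to the color cue.
    fused = np.where(valid, depth_fg, color_fg)
    return fused.astype(np.uint8) * 255
```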