OALib
Search query: "" — about 100 matching results found.
All articles listed are freely available.
Page 1 (100 results in total)
Robust Object Tracking under Appearance Change Conditions

Qi-Cong Wang, Yuan-Hao Gong, Chen-Hui Yang, Cui-Hua Li

International Journal of Automation and Computing, 2010
Abstract: We propose a robust visual tracking framework based on particle filtering to deal with object appearance changes caused by varying illumination, pose variations, and occlusion. We mainly improve the observation model and the resampling process of the particle filter. We combine an online-updated appearance model, affine transformation, and M-estimation to construct an adaptive observation model: the online-updated appearance model partially adapts to illumination changes, affine-transformation-based similarity measurement handles pose variations, and M-estimation handles occluded objects when computing the observation likelihood. To take advantage of the most recent observation and produce a suboptimal Gaussian proposal distribution, we incorporate a Kalman filter into the particle filter to improve the resampling process. To estimate the posterior probability density properly at low computational cost, we employ only a single Kalman filter to propagate the Gaussian distribution. Experimental results on recorded video sequences demonstrate the effectiveness and robustness of the proposed algorithm.
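The Kalman-proposal idea in this abstract can be illustrated with a minimal 1D sketch. Everything below (the toy state layout, noise values, and function names) is an assumption for illustration, not taken from the paper: a single Kalman filter propagates a Gaussian proposal near the newest observation, particles are drawn from it, and weights come from the observation likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def kalman_step(mu, P, z, F, H, Q, R):
    # Predict with the motion model, then update with the newest observation z.
    mu_p = F @ mu
    P_p = F @ P @ F.T + Q
    S = H @ P_p @ H.T + R
    K = P_p @ H.T @ np.linalg.inv(S)
    return mu_p + K @ (z - H @ mu_p), (np.eye(len(mu)) - K @ H) @ P_p

def track(observations, n_particles=200):
    # 1D toy state [position, velocity] with a constant-velocity motion model.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = 0.05 * np.eye(2)
    R = np.array([[0.5]])
    mu, P = np.zeros(2), np.eye(2)
    estimates = []
    for z in observations:
        z = np.atleast_1d(float(z))
        # A single Kalman filter propagates the Gaussian proposal, so all
        # particles are drawn near the most recent observation.
        mu, P = kalman_step(mu, P, z, F, H, Q, R)
        particles = rng.multivariate_normal(mu, P, size=n_particles)
        # Weight particles by the observation likelihood of their position.
        w = np.exp(-0.5 * (particles[:, 0] - z[0]) ** 2 / R[0, 0])
        w /= w.sum()
        estimates.append(float(w @ particles[:, 0]))
    return estimates
```

In a full tracker the resampled particle set would seed the next frame; here re-drawing from the fresh Gaussian proposal each frame plays that role, which is the simplification that keeps the sketch short.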
Statistical Estimation and Adaptation for Visual Compensation in Object Tracking  [PDF]
Sangkil Jung, Jinseok Lee, Sangjin Hong
International Journal of Distributed Sensor Networks, 2009, DOI: 10.1080/15501320802581524
Abstract: The multi-modal tracking model in [1] enables on-the-fly error compensation at low complexity by using acoustic sensors for the main tracking task and visual sensors to correct possible tracking errors. The visual compensation process in this model is indispensable for accurate tracking under dynamic object movement.
Visual Learning in Multiple-Object Tracking  [PDF]
Tal Makovski, Gustavo A. Vázquez, Yuhong V. Jiang
PLOS ONE , 2008, DOI: 10.1371/journal.pone.0002228
Abstract: Background Tracking moving objects in space is important for the maintenance of spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested using the Multiple Object Tracking (MOT) task, where participants track a subset of moving objects with attention over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves in a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning. Methodology/Principal Findings Participants first conducted attentive tracking on trials with repeated motion trajectories for a short session. In a transfer phase we used the same motion trajectories but changed the role of tracking targets and nontargets. We found that, compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed up. However, learning was not specific to the trained temporal order, as it transferred to trials where the motion was played backwards. Conclusions/Significance These findings suggest that a demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although learning is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories.
Surprisingly, learning was not specific to the trained temporal context, probably because observers learned the motion path of each trajectory independently of the exact temporal order.
Acoustic Sensor-Based Multiple Object Tracking with Visual Information Association  [cached]
Jinseok Lee, Sangjin Hong, Nammee Moon, Seong-Jun Oh
EURASIP Journal on Advances in Signal Processing, 2010
Abstract: Object tracking by an acoustic sensor based on particle filtering is extended to the tracking of multiple objects. To overcome the acoustic sensor's inherent limitations in simultaneous multiple-object tracking, support from a visual sensor is considered. Cooperation from the visual sensor should be minimized, however, because its operation requires much higher computational resources than acoustic-sensor-based estimation, especially when the visual sensor is not dedicated to object tracking but deployed for other applications. The acoustic sensor therefore performs the main tracking of multiple objects, and the visual sensor supports the task only when the acoustic sensor runs into difficulty. Several particle-filtering techniques for multiple-object tracking by the acoustic sensor are presented, and the acoustic sensor's limitations are discussed to identify the need for visual sensor cooperation. The performance of triggering-based cooperation with the two visual sensors is evaluated in a real environment and compared with periodic cooperation.
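The triggering-based cooperation can be sketched in a few lines. The proximity trigger and all names below are illustrative assumptions (the paper's actual trigger derives from the acoustic sensor's limitations); the point is that the costly visual sensor is queried only on demand.

```python
def needs_visual_support(acoustic_positions, min_separation=1.0):
    # An acoustic sensor cannot keep object identities apart once the
    # estimated positions get too close; that is the trigger condition here.
    pts = sorted(acoustic_positions)
    return any(b - a < min_separation for a, b in zip(pts, pts[1:]))

def track_step(acoustic_positions, query_visual_sensor):
    """Triggering-based cooperation: the acoustic sensor does the main
    tracking; the visual sensor is invoked only when identities blur."""
    if needs_visual_support(acoustic_positions):
        return query_visual_sensor(), True   # corrected positions, camera used
    return acoustic_positions, False
```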
Visual object tracking performance measures revisited  [PDF]
Luka Čehovin, Aleš Leonardis, Matej Kristan
Computer Science, 2015
Abstract: Visual tracking evaluation currently uses a large variety of performance measures and suffers from a lack of consensus about which measures should be used in experiments, which makes cross-paper tracker comparison difficult. Furthermore, since some measures are less effective than others, tracking results may be skewed or biased toward particular tracking aspects. In this paper we revisit the popular performance measures and tracker performance visualizations and analyze them theoretically and experimentally. We show that several measures are equivalent in the information they provide for tracker comparison and, crucially, that some are more brittle than others. Based on our analysis we narrow the set of potential measures down to two complementary ones, describing accuracy and robustness, thus pushing toward homogenization of tracker evaluation methodology. These two measures can be intuitively interpreted and visualized and have been adopted by the recent Visual Object Tracking (VOT) challenges as the foundation of their evaluation methodology.
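The two retained measures map naturally onto a few lines of code. A minimal sketch, assuming axis-aligned (x, y, w, h) boxes and treating zero ground-truth overlap as a failure event; the actual VOT protocol is more involved (it re-initializes the tracker after each failure):

```python
def iou(a, b):
    # Intersection-over-union overlap of two (x, y, w, h) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def accuracy_robustness(pred_boxes, gt_boxes):
    """Accuracy: mean overlap on frames where the tracker keeps the target.
    Robustness proxy: number of frames where the overlap drops to zero."""
    overlaps = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    failures = sum(1 for o in overlaps if o == 0.0)
    valid = [o for o in overlaps if o > 0.0]
    accuracy = sum(valid) / len(valid) if valid else 0.0
    return accuracy, failures
```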
Survey on Visual Tracking Algorithms Based on Mean Shift
基于Mean Shift的视觉目标跟踪算法综述

顾幸方,茅耀斌,李秋洁
Computer Science (计算机科学), 2012
Abstract: Mean-shift-based visual tracking algorithms have several desirable properties, such as computational efficiency, few tuning parameters, relatively robust performance, and straightforward implementation, which make them an appealing topic in visual tracking research. This survey first introduces the original mean-shift tracking algorithm and points out its shortcomings. Improvements are then discussed from five aspects: generative versus discriminative object appearance models, model update mechanisms, scale and orientation adaptation, occlusion handling, and tracking of fast-moving objects. Both classical algorithms and recent advances are covered in each aspect. Finally, prospects for mean-shift-based tracking are presented.
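The core mean-shift iteration the survey discusses can be sketched as follows, assuming a precomputed back-projection weight image (histogram back-projection, kernel weighting, and the scale/orientation adaptation discussed above are omitted):

```python
import numpy as np

def mean_shift(weights, window, n_iter=20):
    """One mean-shift search: shift a fixed-size window (x, y, w, h) toward
    the centroid of the back-projection weights until it stops moving."""
    x, y, w, h = window
    for _ in range(n_iter):
        roi = weights[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break  # no target evidence under the window
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        cx = int(round(float((xs * roi).sum() / total)))  # centroid in window coords
        cy = int(round(float((ys * roi).sum() / total)))
        nx, ny = max(x + cx - w // 2, 0), max(y + cy - h // 2, 0)
        if (nx, ny) == (x, y):   # converged: centroid sits at window center
            break
        x, y = nx, ny
    return x, y, w, h
```

OpenCV's `cv2.meanShift` implements the same iteration on a back-projection produced by `cv2.calcBackProject`; the sketch above only shows the window-shifting core.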
A Survey on Moving Object Detection and Tracking in Video Surveillance System  [PDF]
Kinjal A. Joshi, Darshak G. Thakore
International Journal of Soft Computing & Engineering, 2012
Abstract: This paper presents a survey of techniques for video surveillance systems aimed at improving security, reviewing methods for detecting moving objects in video and then tracking the detected objects through the scene. Moving-object detection is the first low-level task in any video surveillance application and is challenging in itself; tracking is required by higher-level applications that need the location and shape of each object in every frame. The survey describes background subtraction with a running-average (alpha) model, statistical methods, eigen-background subtraction, and temporal frame differencing for detecting moving objects, as well as tracking methods based on point tracking, kernel tracking, and silhouette tracking.
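Two of the detection methods mentioned, temporal frame differencing and running-average ("alpha") background subtraction, can be sketched as follows; the threshold and alpha values are illustrative, not from the survey:

```python
import numpy as np

def frame_difference_mask(prev_frame, cur_frame, threshold=25):
    """Temporal frame differencing: mark pixels whose grayscale intensity
    changed by more than `threshold` between two consecutive frames."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def running_average_background(frames, alpha=0.05, threshold=25):
    """Background subtraction with an exponential running average: the
    background slowly absorbs scene changes at rate `alpha`."""
    bg = frames[0].astype(np.float64)
    masks = []
    for f in frames[1:]:
        f = f.astype(np.float64)
        masks.append((np.abs(f - bg) > threshold).astype(np.uint8))
        bg = (1 - alpha) * bg + alpha * f   # update the background model
    return masks
```

Frame differencing only flags pixels that change between adjacent frames, so slow or briefly stationary objects vanish; the running-average model keeps them foreground until the background adapts, which is why surveys treat the two as complementary.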
A Visual Attention Model for Robot Object Tracking

Jin-Kui Chu, Rong-Hua Li, Qing-Ying Li, Hong-Qing Wang

International Journal of Automation and Computing, 2010
Abstract: Inspired by human behavior, a robot object tracking model is proposed on the basis of the visual attention mechanism and is consistent with the theory of topological perception. The model integrates image-driven, bottom-up attention with object-driven, top-down attention, whereas previous attention models have mostly focused on one or the other. The bottom-up component segments the whole scene into a ground region and salient regions. Guided by a top-down strategy realized as a topological graph, the object regions are separated from the salient regions; the remaining salient regions are treated as barrier regions. To evaluate the model, a mobile robot platform was developed, on which several experiments were carried out. The results indicate that processing an image with a resolution of 752×480 pixels takes less than 200 ms and that the object regions are extracted intact. A comparison with an existing model demonstrates that the proposed model has advantages in speed and efficiency for robot object tracking.
Robust Real-Time 3D Object Tracking with Interfering Background Visual Projections  [cached]
Huan Jin, Gang Qian
EURASIP Journal on Image and Video Processing, 2008, DOI: 10.1155/2008/638073
Abstract: This paper presents a robust real-time object tracking system for human-computer interaction in mediated environments with interfering visual projections in the background. Two major contributions are made to achieve robust object tracking. First, a reliable outlier rejection algorithm is developed using the epipolar and homography constraints to remove false candidates caused by interfering background projections and mismatches between cameras. Second, to reliably integrate multiple estimates of the 3D object positions, an efficient fusion algorithm based on mean shift is used; this fusion algorithm can also reduce tracking errors caused by partial occlusion of the object in some camera views. Experimental results obtained in real-life scenarios demonstrate that the proposed system achieves decent 3D object tracking performance in the presence of interfering background visual projections.
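The epipolar outlier-rejection step can be sketched with a Sampson-style residual under a known fundamental matrix F. The matrix, tolerance, and point format below are illustrative assumptions; candidates whose residual exceeds the tolerance (for example, projected background blobs matching no real 3D point) are discarded:

```python
import numpy as np

def epipolar_residual(F, x1, x2):
    """Sampson residual of a candidate correspondence: x1 = (x, y) in the
    first camera, x2 = (x, y) in the second, under fundamental matrix F."""
    p1 = np.array([x1[0], x1[1], 1.0])
    p2 = np.array([x2[0], x2[1], 1.0])
    Fp1 = F @ p1          # epipolar line of x1 in image 2
    Ftp2 = F.T @ p2       # epipolar line of x2 in image 1
    num = float(p2 @ F @ p1) ** 2
    den = Fp1[0] ** 2 + Fp1[1] ** 2 + Ftp2[0] ** 2 + Ftp2[1] ** 2
    return num / den if den else 0.0

def reject_outliers(F, matches, tol=1.0):
    # Keep only candidate pairs consistent with the epipolar constraint.
    return [(a, b) for a, b in matches if epipolar_residual(F, a, b) < tol]
```

For a rectified horizontal stereo pair, F reduces to the form used in the test below, and the constraint simply says that true correspondences must lie on the same image row.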
Copyright © 2008-2017 Open Access Library. All rights reserved.