OALib Journal
ISSN: 2333-9721

Visual Object Tracking via Online Composite Template Model Representation

DOI: 10.11834/jig.20150907

Keywords: online learning, composite template, model representation, visual object tracking

Abstract:

Objective: In visual object tracking, the target is often affected by various complex interference factors arising from the target itself or the scene, which poses a great challenge to correctly capturing the information of the target of interest. In particular, the template data used by a tracker are mainly obtained through online learning, and their reliability directly affects the accuracy of the appearance-model representation of candidate samples. To address the problems of target template learning and candidate-sample appearance-model representation in visual object tracking, this paper adopts an effective template organization strategy together with a more accurate model representation technique and proposes a novel visual object tracking algorithm.

Method: Within the tracking framework, the appearance-model representation of a candidate sample is formulated as a linear regression problem composed of a set of composite templates and a minimal reconstruction error. First, classical incremental principal component analysis (PCA) is used to learn a set of low-dimensional subspace basis vectors (positive templates) from high-dimensional online data, and special negative samples are drawn online in real time according to the tracking result of the previous frame to augment the target template data. The newly organized template basis vectors, together with independent and identically distributed Gaussian-Laplacian mixture noise, are then used to linearly fit the candidate target appearance model. Finally, the maximum likelihood between each candidate sample and the true target is estimated, enabling the tracker to accurately capture the true target state at every moment.

Results: Experiments on several widely used benchmark video sequences show that, in terms of target template learning and candidate-sample appearance-model representation, the proposed algorithm reflects the various complex changes of the target state in video scenes more accurately and effectively than comparable methods, and better resolves model degradation and tracking drift under various uncertain interference factors. Compared with several strong algorithms of the same kind, it achieves equal or higher tracking accuracy.

Conclusion: The proposed algorithm learns accurate target templates online and updates them periodically, allowing the tracker to adapt well to visual-information changes caused by intrinsic or extrinsic factors (pose, illumination, occlusion, scale, background clutter, motion blur, etc.) and to remain in its best state. The appearance-model representation of candidate samples thus becomes more reliable and accurate, yielding more robust performance.
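The linear fitting step described in the abstract — representing a candidate as a combination of learned template basis vectors plus Gaussian (dense, small) and Laplacian (sparse, outlier) noise — can be sketched roughly as below. This is a minimal illustration, not the paper's exact formulation: the function name `represent_candidate`, the fixed penalty `lam`, and the alternating soft-thresholding solver are assumptions for the sake of a self-contained example.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def represent_candidate(y, U, lam=0.1, n_iter=30):
    """Fit y ~ U z + s + n, where U is an orthonormal template basis,
    n is small Gaussian noise and s models sparse Laplacian outliers
    (e.g. occluded pixels). Solved by alternating minimization of
    0.5 * ||y - U z - s||^2 + lam * ||s||_1.
    Returns the subspace coefficients z, the outlier term s, and an
    observation likelihood (up to a normalizing constant)."""
    s = np.zeros_like(y)
    for _ in range(n_iter):
        z = U.T @ (y - s)                    # least-squares coefficients
        s = soft_threshold(y - U @ z, lam)   # sparse outlier update
    resid = y - U @ z - s
    likelihood = np.exp(-np.sum(resid ** 2))
    return z, s, likelihood
```

In a particle-filter style tracker, this likelihood would be evaluated for every candidate window, and the candidate with the maximum likelihood taken as the tracking result for the frame; the basis `U` would then be refreshed periodically by incremental PCA on the collected results.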

