
Human Action Recognition Based on Mid-Level Spatio-Temporal Features

DOI: 10.11834/jig.20150408

Keywords: action recognition, spatio-temporal interest points, mid-level spatio-temporal features, pointwise mutual information


Abstract:

Objective: Human action recognition is an important research topic in computer vision with broad application prospects. To address the limitations of local and global spatio-temporal features in action recognition, this paper proposes a novel and effective mid-level spatio-temporal feature for human actions. Method: The feature describes the structured distribution of local features within the neighborhood of each spatio-temporal interest point in a video, which strengthens the discriminative power of the interest points; at the same time, by avoiding a global description of the action, it adapts flexibly to intra-class variation. Mutual information is used to measure the correlation between the mid-level spatio-temporal features and the action classes, and a video is labeled with the action class that has the maximum mutual information with it. Results: Experiments show that the proposed mid-level spatio-temporal feature outperforms methods based on local spatio-temporal features, as well as other approaches, in recognition accuracy, reaching 96.3% on the KTH dataset and 98.0% on the Activities of Daily Living (ADL) dataset. Conclusion: By exploiting the spatio-temporal distribution of local features, the proposed mid-level feature significantly strengthens action discriminability and can effectively recognize a variety of complex human actions.
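The classification rule described above — label a video with the class that maximizes mutual information with its features — can be sketched in miniature. This is a hedged illustration only: it assumes the mid-level features have already been quantized into a vocabulary of codewords and scores a test video by accumulated pointwise mutual information (PMI) between codewords and classes; the paper's actual feature construction around spatio-temporal interest points is not reproduced, and the function and parameter names are hypothetical.

```python
import numpy as np

def train_pmi(histograms, labels, n_classes, alpha=1.0):
    """Estimate a PMI(feature, class) table from training videos.

    histograms: (N, V) array of codeword counts, one row per video.
    labels:     (N,) array of class indices in [0, n_classes).
    alpha:      Laplace smoothing constant to avoid log(0).
    Returns a (V, n_classes) table of PMI values.
    """
    V = histograms.shape[1]
    joint = np.full((V, n_classes), alpha)      # smoothed joint counts
    for h, c in zip(histograms, labels):
        joint[:, c] += h
    p_fc = joint / joint.sum()                  # joint P(feature, class)
    p_f = p_fc.sum(axis=1, keepdims=True)       # marginal P(feature)
    p_c = p_fc.sum(axis=0, keepdims=True)       # marginal P(class)
    return np.log(p_fc / (p_f * p_c))           # PMI(f, c)

def classify(histogram, pmi):
    """Label a test video by the class with maximum accumulated PMI."""
    return int(np.argmax(histogram @ pmi))
```

In this toy form, a video whose codeword histogram concentrates on codewords strongly associated (positive PMI) with one class is assigned to that class; smoothing keeps rare codewords from dominating the score.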

References

[1]  Yuan F, Xia G S, Sahbi H, et al. Mid-level features and spatio-temporal context for activity recognition [J]. Pattern Recognition, 2012, 45(12): 4182-4191.
[2]  Zhang Y J. Understanding spatial-temporal behaviors [J]. Journal of Image and Graphics, 2013, 18(2): 141-151. [章毓晋. 时空行为理解[J]. 中国图象图形学报, 2013, 18(2): 141-151.][DOI:10.11834/jig.20130203.]
[3]  Gu J X, Ding X Q, Wang S J. A survey of activity analysis algorithms [J]. Journal of Image and Graphics, 2009, 14(3):377-387. [谷军霞,丁晓青,王生进. 行为分析算法综述[J]. 中国图象图形学报, 2009, 14(3): 377-387.][DOI:10.11834/jig.20090301.]
[4]  Li Y, Wang S, Ding X. Eye/eyes tracking based on a unified deformable template and particle filtering [J]. Pattern Recognition Letters, 2010, 31(11): 1377-1387.
[5]  Gorelick L, Blank M, Shechtman E, et al. Actions as space-time shapes [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(12): 2247-2253.
[6]  Derpanis K G, Sizintsev M, Cannons K, et al. Efficient action spotting based on a spacetime oriented structure representation [C]//Proceedings of CVPR 2010. San Francisco, CA: IEEE Press, 2010: 1990-1997.
[7]  Laptev I. On space-time interest points [J]. International Journal of Computer Vision, 2005, 64(2-3): 107-123.
[8]  Dollár P, Rabaud V, Cottrell G, et al. Behavior recognition via sparse spatio-temporal features [C]//Proceedings of VS-PETS 2005. Beijing, China: IEEE Press, 2005: 65-72.
[9]  Schuldt C, Laptev I, Caputo B. Recognizing human actions: a local SVM approach [C]//Proceedings of ICPR 2004. Singapore: IEEE Press, 2004, 3: 32-36.
[10]  Wright J, Yang A Y, Ganesh A, et al. Robust face recognition via sparse representation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(2): 210-227.
[11]  Duda R O, Hart P E, Stork D G. Pattern Classification [M]. 2nd ed. New York: Wiley-Interscience, 2001: 398.
[12]  Messing R, Pal C, Kautz H. Activity recognition using the velocity histories of tracked keypoints [C]//Proceedings of ICCV 2009. Kyoto, Japan: IEEE Press, 2009: 104-111.
[13]  Rapantzikos K, Avrithis Y, Kollias S. Dense saliency based spatiotemporal feature points for action recognition [C]//Proceedings of CVPR 2009. Miami, USA: IEEE Press, 2009: 1454-1461.
[14]  Wang J, Chen Z, Wu Y. Action recognition with multiscale spatio-temporal contexts [C]//Proceedings of CVPR 2011. Providence, USA: IEEE Press, 2011: 3185-3192.
[15]  Wang T, Wang S, Ding X. Detecting human action as the spatio-temporal tube of maximum mutual information [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2014, 24(2): 277-290.
[16]  Sadanand S, Corso J J. Action bank: a high-level representation of activity in video [C]//Proceedings of CVPR 2012. Providence, USA: IEEE Press, 2012: 1234-1241.
[17]  Matikainen P, Hebert M, Sukthankar R. Representing pairwise spatial and temporal relations for action recognition [C]//Proceedings of ECCV 2010. Berlin: Springer Press, 2010: 508-521.
[18]  Raptis M, Soatto S. Tracklet descriptors for action modeling and video analysis [C]//Proceedings of ECCV 2010. Berlin: Springer Press, 2010: 577-590.
