[1] Yuan F, Xia G S, Sahbi H, et al. Mid-level features and spatio-temporal context for activity recognition [J]. Pattern Recognition, 2012, 45(12): 4182-4191.
[2] Zhang Y J. Understanding spatial-temporal behaviors [J]. Journal of Image and Graphics, 2013, 18(2): 141-151. [章毓晋. 时空行为理解[J]. 中国图象图形学报, 2013, 18(2): 141-151.] [DOI:10.11834/jig.20130203]
[3] Gu J X, Ding X Q, Wang S J. A survey of activity analysis algorithms [J]. Journal of Image and Graphics, 2009, 14(3): 377-387. [谷军霞, 丁晓青, 王生进. 行为分析算法综述[J]. 中国图象图形学报, 2009, 14(3): 377-387.] [DOI:10.11834/jig.20090301]
[4] Li Y, Wang S, Ding X. Eye/eyes tracking based on a unified deformable template and particle filtering [J]. Pattern Recognition Letters, 2010, 31(11): 1377-1387.
[5] Gorelick L, Blank M, Shechtman E, et al. Actions as space-time shapes [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(12): 2247-2253.
[6] Derpanis K G, Sizintsev M, Cannons K, et al. Efficient action spotting based on a spacetime oriented structure representation [C]//Proceedings of CVPR 2010. San Francisco, USA: IEEE Press, 2010: 1990-1997.
[7] Laptev I. On space-time interest points [J]. International Journal of Computer Vision, 2005, 64(2-3): 107-123.
[8] Dollár P, Rabaud V, Cottrell G, et al. Behavior recognition via sparse spatio-temporal features [C]//Proceedings of VS-PETS 2005. Beijing, China: IEEE Press, 2005: 65-72.
[9] Schuldt C, Laptev I, Caputo B. Recognizing human actions: a local SVM approach [C]//Proceedings of ICPR 2004. Cambridge, UK: IEEE Press, 2004, 3: 32-36.
[10] Wright J, Yang A Y, Ganesh A, et al. Robust face recognition via sparse representation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(2): 210-227.
[11] Duda R O, Hart P E, Stork D G. Pattern Classification [M]. 2nd ed. New York: Wiley-Interscience, 2001: 398.
[12] Messing R, Pal C, Kautz H. Activity recognition using the velocity histories of tracked keypoints [C]//Proceedings of ICCV 2009. Kyoto, Japan: IEEE Press, 2009: 104-111.
[13] Rapantzikos K, Avrithis Y, Kollias S. Dense saliency based spatiotemporal feature points for action recognition [C]//Proceedings of CVPR 2009. Miami, USA: IEEE Press, 2009: 1454-1461.
[14] Wang J, Chen Z, Wu Y. Action recognition with multiscale spatio-temporal contexts [C]//Proceedings of CVPR 2011. Providence, USA: IEEE Press, 2011: 3185-3192.
[15] Wang T, Wang S, Ding X. Detecting human action as the spatio-temporal tube of maximum mutual information [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2014, 24(2): 277-290.
[16] Sadanand S, Corso J J. Action bank: a high-level representation of activity in video [C]//Proceedings of CVPR 2012. Providence, USA: IEEE Press, 2012: 1234-1241.
[17] Matikainen P, Hebert M, Sukthankar R. Representing pairwise spatial and temporal relations for action recognition [C]//Proceedings of ECCV 2010. Berlin: Springer Press, 2010: 508-521.
[18] Raptis M, Soatto S. Tracklet descriptors for action modeling and video analysis [C]//Proceedings of ECCV 2010. Berlin: Springer Press, 2010: 577-590.