2018

A Geometric Segmentation Method for Traffic Scenes Using Super-Pixel Label Matching

DOI: 10.7652/xjtuxb201808012

Keywords: traffic scene, super-pixel, geometric segmentation, fully connected conditional random field


Abstract:

A geometric segmentation method for traffic scenes based on super-pixel label matching is proposed to address the computational complexity and long model-training time of pixel-by-pixel labeling methods. The proposed method requires no model training: a set of images similar to the traffic scene image to be segmented is first retrieved according to global features. Super-pixel segmentation and super-pixel block feature extraction are then performed on the image to be segmented, and likelihood ratios are computed using the naive Bayes principle; super-pixel block labels are matched against the similar-image set according to these likelihood ratios to obtain an initial segmentation. Finally, unary potentials are computed from the initial segmentation, and a fully connected conditional random field model is applied to refine the result. Experimental results show that, compared with the traditional pixel-by-pixel labeling method, the proposed method improves segmentation accuracy and average recall by 4% and 3% respectively, and effectively achieves geometric segmentation of traffic scenes.
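The label-matching step described above can be sketched in miniature. The toy below is only an illustration of the naive-Bayes likelihood-ratio idea, not the paper's actual feature set: superpixel features are assumed to be quantized into discrete bins, the retrieval set of labeled superpixels and the geometric classes (ground/vertical/sky) are invented for the example, and Laplace smoothing is added so unseen bins do not zero out the ratio.

```python
import math
from collections import defaultdict

# Hypothetical toy data: each retrieved-set superpixel is a (quantized feature
# tuple, geometric label) pair. Real features would be color/texture/position
# descriptors of each superpixel block.
retrieval_set = [
    ((0, 1), "ground"), ((0, 1), "ground"), ((0, 2), "ground"),
    ((3, 1), "sky"),    ((3, 0), "sky"),
    ((1, 2), "vertical"), ((2, 2), "vertical"), ((1, 1), "vertical"),
]

def train_counts(samples):
    """Count feature-bin occurrences per class over the similar-image set."""
    per_class = defaultdict(lambda: defaultdict(int))
    class_totals = defaultdict(int)
    for feats, label in samples:
        class_totals[label] += 1
        for i, f in enumerate(feats):
            per_class[label][(i, f)] += 1
    return per_class, class_totals

def log_likelihood_ratio(feats, label, per_class, class_totals):
    """Naive-Bayes log ratio log P(f|c) - log P(f|not c), features independent."""
    n_c = class_totals[label]
    n_not = sum(v for k, v in class_totals.items() if k != label)
    score = 0.0
    for i, f in enumerate(feats):
        in_c = per_class[label][(i, f)] + 1            # Laplace smoothing
        in_not = sum(per_class[c][(i, f)] for c in class_totals if c != label) + 1
        score += math.log(in_c / (n_c + 2)) - math.log(in_not / (n_not + 2))
    return score

def match_label(feats, per_class, class_totals):
    """Initial-segmentation label for one superpixel: argmax likelihood ratio."""
    return max(class_totals,
               key=lambda c: log_likelihood_ratio(feats, c, per_class, class_totals))

per_class, class_totals = train_counts(retrieval_set)
print(match_label((0, 1), per_class, class_totals))  # → ground
print(match_label((3, 0), per_class, class_totals))  # → sky
```

Running `match_label` over every superpixel of the query image yields the initial segmentation; the per-class likelihood ratios can also be kept as confidences from which the unary potentials of the refinement stage are derived.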
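The refinement stage can likewise be sketched. The fragment below is a simplified, pure-Python mean-field iteration over superpixels in the spirit of the fully connected CRF of Krähenbühl and Koltun: unary potentials are taken as negative log confidences from the initial matching, and a Potts pairwise term weighted by a Gaussian kernel on superpixel centroids couples all pairs. The confidences, centroids, and kernel/compatibility parameters are invented for illustration and are not the paper's values.

```python
import math

LABELS = ["ground", "vertical", "sky"]

# Per-superpixel initial confidences P(label) and image-plane centroids
# (toy values; superpixel 1 is ambiguous but lies next to superpixel 0).
confidences = [
    {"ground": 0.7, "vertical": 0.2, "sky": 0.1},
    {"ground": 0.4, "vertical": 0.4, "sky": 0.2},
    {"ground": 0.1, "vertical": 0.2, "sky": 0.7},
]
centroids = [(10.0, 90.0), (12.0, 85.0), (50.0, 10.0)]

def mean_field(confidences, centroids, sigma=20.0, w=2.0, iters=10):
    """Minimize unary (-log confidence) plus Gaussian-weighted Potts pairwise energy."""
    unary = [{l: -math.log(c[l]) for l in LABELS} for c in confidences]
    q = [dict(c) for c in confidences]               # initialize Q with confidences
    for _ in range(iters):
        new_q = []
        for i in range(len(q)):
            energies = {}
            for l in LABELS:
                pair = 0.0
                for j in range(len(q)):
                    if j == i:
                        continue
                    d2 = sum((a - b) ** 2 for a, b in zip(centroids[i], centroids[j]))
                    k = math.exp(-d2 / (2 * sigma ** 2))   # Gaussian kernel on centroids
                    pair += w * k * (1.0 - q[j][l])        # Potts: penalize disagreement
                energies[l] = unary[i][l] + pair
            z = sum(math.exp(-e) for e in energies.values())
            new_q.append({l: math.exp(-energies[l]) / z for l in LABELS})
        q = new_q
    return [max(LABELS, key=lambda qi_get: qi[qi_get]) for qi in q]

print(mean_field(confidences, centroids))  # → ['ground', 'ground', 'sky']
```

The ambiguous middle superpixel is pulled toward the label of its strongly confident neighbor, which is exactly the smoothing effect the CRF refinement is meant to provide; a full implementation would add a bilateral (color-dependent) kernel and operate at pixel resolution.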

