An Object Fusion Recognition Algorithm Based on DSmT Reasoning (2018)
Abstract: To address the problems faced by current approaches to improving the classification performance of deep models, namely insufficient hardware performance, the difficulty of structural innovation, and limited training samples, an object fusion recognition algorithm based on DSmT (Dezert-Smarandache Theory) reasoning is proposed. For a target to be recognized, the idea of data fusion is applied to combine the recognition information provided by different deep learning models. Existing pretrained deep learning models are fine-tuned for the specific classification task. To overcome the difficulty of constructing basic belief assignments (BBAs) in DSmT, the classification outputs of the deep networks for an image are used to assign the BBA of each evidence source. At the decision level, the DSmT combination rule fuses the belief assignments so that objects can be recognized accurately. Without changing the network structures and on the same dataset, the proposed method is compared with single-model recognition and with simple averaging of the model outputs. The experimental results show that the method effectively improves the recognition rate of object images.
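As an illustration of the decision-level fusion step described above, the following is a minimal sketch that treats the softmax outputs of two classifiers as singleton basic belief assignments and fuses them with the standard PCR5 proportional conflict redistribution rule used in DSmT. The class count, the model outputs, and the function name `pcr5_fuse` are assumptions for illustration; the paper does not state which combination rule variant it applies.

```python
import numpy as np

def pcr5_fuse(m1, m2):
    """Fuse two BBAs defined on singleton hypotheses with the PCR5 rule.

    m1, m2: 1-D arrays of belief masses (e.g. softmax outputs), one entry
    per class, each summing to 1. Conflicting mass m1[i]*m2[j] (i != j) is
    redistributed back to hypotheses i and j in proportion to the masses
    that generated the conflict.
    """
    m1, m2 = np.asarray(m1, dtype=float), np.asarray(m2, dtype=float)
    fused = m1 * m2  # conjunctive consensus on each singleton hypothesis
    for i in range(len(m1)):
        for j in range(len(m2)):
            if i == j:
                continue
            # proportional conflict redistribution between hypotheses i and j
            if m1[i] + m2[j] > 0:
                fused[i] += m1[i] ** 2 * m2[j] / (m1[i] + m2[j])
            if m2[j] + m1[i] > 0:
                fused[j] += m2[j] ** 2 * m1[i] / (m2[j] + m1[i])
    return fused

# Hypothetical softmax outputs of two fine-tuned networks for one image
m_net_a = np.array([0.70, 0.20, 0.10])  # evidence source 1
m_net_b = np.array([0.55, 0.35, 0.10])  # evidence source 2

m_fused = pcr5_fuse(m_net_a, m_net_b)
print(m_fused, "-> predicted class", int(np.argmax(m_fused)))
```

In this sketch the fused masses still sum to 1, and the final decision is taken as the class with the largest fused mass, which corresponds to the decision-level fusion step in the abstract.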