
Infrared and Visible Image Fusion Method Based on Two-Channel Non-Separable Wavelet and Deep Learning

DOI: 10.12677/JISP.2021.104018, PP. 166-175

Keywords: Infrared Image, Visible Image, Image Fusion, Non-Separable Wavelet, Deep Learning


Abstract:

Infrared and visible image fusion plays an important role in weapon detection and target recognition, and the key to fusion is to extract salient features from the source images and combine them into a fused image by an appropriate method. This paper therefore proposes an infrared and visible image fusion method based on a non-separable wavelet and deep learning. First, non-separable wavelet filter banks are constructed, and each source image is decomposed into a high-frequency sub-image and a low-frequency sub-image by the two-channel non-separable wavelet filters. Then, a deep learning network extracts deep features from the high-frequency sub-images, a multi-layer fusion strategy is used to obtain weight maps, and the fused high-frequency sub-image is obtained from the weight maps and the high-frequency details. Finally, the fused low-frequency and high-frequency sub-images are reconstructed to yield the final fused image. Experimental results show that, compared with other related methods, the proposed method achieves better results in both subjective visual quality and objective evaluation metrics.
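The pipeline described above (two-channel decomposition of each source image, deep-feature weighting of the high-frequency detail layer, reconstruction) can be illustrated with a minimal Python sketch. The snippet below is an assumption-laden outline, not the paper's implementation: it uses placeholder 2-D filters in place of the paper's non-separable wavelet filter bank, a pretrained VGG19 (recent torchvision) as a stand-in deep feature extractor, a channel-wise L1 activity map as the weight, and a simple average for the low-frequency fusion.

    # Minimal sketch (assumptions): placeholder 2-D filters and a pretrained VGG19
    # as the feature extractor; the paper's actual non-separable wavelet filter
    # bank, network, and multi-layer fusion strategy are not reproduced here.
    import numpy as np
    import torch
    import torch.nn.functional as F
    import torchvision.models as models
    from scipy.ndimage import convolve

    # Illustrative two-channel decomposition (not the paper's filter coefficients).
    LOW = np.ones((2, 2)) / 4.0                      # placeholder low-pass filter

    def decompose(img):
        """Split an image into low- and high-frequency sub-images."""
        low = convolve(img, LOW, mode="reflect")
        high = img - low                             # detail (high-frequency) layer
        return low, high

    def reconstruct(low, high):
        """Invert the simple additive decomposition above."""
        return low + high

    # Assumed deep feature extractor: first VGG19 convolution blocks.
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:9].eval()

    def weight_map(high):
        """L1 norm of deep feature maps, upsampled to image size, as a weight map."""
        x = torch.from_numpy(high).float()[None, None].repeat(1, 3, 1, 1)
        with torch.no_grad():
            feat = vgg(x)
        w = feat.abs().sum(dim=1, keepdim=True)      # channel-wise L1 activity
        w = F.interpolate(w, size=high.shape, mode="bilinear", align_corners=False)
        return w[0, 0].numpy()

    def fuse(ir, vis):
        """Fuse one registered infrared/visible pair (float arrays in [0, 1])."""
        low_ir, high_ir = decompose(ir)
        low_vis, high_vis = decompose(vis)
        w_ir, w_vis = weight_map(high_ir), weight_map(high_vis)
        w_sum = w_ir + w_vis + 1e-12
        high_f = (w_ir * high_ir + w_vis * high_vis) / w_sum   # weighted detail fusion
        low_f = 0.5 * (low_ir + low_vis)                       # simple base-layer average
        return reconstruct(low_f, high_f)

    if __name__ == "__main__":
        ir = np.random.rand(128, 128)    # stand-ins for registered source images
        vis = np.random.rand(128, 128)
        print(fuse(ir, vis).shape)

In this sketch the weight maps play the role of the abstract's "weight mapping" obtained from deep features, and the additive low/high split stands in for the two-channel non-separable wavelet analysis/synthesis pair; swapping in the actual filter bank and fusion rule would follow the same structure.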

