[1] Liu Gang. Research on multiresolution-based multisensor image fusion[D]. Shanghai: School of Microelectronics, Shanghai Jiaotong University, 2005. (in Chinese)
[2] Tang Lei. Research on multiresolution image fusion method and technology[D]. Nanjing: Institute of Command and Automation, PLA University of Science and Technology, 2008. (in Chinese)
[3] Garcia J A, Sanchez R R, Valdivia J F. Axiomatic approach to computational attention[J]. Pattern Recognition, 2010, 43(4): 1618-1630.
[4] Lai Jie-ling, Yi Yang. Key frame extraction based on visual attention model[J]. Journal of Visual Communication and Image Representation, 2012, 23(1): 114-125.
[5] Hu Yi-qun, Xie Xing, Ma Wei-ying, et al. Salient object extraction combining visual attention and edge information[R]. Technical Report, 2004.
[6] Fang Yu-ming, Chen Zhen-zhong, Lin Wei-si, et al. Saliency detection in the compressed domain for adaptive image retargeting[J]. IEEE Transactions on Image Processing, 2012, 21(9): 3888-3901.
[7] Engelke U, Nguyen V X, Zepernick H J. Regional attention to structural degradations for perceptual image quality metric design[C]∥Proc IEEE International Conference on Acoustics, Speech, and Signal Processing, 2008.
[8] Gopalakrishnan V, Hu Y Q, Rajan D. Random walks on graphs for salient object detection in images[J]. IEEE Transactions on Image Processing, 2010, 19(12): 3232-3242.
[9] Ye Chuan-qi, Wang Bao-shu, Miao Qi-guang. Fusion algorithm of infrared and visible images based on region feature[J]. Acta Photonica Sinica, 2009, 38(6): 1498-1503. (in Chinese)
[10] Chai Yi, Li Hua-feng, Li Zhao-fei. Multifocus image fusion scheme using focused region detection and multi-resolution[J]. Optics Communications, 2011, 284(19): 4376-4389.
[11] Wang Xiao-wen, Zhao Zong-gui, Tang Lei. A novel quality metric for infrared and visible image fusion[J]. Systems Engineering and Electronics, 2012, 34(5): 27-31. (in Chinese)
[12] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.
[13] da Cunha A L, Zhou J P, Do M N. The nonsubsampled contourlet transform: theory, design, and applications[J]. IEEE Transactions on Image Processing, 2006, 15(10): 3089-3101.
[14] Wong A K C, Sahoo P K. A gray-level threshold selection method based on maximum entropy principle[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1989, 19(4): 866-871.
[15] Qu Xiao-bo, Yan Jing-wen, Zhu Zi-qian, et al. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain[J]. Acta Automatica Sinica, 2008, 34(12): 1508-1514.
[16] Qu Gui-hong, Zhang Da-li, Yan Ping-fan. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313-315.
[17] Xydeas C S, Petrovic V. Objective image fusion performance measure[J]. Electronics Letters, 2000, 36(4): 308-309.
[18] Zheng Y F, Essock E A, Hansen B C, et al. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms[J]. Information Fusion, 2007, 8(2): 177-192.
[19] Piella G. New quality measures for image fusion[C]∥Proc IEEE 7th International Conference on Information Fusion, 2004.
[20] Yang Cui, Zhang Jian-qi, Wang Xiao-rui, et al. A novel similarity based quality metric for image fusion[J]. Information Fusion, 2008, 9(2): 156-160.
[21] Sugihara K. Robust gift wrapping for the three-dimensional convex hull[J]. Journal of Computer and System Sciences, 1994, 49: 391-407.