Computer Engineering & Science ›› 2022, Vol. 44 ›› Issue (06): 1072-1082.
LI Yi, LI Yang, MIAO Zhuang, WANG Jia-bao, ZHANG Rui

Received: 2021-09-10
Revised: 2021-10-25
Accepted: 2022-06-25
Online: 2022-06-25
Published: 2022-06-17
Abstract: Infrared and visible image fusion is an important area of machine vision with wide applications in daily life. Although many fusion algorithms have been proposed in recent years, the field still lacks an algorithmic framework and a fusion benchmark for measuring the performance of multiple fusion algorithms. After briefly reviewing recent progress in infrared and visible image fusion, this paper proposes an extended VIFB benchmark for infrared and visible image fusion, consisting of 56 image pairs, 32 fusion algorithms, and 16 evaluation metrics. Extensive experiments were conducted on this benchmark to evaluate the performance of the selected fusion algorithms. Based on the qualitative and quantitative results, fusion algorithms with good performance are identified, and future prospects for infrared and visible image fusion are discussed.
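Many of the evaluation metrics used in fusion benchmarks of this kind are information-theoretic or structural measures that are straightforward to compute. The following Python snippet is an illustrative sketch (not the benchmark's actual code): it computes two commonly used metrics, entropy (EN) and mutual information (MI), for a toy fused image produced by a simple pixel-wise average fusion baseline.

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy (EN) of the gray-level histogram: higher values
    # suggest the fused image carries more information.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(a, b, bins=256):
    # Mutual information (MI) between two images, estimated from
    # their joint gray-level histogram.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal of a
    py = pxy.sum(axis=0)   # marginal of b
    nz = pxy > 0
    outer = px[:, None] * py[None, :]
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / outer[nz])))

def average_fusion(ir, vis):
    # Simplest possible baseline: pixel-wise mean of the infrared
    # and visible images (real fusion algorithms are far richer).
    return ((ir.astype(np.float64) + vis.astype(np.float64)) / 2).astype(np.uint8)

# Toy random "infrared" and "visible" images stand in for a real pair.
rng = np.random.default_rng(0)
ir = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
vis = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
fused = average_fusion(ir, vis)

en = entropy(fused)
# The MI fusion metric is usually reported as MI(F, A) + MI(F, B).
mi = mutual_information(fused, ir) + mutual_information(fused, vis)
print(f"EN = {en:.3f}, MI = {mi:.3f}")
```

The histogram-based estimators above are one common implementation choice; benchmark toolkits may use different bin counts or normalized variants of these metrics.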
LI Yi, LI Yang, MIAO Zhuang, WANG Jia-bao, ZHANG Rui. An extended VIFB for infrared and visible image fusion[J]. Computer Engineering & Science, 2022, 44(06): 1072-1082.
[1] James A P, Dasarathy B. Medical image fusion: A survey of the state of the art[J]. Information Fusion, 2014, 19: 4-19.
[2] Xia K J, Yin H S, Wang J Q. A novel improved deep convolutional neural network model for medical image fusion[J]. Cluster Computing, 2019, 22: 1515-1527.
[3] Wang Z B, Ma Y D, Gu J. Multi-focus image fusion using PCNN[J]. Pattern Recognition, 2010, 43(6): 2003-2016.
[4] Liu Y, Chen X, Peng H, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191-207.
[5] Ghassemian H. A review of remote sensing image fusion methods[J]. Information Fusion, 2016, 32: 75-89.
[6] Ma K D, Zeng K, Wang Z. Perceptual quality assessment for multi-exposure image fusion[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3345-3356.
[7] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]∥Proc of International Conference on Computer Vision, 2017: 4724-4732.
[8] Ma J Y, Chen C, Li C, et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 2016, 31: 100-109.
[9] Bavirisetti D P, Xiao G, Zhao J H, et al. A new image and video fusion method based on cross bilateral filter[C]∥Proc of International Conference on Information Fusion, 2018: 1-8.
[10] Liu Y, Chen X, Cheng J, et al. Infrared and visible image fusion with convolutional neural networks[J]. International Journal of Wavelets, Multiresolution and Information Processing, 2018, 16(3): 1850018.
[11] Li H, Wu X J, Kittler J. Infrared and visible image fusion using a deep learning framework[C]∥Proc of International Conference on Pattern Recognition, 2018: 2705-2710.
[12] Zhang X C, Ye P, Xiao G. VIFB: A visible and infrared image fusion benchmark[C]∥Proc of the 2020 IEEE International Conference on Computer Vision and Pattern Recognition, 2020: 468-478.
[13] Liu Z, Blasch E, Xue Z, et al. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(1): 94-109.
[14] Ma J Y, Ma Y, Li C. Infrared and visible image fusion methods and applications: A survey[J]. Information Fusion, 2019, 45: 153-178.
[15] Jin X, Jiang Q, Yao S W, et al. A survey of infrared and visual image fusion methods[J]. Infrared Physics and Technology, 2017, 85: 478-501.
[16] Li S T, Kang X D, Fang L Y, et al. Pixel-level image fusion: A survey of the state of the art[J]. Information Fusion, 2017, 33: 100-112.
[17] Liu Y, Chen X, Wang Z F, et al. Deep learning for pixel-level image fusion: Recent advances and future prospects[J]. Information Fusion, 2018, 42: 158-173.
[18] Yan X, Gilani S Z, Qin H L, et al. Unsupervised deep multi-focus image fusion[EB/OL]. [2018-06-19]. https://arxiv.org/pdf/1806.07272.
[19] Hermessi H, Mourali O, Zagrouba E. Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain[J]. Neural Computing and Applications, 2018, 30: 2029-2045.
[20] Ma J Y, Yu W, Liang P W, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.
[21] Li H, Wu X J. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.
[22] Li H, Wu X J, Durrani T S. Infrared and visible image fusion with ResNet and zero-phase component analysis[J]. Infrared Physics and Technology, 2019, 102: 103039.
[23] Wang Ji-xiao, Li Yang, Wang Jia-bao, et al. Light-weight image fusion method based on SqueezeNet[J]. Journal of Computer Applications, 2020, 40(3): 837-841. (in Chinese)
[24] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2012, 60(6): 84-90.
[25] Huang G, Liu Z, Weinberger K Q, et al. Densely connected convolutional networks[C]∥Proc of the 2017 IEEE International Conference on Computer Vision and Pattern Recognition, 2017: 2261-2269.
[26] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]∥Proc of the 2015 IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2015: 1-9.
[27] Howard A G, Zhu M, Chen B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[EB/OL]. [2017-04-17]. https://arxiv.org/pdf/1704.04861.
[28] Zhang X Y, Zhou X Y, Lin M X, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices[C]∥Proc of the 2018 IEEE International Conference on Computer Vision and Pattern Recognition, 2018: 6848-6856.
[29] Iandola F N, Moskewicz M, Ashraf K, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size[EB/OL]. [2016-12-04]. https://arxiv.org/abs/1602.07360.
[30] Chollet F. Xception: Deep learning with depthwise separable convolutions[C]∥Proc of the 2017 IEEE International Conference on Computer Vision and Pattern Recognition, 2017: 1800-1807.
[31] Wu Y, Lim J, Yang M H. Online object tracking: A benchmark[C]∥Proc of the 2013 IEEE International Conference on Computer Vision and Pattern Recognition, 2013: 2411-2418.
[32] Wu Y, Lim J, Yang M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834-1848.
[33] Kristan M, Matas J, Leonardis A, et al. A novel performance evaluation methodology for single-target trackers[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(11): 2137-2155.
[34] Davis J W, Sharma V. Background-subtraction using contour-based fusion of thermal and visible imagery[J]. Computer Vision and Image Understanding, 2007, 106(2-3): 162-182.
[35] Ellmauthaler A, Pagliari C, Silva E, et al. A visible-light and infrared video database for performance evaluation of video/image fusion methods[J]. Multidimensional Systems and Signal Processing, 2019, 30(1): 119-143.
[36] Li C L, Liang X Y, Lu Y J, et al. RGB-T object tracking: Benchmark and baseline[J]. Pattern Recognition, 2019, 96: 106977.
[37] Ó Conaire C, O'Connor N E, Cooke E, et al. Comparison of fusion methods for thermo-visual surveillance tracking[C]∥Proc of International Conference on Information Fusion, 2006: 1-7.
[38] Bavirisetti D P, Dhuli R. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform[J]. IEEE Sensors Journal, 2016, 16(1): 203-209.
[39] Kumar B. Image fusion based on pixel significance using cross bilateral filter[J]. Signal, Image and Video Processing, 2015, 9: 1193-1204.
[40] Bavirisetti D P, Xiao G, Liu G. Multi-sensor image fusion based on fourth order partial differential equations[C]∥Proc of the 20th International Conference on Information Fusion, 2017: 1-9.
[41] Zhou Z, Dong M J, Xie X Z, et al. Fusion of infrared and visible images for night-vision context enhancement[J]. Applied Optics, 2016, 55(23): 6480-6490.
[42] Li S T, Kang X D, Hu J W. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 2013, 22(7): 2864-2875.
[43] Zhou Z Q, Wang B, Li S, et al. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters[J]. Information Fusion, 2016, 30: 15-26.
[44] Zhang Y, Zhang L J, Bai X, et al. Infrared and visual image fusion through infrared feature extraction and visual information preservation[J]. Infrared Physics and Technology, 2017, 83: 227-237.
[45] Li H, Wu X J. Infrared and visible image fusion using latent low-rank representation[EB/OL]. [2019-08-09]. https://arxiv.org/pdf/1804.08992.
[46] Bavirisetti D P, Xiao G, Zhao J H, et al. Multi-scale guided image and video fusion: A fast and efficient approach[J]. Circuits, Systems and Signal Processing, 2019, 38(1): 5576-5605.
[47] Liu Y, Liu S P, Wang Z F. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 2015, 24: 147-164.
[48] Naidu V. Image fusion technique using multi-resolution singular value decomposition[J]. Defence Science Journal, 2011, 61(5): 479-484.
[49] Bavirisetti D P, Dhuli R. Two-scale image fusion of visible and infrared images using saliency detection[J]. Infrared Physics and Technology, 2016, 76: 52-64.
[50] Ma J L, Zhou Z, Wang B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics and Technology, 2017, 82: 8-17.
[51] Li H, Wu X J, Kittler J. MDLatLRR: A novel decomposition method for infrared and visible image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4733-4746.
[52] Li S T, Yang B, Hu J W. Performance comparison of different multi-resolution transforms for image fusion[J]. Information Fusion, 2011, 12(2): 74-84.
[53] Cunha A L, Zhou J, Do M. The nonsubsampled contourlet transform: Theory, design, and applications[J]. IEEE Transactions on Image Processing, 2006, 15(10): 3089-3101.
[54] Fang Y M, Zhu H W, Ma K D, et al. Perceptual evaluation for multi-exposure image fusion of dynamic scenes[J]. IEEE Transactions on Image Processing, 2020, 29: 1127-1138.
[55] Wang Z, Bovik A. Mean squared error: Love it or leave it? A new look at signal fidelity measures[J]. IEEE Signal Processing Magazine, 2009, 26(1): 98-117.
[56] Aslantas V, Bendes E. A new image quality metric for image fusion: The sum of the correlations of differences[J]. AEU-International Journal of Electronics and Communications, 2015, 69(12): 1890-1896.
[57] Adler J, Parmryd I. Quantifying colocalization by correlation: The Pearson correlation coefficient is superior to the Mander's overlap coefficient[J]. Cytometry, 2010, 77A(8): 733-742.
[58] Bulanon D, Burks T, Alchanatis V. Image fusion of visible and thermal images for fruit detection[J]. Biosystems Engineering, 2009, 103(1): 12-22.
[59] Qu G, Zhang D L, Yan P. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313-315.
[60] Jagalingam P, Hegde A V. A review of quality metrics for fused image[J]. Aquatic Procedia, 2015, 4: 133-142.
[61] Roberts J, Aardt J V, Ahmed F. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2008, 2(1): 023522.
[62] Wang Z, Bovik A, Sheikh H, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[63] Cui G M, Feng H, Xu Z H, et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition[J]. Optics Communications, 2015, 341: 199-209.
[64] Rao Y J. In-fibre Bragg grating sensors[J]. Measurement Science and Technology, 1997, 8(4): 355-375.
[65] Eskicioglu A, Fisher P S. Image quality measures and their performance[J]. IEEE Transactions on Communications, 1995, 43(12): 2959-2965.
[66] Xydeas C, Petrovic V S. Objective image fusion performance measure[J]. Military Technical Courier, 2000, 56(4): 181-193.
[67] Yin C, Blum R S. A new automated quality assessment algorithm for image fusion[J]. Image and Vision Computing, 2009, 27(10): 1421-1432.
[68] Hao C, Varshney P K. A human perception inspired quality metric for image fusion based on regional information[J]. Information Fusion, 2007, 8(2): 193-207.