2025, Vol. 55(03): 463-474
Research on Infrared and Visible Image Fusion Technology Based on Low-Rank Sparse Decomposition
Foundation: National Natural Science Foundation of China (62071255, 61971241)

Abstract:

To address the problem that current infrared and visible image fusion algorithms fail to preserve global structure and detail information because of the differing information features of the source images, an infrared and visible image fusion method based on Low-rank Sparse Decomposition (LRSD) is proposed. In this method, dictionaries are constructed by three dictionary-learning approaches: the Method of Optimal Directions (MOD), K-Singular Value Decomposition (K-SVD), and a background dictionary; Low-rank Representation (LRR) is then used to decompose the source images into a low-rank part and a sparse detail part. The low-rank part preserves the global structure of the source image, while the sparse part highlights its local structure and detail information. During fusion, a weighted-average strategy and an l2-l1 norm constraint strategy are applied to the low-rank and sparse parts respectively, preserving global contrast and pixel intensity. Experimental results show that, compared with classical fusion algorithms, the proposed method yields significant improvements in visual quality and quantitative evaluation metrics. With MOD and K-SVD dictionary training, the fused images improve Mutual Information (MI), Structural Similarity Index (SSIM), Visual Information Fidelity (VIF), Standard Deviation (SD), and Peak Signal-to-Noise Ratio (PSNR) by approximately 6%, 27%, 9.6%, 2.4%, and 3.4%, respectively. With background-dictionary training, the fused images improve MI, SSIM, SD, Mean Squared Error (MSE), and PSNR by about 23%, 29%, 1.2%, 33%, and 4.5%, respectively.
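The pipeline the abstract describes — split each source image into a low-rank part (global structure) and a sparse part (local detail), fuse the low-rank parts by weighted averaging and the sparse parts by an activity-based rule — can be sketched as follows. This is a minimal illustration only: it substitutes a simple alternating singular-value/soft-thresholding split for the paper's dictionary-based LRR decomposition, and a pixel-wise maximum-absolute-value rule for the exact l2-l1 norm constraint strategy. All function names and parameter values (`tau`, `lam`, `w`) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau,
    which drives the result toward a low-rank matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, lam):
    """Element-wise soft thresholding (l1 shrinkage) -> sparse residual."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def decompose(img, tau=1.0, lam=0.05, n_iter=30):
    """Split an image into a low-rank part L (global structure) and a
    sparse part S (local detail) by alternating shrinkage steps —
    a simplified stand-in for the paper's LRR decomposition."""
    L = np.zeros_like(img)
    S = np.zeros_like(img)
    for _ in range(n_iter):
        L = svt(img - S, tau)
        S = soft(img - L, lam)
    return L, S

def fuse(ir, vis, w=0.5):
    """Fuse two registered source images in [0, 1]:
    weighted average of the low-rank parts, and a pixel-wise
    max-absolute-value choice between the sparse parts (a crude
    proxy for an l2-l1 activity measure)."""
    L_ir, S_ir = decompose(ir)
    L_vis, S_vis = decompose(vis)
    L_f = w * L_ir + (1 - w) * L_vis
    S_f = np.where(np.abs(S_ir) >= np.abs(S_vis), S_ir, S_vis)
    return np.clip(L_f + S_f, 0.0, 1.0)
```

A real implementation would learn an over-complete dictionary (MOD, K-SVD, or a background dictionary) and solve the LRR program over that dictionary; the alternating-threshold loop above only conveys the low-rank/sparse split that both approaches share.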


Basic information:

CLC number: TP391.41; TN219

Citation:

[1] ZHANG Siyu, JIANG Xue, HOU Xiaoyun. Research on Infrared and Visible Image Fusion Technology Based on Low-Rank Sparse Decomposition[J]. Radio Engineering (无线电工程), 2025, 55(03): 463-474.


Release date: 2024-09-27

Publication date: 2024-09-27

Online publication date: 2024-09-27
