A Collection of Infrared and Visible Image Fusion Papers and Code
News
[2022-07-29] Our survey paper 《基于深度学习的图像融合方法综述》 (A review of deep-learning-based image fusion methods) has been officially accepted by 《中国图象图形学报》 (Journal of Image and Graphics)! [Paper download]
This post builds on the repost "Image Fusion Papers and Code Roundup (2): Infrared and Visible Image Fusion" and extends it into a summary of existing infrared and visible image fusion algorithms (papers and code), so that readers, and the author, can look up work in this field more easily. The field is large, and this post covers only part of it; given the author's limited expertise, the papers are listed rather than analyzed in depth.
Author's QQ: 2458707789. When sending a friend request, please note your name and affiliation.
The author has also written an introduction to image fusion frameworks driven by high-level vision tasks; see: SeAFusion: the first image fusion framework coupled with high-level vision tasks.
Other posts in this series by the author:
- Complete collection of image fusion papers and code
- Image fusion survey papers
- Evaluation metrics for infrared and visible image fusion
- Common image fusion datasets
- General image fusion frameworks: papers and code
- Deep-learning-based infrared and visible image fusion: papers and code
- More detailed infrared and visible image fusion code
- Deep-learning-based multi-exposure image fusion: papers and code
- Deep-learning-based multi-focus image fusion: papers and code
- Deep-learning-based pansharpening: papers and code
- Deep-learning-based medical image fusion: papers and code
- Color image fusion
- SeAFusion: the first image fusion framework coupled with high-level vision tasks
Reposted posts from the same series:
Image Fusion Papers and Code Roundup (1): Multi-Focus Image Fusion
Image Fusion Papers and Code Roundup (2): Infrared and Visible Image Fusion
Image Fusion Papers and Code Roundup (3): Fusion Algorithms Whose Titles Do Not Specify a Modality
Image fusion datasets and databases
【2022】
Article: Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network 【Deep Learning】【High-Level-Vision-Task Driven】
Cite as: Tang, Linfeng, Jiteng Yuan, and Jiayi Ma. "Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network." Information Fusion 82 (2022): 28-42.
Paper: Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network
Code: /Linfeng-Tang/SeAFusion
Commentary: SeAFusion: the first image fusion framework coupled with high-level vision tasks
Article: SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer 【Transformer】【General Image Fusion Framework】
Cite as: Jiayi Ma, Linfeng Tang, Fan Fan, Jun Huang, Xiaoguang Mei, and Yong Ma. "SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer", IEEE/CAA Journal of Automatica Sinica, 9(7), pp. 1200-1217, Jul. 2022.
Paper: SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer
Code: /Linfeng-Tang/SwinFusion
Article: PIAFusion: A progressive infrared and visible image fusion network based on illumination aware 【Deep Learning】
Cite as: Tang, Linfeng, Jiteng Yuan, Hao Zhang, Xingyu Jiang, and Jiayi Ma. "PIAFusion: A progressive infrared and visible image fusion network based on illumination aware." Information Fusion 83 (2022): 79-92.
Paper: PIAFusion: A progressive infrared and visible image fusion network based on illumination aware
Code: /Linfeng-Tang/PIAFusion
Article: Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection 【Deep Learning】【High-Level-Vision-Task Driven】
Cite as: Liu, Jinyuan, Xin Fan, Zhanbo Huang, Guanyao Wu, Risheng Liu, Wei Zhong, and Zhongxuan Luo. "Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5802-5811. 2022.
Paper: Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection
Code: /JinyuanLiu-CV/TarDAL
Article: Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration 【Deep Learning】【Registration and Fusion】
Cite as: Wang, Di, Jinyuan Liu, Xin Fan, and Risheng Liu. "Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration." arXiv preprint arXiv:2205.11876 (2022).
Paper: Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration
Code: /wdhudiekou/UMF-CMGR
【2021】
Article: STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection 【Deep Learning】【Salient Target Mask】
Cite as: J. Ma, L. Tang, M. Xu, H. Zhang and G. Xiao, "STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection," IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-13.
Paper: STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection
Code:/Linfeng-Tang/STDFusionNet
Article: SDNet: A Versatile Squeeze-and-Decomposition Network for Real-Time Image Fusion 【Deep Learning】【General Image Fusion Framework】
Cite as: Zhang, H. and Ma, J., 2021. SDNet: A versatile squeeze-and-decomposition network for real-time image fusion. International Journal of Computer Vision, 129(10), pp.2761-2785.
Paper: SDNet: A versatile squeeze-and-decomposition network for real-time image fusion
Code:/HaoZhang1018/SDNet
Article: Image fusion meets deep learning: A survey and perspective 【Deep Learning】【Survey】
Cite as: Zhang, Hao, Han Xu, Xin Tian, Junjun Jiang, and Jiayi Ma. "Image fusion meets deep learning: A survey and perspective." Information Fusion 76 (2021): 323-336.
Paper: Image fusion meets deep learning: A survey and perspective
Article: Classification Saliency-Based Rule for Visible and Infrared Image Fusion 【Deep Learning】【Learnable Fusion Rule】
Cite as: Xu, Han, Hao Zhang, and Jiayi Ma. “Classification saliency-based rule for visible and infrared image fusion.” IEEE Transactions on Computational Imaging 7 (2021): 824-836.
Paper: Classification Saliency-Based Rule for Visible and Infrared Image Fusion
Code: /hanna-xu/CSF
Article: GANMcC: A Generative Adversarial Network with Multiclassification Constraints for Infrared and Visible Image Fusion 【Deep Learning】【GAN】
Cite as: Ma, Jiayi, Hao Zhang, Zhenfeng Shao, Pengwei Liang, and Han Xu. “GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion.” IEEE Transactions on Instrumentation and Measurement 70 (2020): 1-14.
Paper: GANMcC: A Generative Adversarial Network with Multiclassification Constraints for Infrared and Visible Image Fusion
Code: /HaoZhang1018/GANMcC
Article: A Bilevel Integrated Model with Data-Driven Layer Ensemble for Multi-Modality Image Fusion 【Deep Learning】
Cite as: Liu, Risheng, Jinyuan Liu, Zhiying Jiang, Xin Fan, and Zhongxuan Luo. "A bilevel integrated model with data-driven layer ensemble for multi-modality image fusion." IEEE Transactions on Image Processing 30 (2020): 1261-1274.
Paper: A Bilevel Integrated Model with Data-Driven Layer Ensemble for Multi-Modality Image Fusion
Article: RFN-Nest: An end-to-end residual fusion network for infrared and visible images 【Deep Learning】【Multi-Scale】
Cite as: Li, Hui, Xiao-Jun Wu, and Josef Kittler. “RFN-Nest: An end-to-end residual fusion network for infrared and visible images.” Information Fusion 73 (2021): 72-86.
Paper: RFN-Nest: An end-to-end residual fusion network for infrared and visible images
Code: /hli1221/imagefusion-rfn-nest
Article: DRF: Disentangled Representation for Visible and Infrared Image Fusion 【Deep Learning】【Disentangled Representation Learning】
Cite as: Xu, Han, Xinya Wang, and Jiayi Ma. “DRF: Disentangled representation for visible and infrared image fusion.” IEEE Transactions on Instrumentation and Measurement 70 (2021): 1-13.
Paper: DRF: Disentangled Representation for Visible and Infrared Image Fusion
Code: /hanna-xu/DRF
Article: RXDNFuse: A aggregated residual dense network for infrared and visible image fusion 【Deep Learning】
Cite as: Long, Yongzhi, Haitao Jia, Yida Zhong, Yadong Jiang, and Yuming Jia. “RXDNFuse: a aggregated residual dense network for infrared and visible image fusion.” Information Fusion 69 (2021): 128-141.
Paper: RXDNFuse: a aggregated residual dense network for infrared and visible image fusion
Article: Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion 【Deep Learning】【Meta-Learning】
Cite as: Li, Huafeng, Yueliang Cen, Yu Liu, Xun Chen, and Zhengtao Yu. “Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion.” IEEE Transactions on Image Processing 30 (2021): 4070-4083.
Paper: Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion
Article: Infrared and Visible Image Fusion Based on Deep Decomposition Network and Saliency Analysis 【Deep Learning】【Deep Decomposition Network】
Cite as: Jian, L., Rayhana, R., Ma, L., Wu, S., Liu, Z. and Jiang, H., 2021. Infrared and Visible Image Fusion Based on Deep Decomposition Network and Saliency Analysis. IEEE Transactions on Multimedia.
Paper: Infrared and Visible Image Fusion Based on Deep Decomposition Network and Saliency Analysis
【2020】
1. Article: U2Fusion: A Unified Unsupervised Image Fusion Network 【Deep Learning】【General Image Fusion】
Cite as:Xu H, Ma J, Jiang J, et al. U2Fusion: A Unified Unsupervised Image Fusion Network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
Paper: U2Fusion: A Unified Unsupervised Image Fusion Network
Code:/jiayi-ma/U2Fusion
2. Article: Deep Convolutional Neural Network for Multi-modal Image Restoration and Fusion 【Deep Learning】【Image Decomposition】
Cite as: Deng X, Dragotti P L. Deep convolutional neural network for multi-modal image restoration and fusion[J]. IEEE transactions on pattern analysis and machine intelligence, 2020.
Paper:Deep convolutional neural network for multi-modal image restoration and fusion
Code:
3. Article: FusionDN: A Unified Densely Connected Network for Image Fusion 【Deep Learning】【General Image Fusion】
Cite as:Xu H, Ma J, Le Z, et al. FusionDN: A Unified Densely Connected Network for Image Fusion[C]//AAAI. 2020: 12484-12491.
Paper: FusionDN: A Unified Densely Connected Network for Image Fusion
Code:/jiayi-ma/FusionDN
4. Article: DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion 【Deep Learning】
Cite as:Ma J, Xu H, Jiang J, et al. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995.
Paper:DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion
Code:/jiayi-ma/DDcGAN
5. Article: Rethinking the Image Fusion: A Fast Unified Image Fusion Network based on Proportional Maintenance of Gradient and Intensity 【Deep Learning】【General Image Fusion】
Cite as:Zhang H, Xu H, Xiao Y, et al. Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity[C]//Proc. AAAI Conf. Artif. Intell. 2020.
Paper:Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity(PMGI)
Code: /HaoZhang1018/PMGI_AAAI2020
6. Article: Infrared and visible image fusion based on target-enhanced multiscale transform decomposition 【Multi-Scale Decomposition】
Cite as:Chen J, Li X, Luo L, et al. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition[J]. Information Sciences, 2020, 508: 64-78.
Paper:Infrared and visible image fusion based on target-enhanced multiscale transform decomposition
Code:/jiayi-ma/TE-MST
7. Article: AttentionFGAN: Infrared and Visible Image Fusion using Attention-based Generative Adversarial Networks 【Deep Learning】
Cite as: Li J, Huo H, Li C, et al. AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks[J]. IEEE Transactions on Multimedia, 2020.
Paper: AttentionFGAN: Infrared and Visible Image Fusion using Attention-based Generative Adversarial Networks
8. Article: MDLatLRR: A novel decomposition method for infrared and visible image fusion 【Multi-Scale Decomposition】
Cite as:Li H, Wu X, Kittler J, et al. MDLatLRR: A Novel Decomposition Method for Infrared and Visible Image Fusion[J]. IEEE Transactions on Image Processing, 2020: 4733-4746.
Paper:MDLatLRR: A novel decomposition method for infrared and visible image fusion
Code:/hli1221/imagefusion_mdlatlrr
9. Article: NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models 【Multi-Scale Decomposition】
Cite as:Li H, Wu X J, Durrani T. Nestfuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.
Paper:NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models
Code:/hli1221/imagefusion-nestfuse
10. Article: IFCNN: A general image fusion framework based on convolutional neural network 【Deep Learning】【General Image Fusion】
Cite as: Zhang Y, Liu Y, Sun P, et al. IFCNN: A General Image Fusion Framework Based on Convolutional Neural Network[J]. Information Fusion, 2020: 99-118.
Paper: IFCNN: A General Image Fusion Framework Based on Convolutional Neural Network
Code:/uzeful/IFCNN
11. Article: RXDNFuse: A aggregated residual dense network for infrared and visible image fusion 【Deep Learning】 (also listed under 2021 above)
Cite as: Long Y, Jia H, Zhong Y, et al. RXDNFuse: A aggregated residual dense network for infrared and visible image fusion[J]. Information Fusion, 69: 128-141.
Paper: RXDNFuse: A aggregated residual dense network for infrared and visible image fusion
12. Article: Infrared and visible image fusion via detail preserving adversarial learning 【Deep Learning】【Generative Adversarial Network】
Cite as: Ma J, Liang P, Yu W, et al. Infrared and visible image fusion via detail preserving adversarial learning[J]. Information Fusion, 2020, 54: 85-98.
Paper: Infrared and visible image fusion via detail preserving adversarial learning
Code:/jiayi-ma/ResNetFusion
13. Article: SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion 【Deep Learning】
Cite as: Jian L, Yang X, Liu Z, et al. SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 1-15.
Paper: SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion
Code:/jianlihua123/SEDRFuse
14. Article: VIF-Net: an unsupervised framework for infrared and visible image fusion 【Deep Learning】
Cite as: Hou R, Zhou D, Nie R, et al. VIF-Net: an unsupervised framework for infrared and visible image fusion[J]. IEEE Transactions on Computational Imaging, 2020, 6: 640-651.
Paper: VIF-Net: an unsupervised framework for infrared and visible image fusion
Code:/Laker2423/VIF-NET
15. Article: Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance 【Deep Learning】
Cite as: Li J, Huo H, Liu K, et al. Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance[J]. Information Sciences, 2020, 529: 28-41.
Paper:Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance
16. Article: Multigrained Attention Network for Infrared and Visible Image Fusion 【Deep Learning】
Cite as: Li J, Huo H, Li C, et al. Multigrained Attention Network for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 1-12.
Paper:Multigrained Attention Network for Infrared and Visible Image Fusion
【2019】
1. Article: FusionGAN: A generative adversarial network for infrared and visible image fusion 【Deep Learning】
Cite as:Jiayi Ma, Wei Yu, Pengwei Liang, Chang Li, and Junjun Jiang. FusionGAN: A generative adversarial network for infrared and visible image fusion, Information Fusion, 48, pp. 11-26, Aug. 2019.
Paper:/10.1016/.2018.09.004
Code:/jiayi-ma/FusionGAN
Authors:
Jiayi Ma (马佳义), Wuhan University.
Homepages:
/people/jiayima/ (researcher homepage)
/jiayima/
GitHub: /jiayi-ma
Chang Li (李畅), Lecturer, Hefei University of Technology.
Homepage: /people/lichang/ (researcher homepage)
(Worth noting: the Data section of his homepage collects links to hyperspectral image datasets.)
GitHub: /Chang-Li-HFUT
Junjun Jiang (江俊君), Professor, Harbin Institute of Technology.
Homepages:
/people/jiangjunjun/ (researcher homepage)
/jiangjunjun (HIT homepage)
/citations?user=WNH2_rgAAAAJ&hl=zh-CN&oi=ao (Google Scholar)
/junjun-jiang (GitHub)
2. Article: Infrared and visible image fusion methods and applications: A survey 【Survey】
Cite as: Jiayi Ma, Yong Ma, and Chang Li. "Infrared and visible image fusion methods and applications: A survey", Information Fusion, 45, pp. 153-178, 2019.
Paper:/10.1016/.2018.02.004
Authors: Jiayi Ma (马佳义), Yong Ma (马泳), and Chang Li (李畅), Wuhan University.
【2018】
1. Article: Infrared and Visible Image Fusion with ResNet and zero-phase component analysis 【Deep Learning】
Cite as:Li H , Wu X J , Durrani T S . Infrared and Visible Image Fusion with ResNet and zero-phase component analysis[J]. 2018.
Paper:/abs/1806.07119
Code:/hli1221/imagefusion_resnet50
Authors:
Hui Li (李辉), Ph.D. student, Jiangnan University (advisor: Xiao-Jun Wu).
Homepage:
GitHub:
/hli1221 (primary GitHub)
/exceptionLi
Xiao-Jun Wu (吴小俊):
Homepage: /info/1059/ (university faculty page)
/citations?user=5IST34sAAAAJ&hl=zh-CN&oi=sra (Google Scholar)
2. Article: DenseFuse: A Fusion Approach to Infrared and Visible Images 【Deep Learning】
Cite as: H. Li, X. J. Wu, DenseFuse: A Fusion Approach to Infrared and Visible Images, IEEE Trans. Image Process. (Early Access), pp. 1-1, 2018.
Paper: /abs/1804.08361 (DOI: 10.1109/TIP.2018.2887342)
Code:/hli1221/imagefusion_densefuse
Another implementation: /srinu007/MultiModelImageFusion (the package also includes MATLAB objective evaluation metric functions for image fusion)
Author: Hui Li, Ph.D. student, Jiangnan University (advisor: Xiao-Jun Wu).
3. Article: Infrared and Visible Image Fusion using a Deep Learning Framework 【Deep Learning】
Cite as: Li H, Wu X J, Kittler J. Infrared and Visible Image Fusion using a Deep Learning Framework[C]//Pattern Recognition (ICPR), 2018 24th International Conference on. IEEE, 2018: 2705-2710.
Paper:/pdf/1804.06992
(DOI: 10.1109/ICPR.2018.8546006)
Code:/hli1221/imagefusion_deeplearning
Author: Hui Li, Ph.D. student, Jiangnan University (advisor: Xiao-Jun Wu).
4. Article: Infrared and visible image fusion using Latent Low-Rank Representation 【LRR for Image Fusion】
Cite as:Li H, Wu X J. Infrared and visible image fusion using Latent Low-Rank Representation[J]. 2018.
Paper:/abs/1804.08992
Code:/exceptionLi/imagefusion_Infrared_visible_latlrr
Author: Hui Li, Ph.D. student, Jiangnan University (advisor: Xiao-Jun Wu).
5. Article: Infrared and visible image fusion using a novel deep decomposition method 【Deep Learning】
Cite as: Li H, Wu X. Infrared and visible image fusion using a novel deep decomposition method[J]. arXiv: Computer Vision and Pattern Recognition, 2018.
Paper:/abs/1811.02291
Code:/hli1221/imagefusion_deepdecomposition
Author: Hui Li, Ph.D. student, Jiangnan University (advisor: Xiao-Jun Wu).
6. Article: Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain
Cite as: Jin X, Jiang Q, Yao S, et al. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain[J]. Infrared Physics & Technology, 2018: 1-12.
Paper:/10.1016/.2017.10.004
Code:/jinxinhuo/SWT_DCT_SF-for-image-fusion
/matlabcentral/fileexchange/68674-infrared-and-visual-image-fusion-method-based-on-swt_dct_sf?s_tid=FX_rc2_behav
Author: Xin Jin (金鑫), Ph.D. student (class of 2013), Yunnan University.
7. Article: Multi-scale decomposition based fusion of infrared and visible image via total variation and saliency analysis
Cite as: Ma T, Ma J, Fang B, et al. Multi-scale decomposition based fusion of infrared and visible image via total variation and saliency analysis[J]. Infrared Physics & Technology, 2018: 154-162.
Paper:/10.1016/.2018.06.002
Author: Siwen Quan (权思文)
Homepage: /view/siwenquanshomepage
/citations?user=9CS008EAAAAJ&hl=zh-CN&oi=sra (Google Scholar)
8. Article: Visible and infrared image fusion using DTCWT and adaptive combined clustered dictionary
Cite as: Aishwarya N, Thangammal C B. Visible and infrared image fusion using DTCWT and adaptive combined clustered dictionary[J]. Infrared Physics & Technology, 2018: 300-309.
Paper:/10.1016/.2018.08.013
9. Article: Infrared and visible image fusion based on convolutional neural network model and saliency detection via hybrid l0-l1 layer decomposition 【CNN】【Deep Learning】【Saliency Detection】
Cite as: Liu D, Zhou D, Nie R, et al. Infrared and visible image fusion based on convolutional neural network model and saliency detection via hybrid l0-l1 layer decomposition[J]. Journal of Electronic Imaging, 2018, 27(06).
Paper:/10.1117/.27.6.063036
Authors:
Dongming Zhou (周冬明), Professor and doctoral supervisor, Yunnan University.
Rencan Nie (聂仁灿), Associate Professor and master's supervisor, School of Information Science and Engineering, Yunnan University.
【2017】
1. Article: Fusion of visible and infrared images using global entropy and gradient constrained regularization
Paper:/10.1016/.2017.01.012
Author: Jufeng Zhao (赵巨峰), Associate Professor and master's supervisor, Hangzhou Dianzi University.
Homepage: /zhaojufeng/
2. Article: A survey of infrared and visual image fusion methods 【Survey】
Paper:/10.1016/.2017.07.010
Authors:
Xin Jin (金鑫), Ph.D. student (class of 2013), Yunnan University.
Shaowen Yao (姚邵文), Dean of the School of Software, Yunnan University.
Dongming Zhou (周冬明), Professor and doctoral supervisor, Yunnan University.
Rencan Nie (聂仁灿), Associate Professor and master's supervisor, School of Information Science and Engineering, Yunnan University.
Kangjian He (贺康建), Ph.D. student (class of 2014), Yunnan University.
3. Article: Infrared and Visual Image Fusion through Infrared Feature Extraction and Visual Information Preservation
Cite as:
Yu Zhang, Lijia Zhang, Xiangzhi Bai and Li Zhang. Infrared and Visual Image Fusion through Infrared Feature Extraction and Visual Information Preservation, Infrared Physics & Technology 83 (2017) 227-237.
Paper:/10.1016/.2017.05.007(DOI:10.1016/.2017.05.007)
Code:/uzeful/Infrared-and-Visual-Image-Fusion-via-Infrared-Feature-Extraction-and-Visual-Information-Preservation
Author: Yu Zhang (张余), Ph.D., Tsinghua University.
Homepages:
/site/uze1989/
/
GitHub: /uzeful
4. Article: Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility
Cite as:
Vanmali A V , Gadre V M . Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility[J]. Sādhanā, 2017, 42(7):1063-1082.
Paper: (DOI:10.1007/s12046-017-0673-1)
Code:/file/d/0B-hGkOHjv3gzVnU5Slg2YWZRWVE/view?usp=sharing
5. Article: Infrared and visible image fusion based on visual saliency map and weighted least square optimization
Cite as:
Ma J, Zhou Z, Wang B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82:8-17.
Paper:/10.1016/.2017.02.005(DOI:10.1016/.2017.02.005)
Code:/JinleiMa/Image-fusion-with-VSM-and-WLS
Author: Jinlei Ma (马金磊), Beijing Institute of Technology.
GitHub: /JinleiMa
6. Article: Infrared and visible image fusion method based on saliency detection in sparse domain
Cite as:
Liu C H , Qi Y , Ding W R . Infrared and visible image fusion method based on saliency detection in sparse domain[J]. Infrared Physics & Technology, 2017:S1350449516307150.
Paper:/10.1016/.2017.04.018(DOI:10.1016/.2017.04.018)
7. Article: Infrared and visible image fusion with convolutional neural networks 【Deep Learning】【CNN】
Cite as:
Yu Liu, Xun Chen, Juan Cheng, Hu Peng, Zengfu Wang,“Infrared and visible image fusion with convolutional neural networks”, International Journal of Wavelets,Multiresolution and Information Processing, vol. 16, no. 3, 1850018: 1-20, 2018.
Paper:/doi/abs/10.1142/S0219691318500182
/publication/321799375_Infrared_and_visible_image_fusion_with_convolutional_neural_networks
(DOI:10.1142/S0219691318500182)
Code: /people/liuyu1/ (Yu Liu)
Authors:
Yu Liu (刘羽)
Xun Chen (陈勋), Professor and doctoral supervisor
/~xunchen/
/citations?user=aBnUWyQAAAAJ&hl=zh-CN&oi=sra (Google Scholar)
Juan Cheng (成娟)
/people/chengjuanhfut/
/citations?user=fMOOhH8AAAAJ&hl=zh-CN&oi=sra (Google Scholar)
8. Article: Infrared and visible image fusion based on total variation and augmented Lagrangian
Paper:/10.1364/JOSAA.34.001961
Authors: Hanqi Guo, Yong Ma (马泳), Xiaoguang Mei (梅晓光), and Jiayi Ma (马佳义), Wuhan University.
9. Article: Fusion of infrared-visible images using improved multi-scale top-hat transform and suitable fusion rules
Paper:/10.1016/.2017.01.013
10. Article: Fusion of infrared and visible images based on nonsubsampled contourlet transform and sparse K-SVD dictionary learning
Paper:(DOI:10.1016/.2017.01.026)
Author: Jiajun Cai, Wuhan University.
/citations?user=1jAmUp0AAAAJ&hl=zh-CN&oi=sra (Google Scholar)
【2016】
1. Article: Infrared and visible image fusion via gradient transfer and total variation minimization
Cite as:
Jiayi Ma, Chen Chen, Chang Li, and Jun Huang. Infrared and visible image fusion via gradient transfer and total variation minimization, Information Fusion, 31, pp. 100-109, Sept. 2016.
Paper:/10.1016/.2016.02.001
Code:/jiayi-ma/GTF
(The code package also provides the eight comparison algorithms used in the paper's experiments, as well as MATLAB objective evaluation metric functions for image fusion.)
Author: Jiayi Ma (马佳义), Wuhan University.
Homepage: /people/jiayima/
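For intuition, the gradient-transfer idea in the GTF entry above can be summarized as a variational objective. The notation here is ours and this is a rough sketch of the formulation, not a verbatim copy from the paper: the fused image x keeps the infrared image u's pixel intensities while inheriting the visible image v's gradients:

```latex
E(\mathbf{x}) \;=\; \frac{1}{p}\,\lVert \mathbf{x} - \mathbf{u} \rVert_p^p
\;+\; \lambda\,\lVert \nabla \mathbf{x} - \nabla \mathbf{v} \rVert_1
```

The first (data) term preserves the thermal radiation distribution of the infrared image; the second, total-variation-style term transfers the visible image's gradient structure; λ trades the two off.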
2. Article: Multi-window visual saliency extraction for fusion of visible and infrared images
Cite as:
Zhao J , Gao X , Chen Y , et al. Multi-window visual saliency extraction for fusion of visible and infrared images[J]. Infrared Physics & Technology, 2016, 76:295-302.
Paper:/10.1016/.2016.01.020
Author: Jufeng Zhao (赵巨峰), Associate Professor and master's supervisor, Hangzhou Dianzi University.
Homepage: /zhaojufeng/
3. Article: Two-scale image fusion of visible and infrared images using saliency detection
Cite as:
Bavirisetti D P , Dhuli R . Two-scale image fusion of visible and infrared images using saliency detection[J]. Infrared Physics & Technology, 2016, 76:52-64.
Paper:/10.1016/.2016.01.009
Code:/matlabcentral/fileexchange/63571-two-scale-image-fusion-of-visible-and-infrared-images-using-saliency-detection
Author: Durga Prasad Bavirisetti
Homepage: /view/durgaprasadbavirisetti/home
(The Datasets menu at the top right of the homepage provides various image fusion datasets.)
/citations?user=hc0VdQQAAAAJ&hl=zh-CN&oi=sra (Google Scholar)
4. Article: Fusion of Infrared and Visible Sensor Images Based on Anisotropic Diffusion and Karhunen-Loeve Transform
Paper:/document/7264981
(DOI: 10.1109/JSEN.2015.2478655)
Code:/matlabcentral/fileexchange/63591-fusion-of-infrared-and-visible-sensor-images-based-on-anisotropic-diffusion-and-kl-transform?s_tid=FX_rc2_behav
Author: Durga Prasad Bavirisetti
Homepage: /view/durgaprasadbavirisetti/home
(The Datasets menu at the top right of the homepage provides various image fusion datasets.)
/citations?user=hc0VdQQAAAAJ&hl=zh-CN&oi=sra (Google Scholar)
5. Article: Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters 【HMSD】
Cite as:
Zhiqiang Zhou et al. "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters", Information Fusion, 30, 2016
Paper:/10.1016/.2015.11.003
Code:/bitzhouzq/Hybrid-MSD-Fusion
or: /publication/304246314
Author: Zhiqiang Zhou (周志强), Associate Professor, School of Automation, Beijing Institute of Technology.
Homepage: /szdw/jsdw/mssbyznxtyjs_20150206131517284801/20150206115445413049_20150206131517284801/
GitHub: /bitzhouzq
6. Article: Fusion of infrared and visible images for night-vision context enhancement
Paper:/10.1364/AO.55.006480
Code:/bitzhouzq/Context-Enhance-via-Fusion
Author: Zhiqiang Zhou (周志强), Associate Professor, School of Automation, Beijing Institute of Technology.
Homepage: /szdw/jsdw/mssbyznxtyjs_20150206131517284801/20150206115445413049_20150206131517284801/
GitHub: /bitzhouzq
【2015】
1. Article: Attention-based hierarchical fusion of visible and infrared images
Paper:/10.1016/.2015.08.120
Authors:
Yanfei Chen (陈艳菲), Associate Professor and master's supervisor.
Homepage: /info/1067/ (faculty page)
Nong Sang (桑农), Professor and doctoral supervisor, School of Automation, Huazhong University of Science and Technology.
Homepage: /info/1154/ (faculty page)
【2014】
1. Article: Fusion method for infrared and visible images by using non-negative sparse representation 【NNSR】
Cite as:
Wang J , Peng J , Feng X , et al. Fusion method for infrared and visible images by using non-negative sparse representation[J]. Infrared Physics & Technology, 2014, 67:477-489.
Paper:/10.1016/.2014.09.019
Authors: Jun Wang (王珺), Jinye Peng (彭进业), Xiaoyi Feng (冯晓毅), and Guiqing He (何贵青), Northwestern Polytechnical University.
2. Article: The infrared and visible image fusion algorithm based on target separation and sparse representation
Cite as:
Lu X , Zhang B , Zhao Y , et al. The infrared and visible image fusion algorithm based on target separation and sparse representation[J]. Infrared Physics & Technology, 2014, 67:397-407.
Paper:/10.1016/.2014.09.007
Authors: Xiaoqi Lu (吕晓琪), Baohua Zhang (张宝华), and Ying Zhao (赵瑛), Inner Mongolia University of Science and Technology.
Xiaoqi Lu, Professor and doctoral supervisor, School of Information Engineering, Inner Mongolia University of Science and Technology.
Homepage: /info/1063/
Baohua Zhang, Associate Professor and master's supervisor, School of Information Engineering, Inner Mongolia University of Science and Technology.
Homepage: /info/1063/
Ying Zhao, Lecturer and master's supervisor, School of Information Engineering, Inner Mongolia University of Science and Technology.
Homepage: /info/1063/
======================================================================
PS: Some functions used in older code may have been removed as MATLAB has been upgraded, causing runtime errors. One workaround is to keep an older MATLAB version installed and, whenever one of the removed functions is needed, copy its source file into the code folder.
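Several of the repos above bundle MATLAB objective evaluation metric functions. For readers working in Python, two of the most common metrics, entropy (EN) and spatial frequency (SF), are easy to reimplement. Below is a minimal NumPy sketch; the function names and the 8-bit grayscale assumption are ours, not taken from any of the listed repos:

```python
import numpy as np

def entropy(img):
    """Shannon entropy (EN) of an 8-bit grayscale image, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()          # gray-level probability distribution
    p = p[p > 0]                   # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """Spatial frequency (SF): RMS of horizontal and vertical pixel differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

Larger EN and SF values generally indicate a fused image carrying more information and sharper detail, which is why these two metrics appear in almost every comparison table in the papers above.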
Given the author's limited expertise, some of the latest papers have not yet been collected; discussion and exchange are welcome!
For questions, contact: 2458707789@; please note your name and affiliation.