CMFA_Net: A cross-modal feature aggregation network for infrared-visible image fusion

Infrared Physics & Technology (2021)

Abstract
•A channel-spatial-wise attention convolutional (CSAC) layer is proposed and integrated into the feature extractor to extract effective features and focus on the affinity regions shared by the two modalities (sketched below).
•A feature aggregation strategy based on the attention mechanism and the l1-norm is proposed to fuse the deep features appropriately (sketched below).
•We demonstrate that substituting the batch normalization (BN) layer with group normalization (GN) accelerates training of the network while preventing overfitting.
•A dedicated loss function, composed of SSIM-p and CGV terms, is imposed for training the model so that the fused image balances high quality with abundant background details.
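The exact CSAC design is not given in this abstract, so the following is only a minimal PyTorch sketch of a channel-spatial attention convolutional block: the class names (CSACBlock, ChannelAttention, SpatialAttention) and hyperparameters (reduction, kernel_size, groups) are illustrative assumptions following a generic CBAM-style layout, and the GroupNorm layer reflects the abstract's BN-to-GN substitution.

```python
# Minimal sketch of a channel-spatial attention convolutional block (assumed layout).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze global context per channel, then re-weight the channels."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))           # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))            # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w


class SpatialAttention(nn.Module):
    """Highlight salient spatial regions shared by the two modalities."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class CSACBlock(nn.Module):
    """Conv -> GroupNorm (replacing BatchNorm, per the abstract) -> channel and spatial attention."""
    def __init__(self, in_ch: int, out_ch: int, groups: int = 8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.GroupNorm(groups, out_ch)     # GN instead of BN
        self.act = nn.ReLU(inplace=True)
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.norm(self.conv(x)))
        return self.sa(self.ca(x))


if __name__ == "__main__":
    block = CSACBlock(in_ch=1, out_ch=32)            # single-channel IR or visible input
    y = block(torch.randn(2, 1, 64, 64))
    print(y.shape)                                   # torch.Size([2, 32, 64, 64])
```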
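The abstract also does not spell out the l1-norm aggregation rule, so the sketch below assumes the common activity-level scheme (as popularized by DenseFuse): per-pixel l1-norm of each modality's feature map, smoothed, then softmax-normalized into fusion weights. The function name l1_aggregate and the blur_kernel parameter are hypothetical.

```python
# Minimal sketch of an l1-norm based aggregation of IR and visible deep features,
# assuming feature maps of identical shape (B, C, H, W).
import torch
import torch.nn.functional as F


def l1_aggregate(phi_ir: torch.Tensor, phi_vis: torch.Tensor,
                 blur_kernel: int = 3) -> torch.Tensor:
    """Fuse two feature maps using l1-norm activity-level weights."""
    # Activity level: l1-norm across the channel dimension -> (B, 1, H, W)
    act_ir = phi_ir.abs().sum(dim=1, keepdim=True)
    act_vis = phi_vis.abs().sum(dim=1, keepdim=True)

    # Smooth the activity maps with average pooling to reduce noise.
    pad = blur_kernel // 2
    act_ir = F.avg_pool2d(act_ir, blur_kernel, stride=1, padding=pad)
    act_vis = F.avg_pool2d(act_vis, blur_kernel, stride=1, padding=pad)

    # Softmax over the two modalities gives per-pixel fusion weights.
    weights = torch.softmax(torch.cat([act_ir, act_vis], dim=1), dim=1)
    w_ir, w_vis = weights[:, :1], weights[:, 1:]
    return w_ir * phi_ir + w_vis * phi_vis


if __name__ == "__main__":
    fused = l1_aggregate(torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64))
    print(fused.shape)  # torch.Size([2, 32, 64, 64])
```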
Keywords
Cross-modal, Attention mechanism, Image fusion, Unsupervised learning, End-to-end network, Infrared–visible images