Modal complementary fusion network for RGB-T salient object detection

Citations: 0
Authors
Shuai Ma
Kechen Song
Hongwen Dong
Hongkun Tian
Yunhui Yan
Affiliations
[1] Northeastern University,School of Mechanical Engineering & Automation
[2] Northeastern University,Key Laboratory of Vibration and Control of Aero
Source
Applied Intelligence | 2023 / Vol. 53
Keywords
RGB-T salient object detection; Image quality; Modality reweight; Spatial complementary fusion
DOI
Not available
Abstract
RGB-T salient object detection (SOD) combines thermal infrared and RGB images to overcome the light sensitivity of RGB images in low-light conditions. However, the quality of RGB-T images can be unreliable in complex imaging scenarios, and directly fusing these low-quality images leads to sub-optimal detection results. In this paper, we propose a novel Modal Complementary Fusion Network (MCFNet) to alleviate the contamination effect of low-quality images from both global and local perspectives. Specifically, we design a modal reweight module (MRM) that evaluates the global quality of images and adaptively reweights RGB-T features by explicitly modelling the interdependencies between RGB and thermal images. Furthermore, we propose a spatial complementary fusion module (SCFM) to explore complementary local regions between RGB-T images and selectively fuse multi-modal features. Finally, multi-scale features are fused to obtain the salient detection result. Experiments on three RGB-T benchmark datasets demonstrate that MCFNet achieves outstanding performance compared with the latest state-of-the-art methods. We also achieve competitive results on RGB-D SOD tasks, which demonstrates the generalizability of our method. The source code is released at https://github.com/dotaball/MCFNet.
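The global reweighting idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (MRM uses learned interdependency modelling); here a global-average-pooled descriptor per modality, passed through a softmax, stands in for the learned quality score. The function name `modal_reweight` and this scoring rule are illustrative assumptions.

```python
import numpy as np

def modal_reweight(rgb_feat: np.ndarray, t_feat: np.ndarray):
    """Toy modal reweighting: pool each modality to a global descriptor,
    softmax the two descriptors into weights, and rescale the features.
    A stand-in for the learned MRM described in the abstract."""
    # Global average pooling over all spatial positions -> one scalar
    g_rgb = rgb_feat.mean()
    g_t = t_feat.mean()
    # Softmax over the two global descriptors: the "higher-quality"
    # (here, higher-response) modality receives the larger weight
    e = np.exp([g_rgb, g_t])
    w_rgb, w_t = e / e.sum()
    # Reweight each modality's feature map by its global weight
    return w_rgb * rgb_feat, w_t * t_feat
```

Feeding a strong-response RGB map and a flat thermal map through this sketch assigns the RGB branch the larger weight, mimicking how a quality-aware fusion suppresses the weaker modality before fusion.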
Pages: 9038–9055
Page count: 17