C2DFNet: Criss-Cross Dynamic Filter Network for RGB-D Salient Object Detection
Cited by: 50
Authors:
Zhang, Miao [1]
Yao, Shunyu [2]
Hu, Beiqi [2]
Piao, Yongri [3]
Ji, Wei [4]
Affiliations:
[1] Dalian Univ Technol, DUT RU Int Sch Informat Sci & Software Engn, Key Lab Ubiquitous Network & Serv Software Liaonin, Dalian 116024, Peoples R China
[2] Dalian Univ Technol, Sch Software Technol, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Informat & Commun Engn, Dalian 116024, Peoples R China
[4] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T6G 2R3, Canada
Funding:
National Natural Science Foundation of China;
Keywords:
Dynamic filter;
fusion network;
RGB-D salient object detection;
DOI:
10.1109/TMM.2022.3187856
CLC number:
TP [Automation Technology, Computer Technology];
Discipline classification code:
0812;
Abstract:
The ability to deal with intra- and inter-modality features has been critical to the development of RGB-D salient object detection. While many works have advanced in leaps and bounds in this field, most existing methods have not delved into the inherent differences between RGB and depth data, owing to the widely adopted conventional convolution, in which fixed-parameter kernels are applied during inference. To promote intra- and inter-modality interaction conditioned on various scenarios, in which RGB and depth data are processed independently and later fused interactively, we develop a new insight and a better model. In this paper, we introduce a criss-cross dynamic filter network built by decoupling dynamic convolution. First, we propose a Model-specific Dynamic Enhanced Module (MDEM) that dynamically enhances intra-modality features with global context guidance. Second, we propose a Scene-aware Dynamic Fusion Module (SDFM) to realize dynamic feature selection between the two modalities. As a result, our model achieves accurate predictions of salient objects. Extensive experiments demonstrate that our method achieves competitive performance against 28 state-of-the-art RGB-D methods on 7 public datasets.
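To make the dynamic-filter idea from the abstract concrete, the sketch below shows one generic way such a module could be built. It is a minimal illustrative assumption, not the authors' MDEM/SDFM implementation: the module name DynamicFilterFusion, the per-sample depthwise kernels, and the residual connection are hypothetical choices. The only idea taken from the abstract is that convolution kernels are generated at inference time from one modality's global context and applied to the other modality's features, rather than using fixed-parameter kernels.

```python
# Minimal sketch of dynamic-filter cross-modality fusion (PyTorch assumed).
# Hypothetical illustration only; NOT the C2DFNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicFilterFusion(nn.Module):
    """Generate per-sample depthwise kernels from the depth branch and
    apply them to the RGB branch (hypothetical module for illustration)."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Predict one k x k kernel per channel from globally pooled depth features.
        self.kernel_head = nn.Linear(channels, channels * kernel_size * kernel_size)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb_feat.shape
        k = self.kernel_size
        # Global context from the depth modality guides kernel generation.
        context = F.adaptive_avg_pool2d(depth_feat, 1).flatten(1)      # (b, c)
        kernels = self.kernel_head(context).view(b * c, 1, k, k)       # per-sample, per-channel kernels
        # Grouped convolution applies each sample's own kernels (depthwise, groups = b * c).
        x = rgb_feat.reshape(1, b * c, h, w)
        out = F.conv2d(x, kernels, padding=k // 2, groups=b * c)
        return out.view(b, c, h, w) + rgb_feat                         # residual fusion

if __name__ == "__main__":
    m = DynamicFilterFusion(channels=32)
    rgb = torch.randn(2, 32, 64, 64)
    depth = torch.randn(2, 32, 64, 64)
    print(m(rgb, depth).shape)  # torch.Size([2, 32, 64, 64])
```

In this sketch the kernels depend on the input scene, so the filtering adapts at inference time; a fixed conventional convolution would use the same weights for every RGB-D pair, which is the limitation the abstract points to.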
Pages: 5142-5154
Number of pages: 13