Improving RGB-D Salient Object Detection via Modality-Aware Decoder
Cited by: 33
Authors:
Song, Mengke [1,2]; Song, Wenfeng [3]; Yang, Guowei [4]; Chen, Chenglizhao [1,2]
Affiliations:
[1] China Univ Petr East China, Coll Comp Sci & Technol, Qingdao 266580, Peoples R China
[2] China Univ Petr East China, Qingdao Inst Software, Qingdao 266580, Peoples R China
[3] Beijing Informat Sci & Technol Univ, Comp Sch, Beijing 100192, Peoples R China
[4] Qingdao Univ, Sch Elect Informat, Qingdao 266071, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Decoding;
Object detection;
Training;
Task analysis;
Saliency detection;
Image segmentation;
Feature extraction;
RGB-D salient object detection;
modality-aware fusion;
deep learning;
GRAPH CONVOLUTION NETWORK;
IMAGE;
ATTENTION;
FUSION;
DOI:
10.1109/TIP.2022.3205747
CLC number:
TP18 [Artificial Intelligence Theory];
Discipline classification codes:
081104; 0812; 0835; 1405
Abstract:
Most existing RGB-D salient object detection (SOD) methods focus primarily on cross-modal and cross-level saliency fusion, which has proved efficient and effective. However, these methods still have a critical limitation: their fusion patterns, typically selective combinations of characteristics and their variations, depend too heavily on the network's non-linear adaptability. In such methods, the balance between RGB and D (depth) is formulated individually over intermediate feature slices, so the relation at the modality level may not be learned properly. The optimal RGB-D combination differs across RGB-D scenarios, and the exact complementary status is frequently determined by multiple modality-level factors, such as D quality, the complexity of the RGB scene, and the degree of harmony between them. Existing approaches may therefore struggle to achieve further performance breakthroughs, since their methodologies are relatively insensitive to the modality level. To address this problem, this paper presents the Modality-aware Decoder (MaD). Its key technical innovations include a series of feature embedding, modality reasoning, and feature back-projecting and collecting strategies, all of which upgrade the widely-used multi-scale, multi-level decoding process to be modality-aware. Our MaD achieves competitive performance over other state-of-the-art (SOTA) models without any fancy tricks in the decoder's design. Codes and results will be publicly available at https://github.com/MengkeSong/MaD.
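The abstract contrasts slice-wise weighting with reasoning at the modality level, where a global view of each modality decides the RGB-vs-depth balance. The following minimal PyTorch sketch is an illustration of that general idea only: the class name, the pooling-plus-MLP gate, and all parameters are assumptions for exposition, not the authors' actual MaD implementation.

```python
# Hypothetical modality-level gating sketch (illustrative; not the MaD code).
import torch
import torch.nn as nn

class ModalityAwareFusion(nn.Module):
    """Fuse RGB and depth features with a modality-level gate.

    Rather than weighting individual feature slices, a global descriptor of
    each modality drives per-channel gates, so the RGB-vs-depth trade-off is
    decided once per modality.
    """
    def __init__(self, channels: int):
        super().__init__()
        # Global pooling summarizes each modality into one descriptor.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # A small MLP reasons over both descriptors jointly and predicts
        # per-channel gates for the two modalities.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, 2 * channels),
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = rgb.shape
        # Modality-level descriptors, concatenated to shape (B, 2C).
        desc = torch.cat([self.pool(rgb).flatten(1),
                          self.pool(depth).flatten(1)], dim=1)
        gates = self.gate(desc).view(b, 2, c)
        # Softmax over the modality axis yields complementary weights.
        w = torch.softmax(gates, dim=1).unsqueeze(-1).unsqueeze(-1)
        return w[:, 0] * rgb + w[:, 1] * depth

# Example: fuse 64-channel features from the RGB and depth streams.
fused = ModalityAwareFusion(64)(torch.randn(2, 64, 32, 32),
                                torch.randn(2, 64, 32, 32))
```

Because the gates are normalized across the modality axis, a low-quality depth map can be globally suppressed in favor of RGB, which is the kind of modality-level sensitivity the abstract argues slice-wise fusion lacks.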
Pages: 6124-6138
Page count: 15