Perception-and-Regulation Network for Salient Object Detection

Cited by: 6
Authors
Zhu, Jinchao [1 ,2 ]
Zhang, Xiaoyu [1 ]
Fang, Xian [3 ]
Wang, Yuxuan [1 ]
Tan, Panlong [1 ]
Liu, Junnan [4 ]
Affiliations
[1] Nankai Univ, Coll Artificial Intelligence, Tianjin 300350, Peoples R China
[2] Tsinghua Univ, Dept Automat, BNRist, Beijing 100084, Peoples R China
[3] Nankai Univ, Coll Comp Sci, Tianjin 300350, Peoples R China
[4] Harbin Engn Univ, Coll Intelligent Syst Sci & Engn, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Semantics; Regulation; Object detection; Feature extraction; Convolution; Logic gates; Task analysis; Salient object detection; convolutional neural networks; attention mechanism; global perception; MODEL;
DOI
10.1109/TMM.2022.3210366
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Effective fusion of different types of features is the key to salient object detection (SOD). Most existing network structures are designed according to the subjective experience of scholars, and the feature fusion process does not consider the relationship between the fused features and the highest-level features. In this paper, we focus on the feature relationship and propose a novel global attention unit, termed the "perception-and-regulation" (PR) block, which adaptively regulates the feature fusion process by explicitly modelling the interdependencies between features. The perception part uses the structure of the fully connected layers in classification networks to learn the size and shape of objects. The regulation part selectively strengthens and weakens the features to be fused. An imitating eye observation (IEO) module is further employed to improve the global perception capability of the network. By imitating foveal and peripheral vision, the IEO can scrutinize highly detailed objects and organize a broad spatial scene to better segment objects. Extensive experiments on SOD datasets demonstrate that the proposed method performs favourably against 29 state-of-the-art methods.
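Since the abstract describes the PR block only at a high level, the following is a minimal, hypothetical PyTorch sketch of how such a unit could be wired, assuming an SE-style design: global pooling plus fully connected layers act as the perception part, and the resulting per-channel weights act as the regulation part applied before fusion. The class name PRBlockSketch, the reduction ratio of 16, and the weighted-addition fusion are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch only; NOT the authors' implementation.
    import torch
    import torch.nn as nn

    class PRBlockSketch(nn.Module):
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            # Perception part: squeeze global spatial context, then
            # fully connected layers (as in classification networks)
            # produce per-channel weights in [0, 1].
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.perceive = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, fused_feat: torch.Tensor, top_feat: torch.Tensor) -> torch.Tensor:
            # Perceive object size/shape cues from the highest-level features.
            b, c, _, _ = top_feat.shape
            w = self.perceive(self.pool(top_feat).view(b, c)).view(b, c, 1, 1)
            # Regulation part: selectively strengthen/weaken the features
            # to be fused, conditioned on the global perception.
            return fused_feat * w + top_feat

    if __name__ == "__main__":
        # Dummy usage with feature maps of matching shape; in practice the
        # highest-level features would first be upsampled to match.
        block = PRBlockSketch(channels=64)
        fused = torch.randn(2, 64, 32, 32)
        top = torch.randn(2, 64, 32, 32)
        print(block(fused, top).shape)  # torch.Size([2, 64, 32, 32])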
Pages: 6525-6537
Page count: 13
Related Papers
50 in total (entries [41]-[50] shown below)
  • [41] Feng, Mengyang; Lu, Huchuan; Yu, Yizhou. Residual Learning for Salient Object Detection. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29: 4696-4708.
  • [42] Li, Shuo; Liu, Fang; Jiao, Licheng; Liu, Xu; Chen, Puhua. Learning Salient Feature for Salient Object Detection Without Labels. IEEE TRANSACTIONS ON CYBERNETICS, 2023, 53 (02): 1012-1025.
  • [43] Wu, Zhenyu; Li, Shuai; Chen, Chenglizhao; Hao, Aimin; Qin, Hong. Deeper Look at Image Salient Object Detection: Bi-Stream Network With a Small Training Dataset. IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24: 73-86.
  • [44] Zheng, Tao; Li, Bo; Yao, Jiaxu. LHRNet: Lateral hierarchically refining network for salient object detection. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2019, 37 (02): 2503-2514.
  • [45] Li, Jia; Su, Jinming; Xia, Changqun; Ma, Mingcan; Tian, Yonghong. Salient Object Detection With Purificatory Mechanism and Structural Similarity Loss. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 6855-6868.
  • [46] Zhao, W.; Wang, H.; Liu, X. Edge Enhancing Network for Salient Object Detection. Tongji Daxue Xuebao/Journal of Tongji University, 2024, 52 (02): 293-302.
  • [47] Zhang, Lihe; Wu, Jie; Wang, Tiantian; Borji, Ali; Wei, Guohua; Lu, Huchuan. A Multistage Refinement Network for Salient Object Detection. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29: 3534-3545.
  • [48] Liu, Yun; Cheng, Ming-Ming; Zhang, Xin-Yu; Nie, Guang-Yu; Wang, Meng. DNA: Deeply Supervised Nonlinear Aggregation for Salient Object Detection. IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (07): 6131-6142.
  • [49] Yang, Jiejun; Wang, Liejun; Li, Yongming. Feature Refine Network for Salient Object Detection. SENSORS, 2022, 22 (12).
  • [50] Zhang, Xiaoqin; Xu, Yuewang; Wang, Tao; Liao, Tangfei. Multi-Prior Driven Network for RGB-D Salient Object Detection. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10): 9209-9222.