FANet: Features Adaptation Network for 360° Omnidirectional Salient Object Detection

Cited by: 13
Authors
Huang, Mengke [1 ,2 ]
Liu, Zhi [1 ,2 ]
Li, Gongyang [1 ,2 ]
Zhou, Xiaofei [3 ]
Le Meur, Olivier [4 ]
Affiliations
[1] Shanghai Univ, Shanghai Inst Adv Commun & Data Sci, Shanghai 200444, Peoples R China
[2] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[3] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310018, Peoples R China
[4] Univ Rennes 1, IRISA, F-35042 Rennes, France
Funding
National Natural Science Foundation of China;
Keywords
360° omnidirectional image; salient object detection; equirectangular and cube-map projection; projection features adaptation; multi-level features adaptation; SEGMENTATION;
DOI
10.1109/LSP.2020.3028192
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology & Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Salient object detection (SOD) in 360° omnidirectional images has become an eye-catching problem because of the popularity of affordable 360° cameras. In this paper, we propose a Features Adaptation Network (FANet) to reliably highlight salient objects in 360° omnidirectional images. To exploit the feature extraction capability of convolutional neural networks and capture global object information, we feed the equirectangular 360° images and the corresponding cube-map 360° images into the feature extraction network (FENet) simultaneously to obtain multi-level equirectangular and cube-map features. Furthermore, we fuse these two kinds of features at each level of the FENet with a projection features adaptation (PFA) module, which selects between the two projections adaptively. Finally, we combine the preliminary adaptation features from different levels with a multi-level features adaptation (MLFA) module, which weights these different-level features adaptively and produces the final saliency maps. Experiments show that our FANet outperforms the state-of-the-art methods on 360° omnidirectional SOD datasets.
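The abstract outlines two adaptive fusion steps: a per-level fusion of equirectangular and cube-map features (PFA) and an adaptive weighting of the resulting multi-level features (MLFA). Below is a minimal PyTorch sketch of how such gated projection fusion and learned level weighting could be wired together; the module names PFALike and MLFALike, the channel sizes, and the sigmoid/softmax gating scheme are illustrative assumptions, not the authors' released implementation.

# Minimal sketch (assumptions, not the paper's code): gated fusion of
# equirectangular and cube-map features per level ("PFA-like"), followed by
# adaptive weighting of the per-level outputs ("MLFA-like").
import torch
import torch.nn as nn
import torch.nn.functional as F


class PFALike(nn.Module):
    """Fuse equirectangular and cube-map features of one FENet level."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel gate from the concatenated projections.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, f_equi: torch.Tensor, f_cube: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([f_equi, f_cube], dim=1))
        # Convex combination: the gate decides which projection dominates.
        return self.smooth(g * f_equi + (1.0 - g) * f_cube)


class MLFALike(nn.Module):
    """Weight the per-level adaptation features and predict a saliency map."""

    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        # One scalar weight per level, learned and normalized by softmax.
        self.level_logits = nn.Parameter(torch.zeros(num_levels))
        self.predict = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: list) -> torch.Tensor:
        size = feats[0].shape[-2:]  # upsample everything to the finest level
        w = torch.softmax(self.level_logits, dim=0)
        fused = sum(
            w[i] * F.interpolate(f, size=size, mode="bilinear", align_corners=False)
            for i, f in enumerate(feats)
        )
        return torch.sigmoid(self.predict(fused))


if __name__ == "__main__":
    # Toy multi-level features: 3 levels, 64 channels, decreasing resolution.
    levels = [(64, 64), (32, 32), (16, 16)]
    pfa = [PFALike(64) for _ in levels]
    mlfa = MLFALike(64, num_levels=len(levels))
    equi = [torch.randn(1, 64, h, w) for h, w in levels]
    cube = [torch.randn(1, 64, h, w) for h, w in levels]
    fused = [m(e, c) for m, e, c in zip(pfa, equi, cube)]
    saliency = mlfa(fused)
    print(saliency.shape)  # torch.Size([1, 1, 64, 64])

The sketch keeps the two stages separable: the gated convex combination handles the projection choice per level, while the softmax over level logits handles the cross-level weighting before the final 1x1 prediction.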
Pages: 1819-1823
Number of pages: 5
Related papers
50 items total (items [21]-[30] shown)
  • [21] Lightweight adversarial network for salient object detection
    Huang, Lili
    Li, Guanbin
    Li, Ya
    Lin, Liang
    NEUROCOMPUTING, 2020, 381 : 130 - 140
  • [22] Feature Refine Network for Salient Object Detection
    Yang, Jiejun
    Wang, Liejun
    Li, Yongming
    SENSORS, 2022, 22 (12)
  • [23] Pyramid Spatial Context Features for Salient Object Detection
    Li, Hui
    IEEE ACCESS, 2020, 8 : 88518 - 88526
  • [24] Salient object detection: A survey
    Borji, Ali
    Cheng, Ming-Ming
    Hou, Qibin
    Jiang, Huaizu
    Li, Jia
    COMPUTATIONAL VISUAL MEDIA, 2019, 5 (02) : 117 - 150
  • [25] Salient Object Detection: A Benchmark
    Borji, Ali
    Cheng, Ming-Ming
    Jiang, Huaizu
    Li, Jia
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (12) : 5706 - 5722
  • [26] Siamese Network for RGB-D Salient Object Detection and Beyond
    Fu, Keren
    Fan, Deng-Ping
    Ji, Ge-Peng
    Zhao, Qijun
    Shen, Jianbing
    Zhu, Ce
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (09) : 5541 - 5559
  • [27] Joint training with the edge detection network for salient object detection
    Gu, Zongyun
    Kan, Junling
    Ma, Chun
    Wang, Qing
    Li, Fangfang
    INTERNATIONAL JOURNAL OF COMPUTATIONAL SCIENCE AND ENGINEERING, 2022, 25 (05) : 504 - 512
  • [28] Saliency and edge features-guided end-to-end network for salient object detection
    Yang, Chen
    Xiao, Yang
    Chu, Lili
    Yu, Ziping
    Zhou, Jun
    Zheng, Huilong
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 257
  • [29] Split-guidance network for salient object detection
    Chen, Shuhan
    Yu, Jinhao
    Xu, Xiuqi
    Chen, Zeyu
    Lu, Lu
    Hu, Xuelong
    Yang, Yuequan
    THE VISUAL COMPUTER, 2023, 39 : 1437 - 1451
  • [30] Deep layer guided network for salient object detection
    Liu, Zhengyi
    Li, Quanlong
    Li, Wei
    NEUROCOMPUTING, 2020, 372 : 55 - 63