Saliency Detection Based on Context-aware Cross-layer Feature Fusion for Light Field Images

Citations: 0
Authors
Deng H. [1]
Cao Z. [1]
Xiang S. [1]
Wu J. [1]
Affiliations
[1] (School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081, China); (Engineering Research Center for Metallurgical Automation and Measurement Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China)
Source
Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology | 2023, Vol. 45, No. 12
Keywords
Context-awareness; Cross-layer feature fusion; Light field images; Saliency detection
DOI
10.11999/JEIT221270
Abstract
Saliency detection for light field images is a key technique in applications such as visual tracking, object detection, and image compression. However, existing deep learning methods ignore feature differences and global contextual information when processing features, which leads to blurred saliency maps, incomplete detected objects, and poor background suppression in scenes with similar foreground and background colors or textures, or with cluttered backgrounds. A context-aware cross-layer feature fusion network for saliency detection in light field images is proposed. First, a cross-layer feature fusion module is built that adaptively selects complementary components from the input features, reducing feature differences and avoiding inaccurate integration, so that adjacent-layer features and their informative components are fused more effectively. Then, a Parallel Cascaded Feedback Decoder (PCFD) is constructed from the cross-layer feature fusion module; its multi-level feedback mechanism iteratively refines features to avoid feature loss and the dilution of high-level contextual features. Finally, a Global Context Module (GCM) generates multi-scale features that exploit rich global contextual information, capturing the correlations between different salient regions and further mitigating the dilution of high-level features. Experimental results on the latest light field datasets show that the proposed method outperforms the compared methods both quantitatively and qualitatively, accurately detecting complete salient objects and producing clear saliency maps even in scenes with similar foreground and background. © 2023 Science Press. All rights reserved.
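The abstract does not specify the internal design of the cross-layer feature fusion module, so the following PyTorch-style sketch only illustrates one plausible way to "adaptively select complementary components" from a pair of adjacent-layer features; the class name CrossLayerFusion, the channel-attention gate, and all channel sizes are assumptions for illustration, not the authors' implementation.

# Minimal sketch (assumed design, not the paper's code): a sigmoid gate
# predicts per-channel weights that decide how much of the fine low-level
# feature versus the coarse high-level feature enters the fused output.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerFusion(nn.Module):
    def __init__(self, low_ch: int, high_ch: int, out_ch: int):
        super().__init__()
        # Project both inputs to a common channel width.
        self.low_proj = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        self.high_proj = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        # Channel attention over the concatenated features produces
        # selection weights in (0, 1).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        low = self.low_proj(low)
        # Upsample the coarser high-level feature to the low-level resolution.
        high = F.interpolate(self.high_proj(high), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Per-channel weights pick complementary components from each stream.
        w = self.gate(torch.cat([low, high], dim=1))
        return self.fuse(w * low + (1.0 - w) * high)

if __name__ == "__main__":
    low = torch.randn(1, 64, 88, 88)    # fine, low-level feature
    high = torch.randn(1, 128, 44, 44)  # coarse, high-level feature
    print(CrossLayerFusion(64, 128, 64)(low, high).shape)  # (1, 64, 88, 88)

In this reading, the gate lets the fused feature draw from whichever layer is more informative per channel, which is one way such a module could reduce feature differences before the PCFD refines the result.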
Pages: 4489-4498
Number of pages: 9