SVAM: Saliency-guided Visual Attention Modeling by Autonomous Underwater Robots

Times Cited: 0
Authors
Islam, Md Jahidul [1 ]
Wang, Ruobing [2 ]
Sattar, Junaed [2 ]
Affiliations
[1] Univ Florida, Dept ECE, RoboPI Grp, Gainesville, FL 32611 USA
[2] Univ Minnesota, Dept CS, IRVLab, St Paul, MN USA
Source
ROBOTICS: SCIENCE AND SYSTEMS XVIII | 2022
Funding
U.S. National Science Foundation;
Keywords
OBJECT DETECTION; EXPLORATION; CONTRAST;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
This paper presents a holistic approach to saliency-guided visual attention modeling (SVAM) for use by autonomous underwater robots. Our proposed model, named SVAM-Net, integrates deep visual features at various scales and semantics for effective salient object detection (SOD) in natural underwater images. The SVAM-Net architecture is configured in a unique way to jointly accommodate bottom-up and top-down learning within two separate branches of the network while sharing the same encoding layers. We design dedicated spatial attention modules (SAMs) along these learning pathways to exploit the coarse-level and fine-level semantic features for SOD at four stages of abstraction. The bottom-up branch performs a rough yet reasonably accurate saliency estimation at a fast rate, whereas the deeper top-down branch incorporates a residual refinement module (RRM) that provides fine-grained localization of the salient objects. Extensive performance evaluation of SVAM-Net on benchmark datasets clearly demonstrates its effectiveness for underwater SOD. We also validate its generalization performance on data from several oceanic trials, which include test images of diverse underwater scenes and waterbodies as well as images of unseen natural objects. Moreover, we analyze its computational feasibility for robotic deployments and demonstrate its utility in several important use cases of visual attention modeling.
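The dual-branch design outlined in the abstract can be summarized in a short sketch. The following PyTorch-style Python snippet is a minimal, hypothetical illustration only, not the authors' released implementation: the module names (SpatialAttention, DualBranchSOD), the plain convolutional backbone, and all channel widths are assumptions introduced here for clarity. It shows a shared encoder with a spatial attention module at each of four stages, a fast bottom-up head that predicts a coarse saliency map from mid-level features, and a deeper top-down head whose output is refined by a small residual block.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Squeeze channel statistics into a single spatial attention map."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)       # per-pixel mean over channels
        mx, _ = torch.max(x, dim=1, keepdim=True)      # per-pixel max over channels
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                                # re-weight features spatially


class DualBranchSOD(nn.Module):
    """Shared encoder feeding a fast bottom-up head and a deeper
    top-down head with residual refinement (hypothetical sketch)."""
    def __init__(self, widths=(64, 128, 256, 512)):
        super().__init__()
        stages, in_ch = [], 3
        for w in widths:                               # four stages of abstraction
            stages.append(nn.Sequential(
                nn.Conv2d(in_ch, w, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2)))
            in_ch = w
        self.stages = nn.ModuleList(stages)
        self.sams = nn.ModuleList([SpatialAttention() for _ in widths])
        # Bottom-up head: coarse but fast saliency from mid-level features.
        self.coarse_head = nn.Conv2d(widths[1], 1, kernel_size=1)
        # Top-down head: decode the deepest features, then refine residually.
        self.decoder = nn.Sequential(
            nn.Conv2d(widths[-1], 64, 3, padding=1), nn.ReLU(inplace=True))
        self.fine_head = nn.Conv2d(64, 1, kernel_size=1)
        self.refine = nn.Sequential(                   # stand-in for the RRM
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        size = x.shape[-2:]
        feats = []
        for stage, sam in zip(self.stages, self.sams):
            x = sam(stage(x))                          # attention-gated features
            feats.append(x)
        coarse = torch.sigmoid(F.interpolate(
            self.coarse_head(feats[1]), size=size,
            mode='bilinear', align_corners=False))
        fine = self.fine_head(F.interpolate(
            self.decoder(feats[-1]), size=size,
            mode='bilinear', align_corners=False))
        fine = torch.sigmoid(fine + self.refine(fine)) # residual refinement
        return coarse, fine

As a usage example, DualBranchSOD()(torch.rand(1, 3, 256, 256)) returns a coarse and a refined single-channel saliency map at input resolution; in the paper's terms, the former loosely corresponds to the fast bottom-up estimate and the latter to the RRM-refined top-down output.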
Pages: 13