Simultaneously Learning Semantic Segmentation and Depth Estimation from Omnidirectional Image

Citations: 0
Authors
Yokota A. [1 ]
Li S. [1 ]
Kamio T. [1 ]
Kosaku T. [1 ]
Affiliations
[1] Graduate School of Information Sciences, Hiroshima City University, 3-4-1, Ozuka-higashi, Asaminami-ku, Hiroshima
Keywords
depth estimation; multi-task learning; omnidirectional image; semantic segmentation;
DOI
10.1541/ieejeiss.144.560
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In multi-task learning, the goal is to improve the generalization performance of a model by exploiting information shared across tasks. In this paper, we propose a neural network that simultaneously learns depth estimation and semantic segmentation of the environment from omnidirectional images captured by an omnidirectional camera. The proposed network is built by modifying the UniFuse network, originally developed for depth estimation from omnidirectional images, so that it learns depth estimation and semantic segmentation simultaneously by exploiting the features shared between the two tasks. In the experiments, the proposed method was evaluated on the well-known Stanford 2D3D dataset. A single network did not achieve high accuracy on both tasks at once; however, when either task was prioritized during training, the synergy between the two tasks through shared feature maps improved accuracy, yielding better results than single-task networks. This demonstrates the effectiveness of simultaneously learning semantic segmentation and depth estimation from omnidirectional images. © 2024 The Institute of Electrical Engineers of Japan.
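
The record contains no implementation details beyond the abstract, but the shared-encoder, two-decoder design it describes can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the layer sizes, the class count, the joint_loss helper, and the weight lam that stands in for the task prioritization the abstract reports. This is not the authors' implementation; in particular, UniFuse's fusion of equirectangular and cube-map projections is omitted.

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Toy shared-encoder network: one encoder, two task-specific decoders."""
    def __init__(self, num_classes=13):  # e.g., 13 semantic classes as in Stanford 2D3D
        super().__init__()
        # Shared encoder (stand-in for UniFuse's omnidirectional encoder).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth decoder: upsamples shared features back to input resolution.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        # Segmentation decoder: same shared features, per-class logits out.
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)  # feature maps shared by both tasks
        return self.depth_head(feats), self.seg_head(feats)

# Hypothetical weighted joint loss: lam > 0.5 prioritizes depth,
# lam < 0.5 prioritizes segmentation.
def joint_loss(depth_pred, depth_gt, seg_logits, seg_gt, lam=0.7):
    depth_loss = nn.functional.l1_loss(depth_pred, depth_gt)
    seg_loss = nn.functional.cross_entropy(seg_logits, seg_gt)
    return lam * depth_loss + (1.0 - lam) * seg_loss

if __name__ == "__main__":
    net = MultiTaskNet()
    img = torch.randn(2, 3, 64, 128)  # batch of equirectangular-style inputs
    depth, seg = net(img)
    loss = joint_loss(depth, torch.rand(2, 1, 64, 128),
                      seg, torch.randint(0, 13, (2, 64, 128)))
    loss.backward()

Setting lam above or below 0.5 prioritizes depth or segmentation, respectively, mirroring the prioritized training that the abstract found necessary for the multi-task model to outperform single-task baselines.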
Pages: 560-567
Page count: 7