ContextNet: Learning Context Information for Texture-Less Light Field Depth Estimation

Cited by: 0
Authors
Chao, Wentao [1 ]
Wang, Xuechun [1 ]
Kan, Yiming [1 ]
Duan, Fuqing [1 ]
Affiliations
[1] Beijing Normal University, School of Artificial Intelligence, Beijing, People's Republic of China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VI | 2024 / Vol. 14430
Keywords
Light field; Depth estimation; Texture-less regions
DOI
10.1007/978-981-99-8537-1_2
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Depth estimation in texture-less regions of the light field is an important research direction; however, few existing methods are dedicated to this issue. We find that context information is crucial for depth estimation in texture-less regions. In this paper, we propose a simple yet effective method called ContextNet for texture-less light field depth estimation that learns context information. Specifically, we enlarge the receptive field of feature extraction by using dilated convolutions and increasing the training patch size. Moreover, we design the Augment SPP (AugSPP) module to aggregate multi-scale and multi-level features. Extensive experiments demonstrate the effectiveness of our method, which significantly improves depth estimation results in texture-less regions. Our method outperforms current state-of-the-art methods (e.g., LFattNet, DistgDisp, OACC-Net, and SubFocal) on the UrbanLF-Syn dataset in terms of MSE ×100, BadPix 0.07, BadPix 0.03, and BadPix 0.01. It also ranks third in the comprehensive results of the LFNAT Light Field Depth Estimation Challenge at the CVPR 2023 Workshop, without any post-processing steps. The code and model are available at https://github.com/chaowentao/ContextNet.
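As a rough illustration of the approach outlined in the abstract, the sketch below combines a dilated-convolution feature extractor (to enlarge the receptive field) with an SPP-style multi-scale pooling module. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the channel widths, dilation rates, and pooling scales are illustrative guesses, and the paper's AugSPP additionally aggregates multi-level features in a way not detailed in this record.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DilatedFeatureExtractor(nn.Module):
        # Enlarges the receptive field with dilated convolutions, as the
        # abstract describes. Channel width and dilation rates are assumptions.
        def __init__(self, in_ch=1, ch=32):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.layers(x)

    class AugSPPSketch(nn.Module):
        # SPP-style aggregation: pool the feature map at several scales,
        # project each pooled map with a 1x1 conv, upsample back to the input
        # resolution, and fuse. The multi-level part of AugSPP is omitted here.
        def __init__(self, ch=32, pool_sizes=(2, 4, 8, 16)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(nn.AdaptiveAvgPool2d(p), nn.Conv2d(ch, ch // 4, 1))
                for p in pool_sizes
            ])
            self.fuse = nn.Conv2d(ch + len(pool_sizes) * (ch // 4), ch, 3, padding=1)

        def forward(self, x):
            h, w = x.shape[-2:]
            feats = [x] + [
                F.interpolate(b(x), size=(h, w), mode="bilinear", align_corners=False)
                for b in self.branches
            ]
            return self.fuse(torch.cat(feats, dim=1))

    if __name__ == "__main__":
        net = nn.Sequential(DilatedFeatureExtractor(), AugSPPSketch())
        patch = torch.randn(1, 1, 64, 64)   # one grayscale sub-aperture patch
        print(net(patch).shape)             # torch.Size([1, 32, 64, 64])

A larger training patch size, as mentioned in the abstract, would simply mean feeding bigger crops (e.g., 64×64 instead of 32×32) so the dilated layers can exploit their wider receptive field.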
Pages: 15-27
Number of pages: 13