On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation

Authors
Haimei Zhao
Jing Zhang
Zhuo Chen
Bo Yuan
Dacheng Tao
Affiliations
[1] University of Sydney,School of Computer Science
[2] Tsinghua University,Shenzhen International Graduate School
[3] University of Queensland,School of Information Technology & Electrical Engineering
Source
Machine Intelligence Research | 2024, Vol. 21
Keywords
3D vision; depth estimation; cross-view consistency; self-supervised learning; monocular perception
DOI
Not available
Abstract
Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these consistency measures are vulnerable to illumination variance, occlusion, texture-less regions, and moving objects, and are therefore not robust enough to handle diverse scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and is then used to align the temporal depth features via a depth feature alignment (DFA) loss. Second, the 3D point clouds of each reference frame and its nearby frames are computed and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit the temporal coherence in both depth feature space and 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss and the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analyses validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth.
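The voxel density alignment idea described in the abstract (voxelize each point cloud, count points per voxel, and penalize the density difference) can be illustrated with a minimal numpy sketch. All names, the grid parameters, and the L1 penalty below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def voxel_density(points, voxel_size=0.5, grid_min=None, grid_shape=(8, 8, 8)):
    """Histogram a point cloud into a fixed voxel grid and normalize
    the per-voxel counts into a density distribution summing to 1."""
    if grid_min is None:
        grid_min = points.min(axis=0)
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(grid_shape) - 1)  # keep indices in-grid
    counts = np.zeros(grid_shape)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)  # scatter-add
    return counts / max(len(points), 1)

def vda_loss(cloud_ref, cloud_src, voxel_size=0.5, grid_shape=(8, 8, 8)):
    """L1 distance between the voxel densities of two point clouds
    computed over a shared grid origin."""
    grid_min = np.minimum(cloud_ref.min(axis=0), cloud_src.min(axis=0))
    d_ref = voxel_density(cloud_ref, voxel_size, grid_min, grid_shape)
    d_src = voxel_density(cloud_src, voxel_size, grid_min, grid_shape)
    return np.abs(d_ref - d_src).sum()
```

Because the loss compares aggregate point counts per region rather than individual point correspondences, a few occluded or moving points perturb the densities only slightly, which is the intuition behind the claimed robustness of "region-to-region" alignment.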
Pages: 495-513 (18 pages)