On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation

Citations: 0
Authors
Haimei Zhao
Jing Zhang
Zhuo Chen
Bo Yuan
Dacheng Tao
Affiliations
[1] University of Sydney, School of Computer Science
[2] Tsinghua University, Shenzhen International Graduate School
[3] University of Queensland, School of Information Technology & Electrical Engineering
Source
Machine Intelligence Research | 2024 / Vol. 21
Keywords
3D vision; depth estimation; cross-view consistency; self-supervised learning; monocular perception
DOI
Not available
Abstract
Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these consistency cues are vulnerable to illumination variance, occlusions, texture-less regions, and moving objects, which makes them insufficiently robust for diverse scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and is then used to align the temporal depth features via a depth feature alignment (DFA) loss. Second, the 3D point clouds of each reference frame and its nearby frames are computed and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit the temporal coherence in both the depth feature space and the 3D voxel space for SS-MDE, shifting the “point-to-point” alignment paradigm to a “region-to-region” one. Compared with the photometric consistency loss and the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analysis validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth.
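To make the voxel density alignment idea concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the voxel size, grid bounds, and function names are illustrative assumptions. It voxelizes two point clouds, counts the points falling in each voxel, and penalizes the L1 difference between the per-voxel densities, which is the region-to-region quantity the VDA loss aligns. Note that the hard counting used here is non-differentiable; a trainable version would need a soft voxel assignment.

import torch

def voxel_density(points, grid_min, voxel_size, grid_dims):
    # points:     (N, 3) 3D points, e.g., back-projected from a predicted depth map
    # grid_min:   (3,) lower corner of the voxel grid
    # voxel_size: edge length of one voxel
    # grid_dims:  (X, Y, Z) number of voxels along each axis
    idx = torch.floor((points - grid_min) / voxel_size).long()
    # Discard points that fall outside the grid.
    inside = ((idx >= 0) & (idx < torch.tensor(grid_dims))).all(dim=1)
    idx = idx[inside]
    # Flatten 3D voxel indices to 1D and count points per voxel.
    flat = (idx[:, 0] * grid_dims[1] + idx[:, 1]) * grid_dims[2] + idx[:, 2]
    density = torch.zeros(grid_dims[0] * grid_dims[1] * grid_dims[2])
    density.scatter_add_(0, flat, torch.ones(flat.shape[0]))
    return density.view(*grid_dims)

def vda_loss(points_ref, points_src, grid_min, voxel_size, grid_dims):
    # Align per-voxel point densities of the reference-frame point cloud and
    # the (ego-motion transformed) source-frame point cloud.
    d_ref = voxel_density(points_ref, grid_min, voxel_size, grid_dims)
    d_src = voxel_density(points_src, grid_min, voxel_size, grid_dims)
    return (d_ref - d_src).abs().mean()

# Toy usage with random point clouds standing in for back-projected depth maps.
pts_ref = torch.rand(10000, 3) * 10.0
pts_src = torch.rand(10000, 3) * 10.0
loss = vda_loss(pts_ref, pts_src, grid_min=torch.zeros(3),
                voxel_size=0.5, grid_dims=(20, 20, 20))
print(loss.item())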
Pages: 495-513
Page count: 18