Rectified self-supervised monocular depth estimation loss for nighttime and dynamic scenes

Cited: 0
Authors
Qin, Xiaofei [1 ]
Wang, Lin [1 ]
Zhu, Yongchao [1 ]
Mao, Fan [1 ]
Zhang, Xuedian [1 ]
He, Changxiang [2 ]
Dong, Qiulei [3 ,4 ]
Affiliations
[1] Univ Shanghai Sci & Technol, Sch Opt Elect & Comp Engn, Shanghai 200093, Peoples R China
[2] Univ Shanghai Sci & Technol, Coll Sci, Shanghai 200093, Peoples R China
[3] Chinese Acad Sci, State Key Lab Multimodal Artificial Intelligence S, Inst Automat, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Computer vision; Monocular depth estimation; Self-supervised learning; Nighttime scenes; Dynamic scenes;
D O I
10.1016/j.engappai.2025.110026
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline code
0812;
Abstract
Self-supervised monocular depth estimation has recently attracted much attention in computer vision. However, most existing methods assume that scenes are static and that photometric consistency holds, so their performance tends to degrade significantly in nighttime and dynamic scenes. To address this issue, this paper proposes a self-supervised monocular depth estimation model that tackles two challenges: the drastic photometric changes caused by underexposure of distant areas in nighttime scenes, and the presence of moving objects in dynamic scenes. In the proposed model, an Effective Area Photometric Loss (EAPL) is designed, gated by an Effective Area Mask (EAM) and a Potentially Moving Objects Mask (PMOM). A motion flow network is then introduced to estimate the motion of moving objects, and a Motion Flow Loss (MFL) is proposed based on three facts: the motion flow of static objects should be zero, most moving objects in autonomous driving scenarios are approximately rigid, and the relative motion flows between consecutive frames should be mutually inverse. Finally, a decoupled training approach is provided to facilitate the optimization of the model. Experimental results show that our model achieves state-of-the-art or second-best performance on the nuTonomy Scenes (nuScenes) and Dense Depth for Autonomous Driving (DDAD) datasets, which contain many nighttime or dynamic scenes, and also achieves competitive performance on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) dataset, which is dominated by daytime and static scenes. Code is available at https://github.com/pandaswfas/effdepth.
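The abstract describes the EAPL as a standard per-pixel photometric error gated by two binary masks (EAM for well-exposed regions, PMOM for potentially moving objects). The paper's exact formulation is not given here, so the following is only an illustrative sketch of that masking idea: the two masks are multiplied into a single gate, and the error is averaged over the pixels that survive both gates. The function name and the mask conventions (1 = keep the pixel) are assumptions for the example, not the authors' definitions.

```python
import numpy as np

def effective_area_photometric_loss(photo_error, eam, pmom):
    """Illustrative sketch of a mask-gated photometric loss
    (not the paper's exact EAPL formulation).

    photo_error : per-pixel photometric error map (H x W array)
    eam         : Effective Area Mask, 1 = well-exposed pixel
    pmom        : Potentially Moving Objects Mask, 1 = likely static pixel
    """
    gate = eam * pmom                  # keep only pixels valid under both masks
    valid = gate.sum()
    if valid == 0:                     # guard against an empty effective area
        return 0.0
    # average the photometric error over the gated (effective) pixels only
    return float((photo_error * gate).sum() / valid)

# Tiny worked example: only the top-left and bottom-right pixels survive
# both gates, so the loss is the mean of those two error values.
err  = np.array([[1.0, 2.0], [3.0, 4.0]])
eam  = np.array([[1.0, 1.0], [0.0, 1.0]])
pmom = np.array([[1.0, 0.0], [1.0, 1.0]])
loss = effective_area_photometric_loss(err, eam, pmom)  # (1.0 + 4.0) / 2 = 2.5
```

Averaging over only the gated pixels (rather than the whole image) keeps the loss magnitude comparable across frames with very different effective areas, which is presumably why such masks are applied multiplicatively before normalization.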
Pages: 14