DNA-Depth: A Frequency-Based Day-Night Adaptation for Monocular Depth Estimation

Times Cited: 0
Authors
Shen, Mengjiao [1]
Wang, Zhongyi [1]
Su, Shuai [1]
Liu, Chengju [1]
Chen, Qijun [1]
Affiliations
[1] Tongji University, School of Electronic and Information Engineering, Shanghai 201804, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Estimation; Optical flow; Training; Frequency-domain analysis; Lighting; Frequency estimation; Cameras; Depth estimation; domain adaptation; dynamic environment; Fourier transform; monocular vision;
DOI
10.1109/TIM.2023.3322498
CLC Classification Number
TM [Electrical Engineering Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Autonomous driving requires ensuring safety across diverse environments, particularly in challenging conditions such as low-light or nighttime scenarios. As a fundamental task in autonomous driving, monocular depth estimation has garnered significant attention. However, current monocular depth estimation methods rely primarily on daytime images, which limits their applicability to nighttime scenarios due to the substantial domain shift between daytime and nighttime styles. In this article, we propose a novel Day-Night Adaptation method (DNA-Depth) to realize monocular depth estimation in night environments. Specifically, we use the Fourier transform to address the domain alignment problem. Our method requires no additional adversarial optimization yet remains effective, and its simplicity makes it easy to bridge the day-to-night domain gap. To the best of our knowledge, we are the first to utilize the fast Fourier transform for nighttime monocular depth estimation. Furthermore, to alleviate the problem of moving light sources, we adopt an unsupervised joint learning framework for depth, optical flow, and ego-motion estimation in an end-to-end manner, coupled through 3-D geometry cues. Our model can simultaneously reason about the camera motion, the depth of the static background, and the optical flow of moving objects. Extensive experiments on the Oxford RobotCar, nuScenes, and Synthia datasets demonstrate the accuracy and precision of our method by comparing it with state-of-the-art depth estimation algorithms, both qualitatively and quantitatively.
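The Fourier-based domain alignment described in the abstract can be illustrated with a short sketch. The Python example below shows an FDA-style low-frequency amplitude swap between a day image and a night image: the amplitude spectrum carries illumination style while the phase carries scene content, so replacing only the low-frequency amplitude band restyles a day image as night without destroying its geometry. The function name, the beta band-size parameter, and the exact recipe are illustrative assumptions, not the paper's verified implementation.

```python
import numpy as np

def fourier_day_to_night(day_img: np.ndarray, night_img: np.ndarray,
                         beta: float = 0.01) -> np.ndarray:
    """Transfer a night image's low-frequency style onto a day image.

    A minimal sketch of FDA-style amplitude swapping; the paper's exact
    recipe may differ. Images are float arrays of shape (H, W, C) in [0, 1].
    """
    # Per-channel 2-D FFT of both images.
    day_fft = np.fft.fft2(day_img, axes=(0, 1))
    night_fft = np.fft.fft2(night_img, axes=(0, 1))

    # Split into amplitude (style) and phase (content).
    day_amp, day_phase = np.abs(day_fft), np.angle(day_fft)
    night_amp = np.abs(night_fft)

    # Center the spectra so low frequencies sit in the middle.
    day_amp = np.fft.fftshift(day_amp, axes=(0, 1))
    night_amp = np.fft.fftshift(night_amp, axes=(0, 1))

    # Replace the central (low-frequency) block of the day amplitude
    # with the night amplitude; beta controls the block size.
    h, w = day_img.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    day_amp[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        night_amp[ch - b:ch + b + 1, cw - b:cw + b + 1]

    # Undo the shift and recombine with the original day phase.
    day_amp = np.fft.ifftshift(day_amp, axes=(0, 1))
    mixed = day_amp * np.exp(1j * day_phase)

    # Inverse FFT: day content with nighttime illumination statistics.
    out = np.real(np.fft.ifft2(mixed, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)
```

Applied to daytime training images before they reach the depth network, a transform of this kind gives them nighttime illumination statistics while leaving the phase, and hence the scene structure, untouched; no adversarial training is involved, which matches the abstract's claim that no extra adversarial optimization is needed.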
Pages: 12