Learning Adaptive Dense Event Stereo from the Image Domain

Cited by: 9
Authors
Cho, Hoonhee [1 ]
Cho, Jegyeong [1 ]
Yoon, Kuk-Jin [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Visual Intelligence Lab, Daejeon, South Korea
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
Funding
National Research Foundation, Singapore
DOI
10.1109/CVPR52729.2023.01707
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recently, event-based stereo matching has been studied for its robustness in poor lighting conditions. However, existing event-based stereo networks suffer severe performance degradation under domain shift. Unsupervised domain adaptation (UDA) aims to resolve this problem without using target-domain ground truth. However, traditional UDA still needs input event data with ground truth in the source domain, which is more challenging and costly to obtain than image data. To tackle this issue, we propose a novel unsupervised domain Adaptive Dense Event Stereo (ADES) framework, which bridges the gaps between different domains and input modalities. ADES adapts event-based stereo networks from abundant image datasets with ground truth in the source domain to event datasets without ground truth in the target domain, which is a more practical setup. First, we propose a self-supervision module that trains the network on the target domain through image reconstruction, while an artifact-prediction network trained on the source domain helps remove intermittent artifacts from the reconstructed images. Second, we use a feature-level normalization scheme to align the extracted features along the epipolar line. Finally, we present a motion-invariant consistency module that imposes consistent outputs under perturbed motion. Our experiments demonstrate that our approach achieves remarkable adaptation of event-based stereo matching from the image domain.
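In a rectified stereo pair, epipolar lines coincide with horizontal image rows, so the feature-level normalization the abstract describes can be sketched as a per-row statistics alignment. The sketch below is only an illustration of that idea under this rectification assumption; the function name, feature shapes, and toy data are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def epipolar_normalize(feat, eps=1e-6):
    """Normalize a feature map to zero mean / unit variance along each
    horizontal row (the epipolar line in a rectified stereo pair).

    feat: array of shape (C, H, W); statistics are computed per (c, h) row.
    """
    mean = feat.mean(axis=2, keepdims=True)
    std = feat.std(axis=2, keepdims=True)
    return (feat - mean) / (std + eps)

# Toy left/right feature maps with a large modality-dependent offset,
# standing in for image-domain vs. event-domain feature statistics.
rng = np.random.default_rng(0)
left = rng.normal(loc=5.0, scale=2.0, size=(4, 3, 8))
right = rng.normal(loc=-5.0, scale=0.5, size=(4, 3, 8))

left_n = epipolar_normalize(left)
right_n = epipolar_normalize(right)
# After normalization every row has (approximately) zero mean and unit
# variance, so left/right statistics are aligned along the epipolar line
# before any cost-volume matching step.
```

Because matching costs along an epipolar line are computed from these features, aligning per-row statistics removes a global offset between the two feature maps without changing the within-row structure that matching relies on.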
Pages: 17797-17807 (11 pages)