MAMo: Leveraging Memory and Attention for Monocular Video Depth Estimation

Cited by: 6
Authors
Yasarla, Rajeev [1 ]
Cai, Hong [1 ]
Jeong, Jisoo [1 ]
Shi, Yunxiao [1 ]
Garrepalli, Risheek [1 ]
Porikli, Fatih [1 ]
Affiliations
[1] Qualcomm AI Res, San Diego, CA 92121 USA
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023) | 2023
DOI
10.1109/ICCV51070.2023.00804
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We propose MAMo, a novel memory and attention framework for monocular video depth estimation. MAMo can augment and improve any single-image depth estimation network into a video depth estimation model, enabling it to exploit temporal information to predict more accurate depth. In MAMo, we augment the model with a memory that aids depth prediction as the model streams through the video. Specifically, the memory stores learned visual and displacement tokens from previous time instances, allowing the depth network to cross-reference relevant features from the past when predicting depth on the current frame. We introduce a novel scheme to continuously update the memory, optimizing it to keep tokens that correspond to both past and present visual information. We adopt an attention-based approach to process the memory features: we first learn the spatio-temporal relations among the visual and displacement memory tokens using a self-attention module, and then aggregate the self-attention output with the current visual features through cross-attention. The cross-attended features are finally passed to a decoder to predict depth on the current frame. Through extensive experiments on several benchmarks, including KITTI, NYU-Depth V2, and DDAD, we show that MAMo consistently improves monocular depth estimation networks and sets new state-of-the-art (SOTA) accuracy. Notably, our MAMo video depth estimation provides higher accuracy with lower latency than SOTA cost-volume-based video depth models.
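
To make the memory-and-attention flow described in the abstract concrete, below is a minimal PyTorch sketch of a token memory processed by self-attention and fused with the current frame's features via cross-attention. The class name MemoryAttentionDepthHead, the FIFO memory update, and all tensor shapes are illustrative assumptions; the paper's learned memory-update scheme and exact architecture are not reproduced here.

import torch
import torch.nn as nn

class MemoryAttentionDepthHead(nn.Module):
    """Fuses current-frame features with a token memory via self- and cross-attention (sketch)."""

    def __init__(self, dim=256, heads=8, mem_size=512):
        super().__init__()
        self.mem_size = mem_size
        # Self-attention over the stored visual + displacement memory tokens.
        self.mem_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Cross-attention: current-frame features (queries) attend to memory tokens (keys/values).
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.memory = None  # (B, M, dim) tokens accumulated from previous frames

    @torch.no_grad()
    def update_memory(self, visual_tokens, displacement_tokens):
        # Append the newest tokens and keep only the most recent `mem_size` ones;
        # this FIFO rule is a simple stand-in for the paper's learned update scheme.
        new_tokens = torch.cat([visual_tokens, displacement_tokens], dim=1)
        if self.memory is None:
            self.memory = new_tokens
        else:
            self.memory = torch.cat([self.memory, new_tokens], dim=1)[:, -self.mem_size:]

    def forward(self, cur_visual_tokens):
        # cur_visual_tokens: (B, N, dim) features of the current frame.
        if self.memory is None:
            return cur_visual_tokens  # first frame: no temporal context yet
        # 1) Model spatio-temporal relations among the memory tokens.
        mem_ctx, _ = self.mem_self_attn(self.memory, self.memory, self.memory)
        # 2) Aggregate memory context into the current visual features.
        fused, _ = self.cross_attn(cur_visual_tokens, mem_ctx, mem_ctx)
        return fused  # handed to the depth decoder

# Usage: stream frames, fuse with memory, then push the frame's tokens into memory.
head = MemoryAttentionDepthHead(dim=256)
for _ in range(3):
    vis = torch.randn(1, 100, 256)   # visual tokens from a single-image encoder (assumed shape)
    disp = torch.randn(1, 100, 256)  # displacement tokens, e.g. flow-derived (assumed shape)
    fused = head(vis)                # (1, 100, 256), would feed the depth decoder
    head.update_memory(vis, disp)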
Pages: 8720-8730
Number of pages: 11