Masked Video and Body-Worn IMU Autoencoder for Egocentric Action Recognition

Cited by: 0
Authors
Zhang, Mingfang [1 ]
Huang, Yifei [1 ]
Liu, Ruicong [1 ]
Sato, Yoichi [1 ]
Affiliations
[1] Univ Tokyo, Inst Ind Sci, Tokyo, Japan
Source
COMPUTER VISION - ECCV 2024, PT XVIII | 2025 / Vol. 15076
Keywords
Egocentric action recognition; Inertial Measurement Units; Multimodal Masked Autoencoder
DOI
10.1007/978-3-031-72649-1_18
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Compared with visual signals, Inertial Measurement Units (IMUs) placed on human limbs capture accurate motion signals while remaining robust to lighting variation and occlusion. Although these characteristics are intuitively valuable for egocentric action recognition, the potential of IMUs remains under-explored. In this work, we present a novel action recognition method that integrates motion data from body-worn IMUs with egocentric video. Because labeled multimodal data are scarce, we design an MAE-based self-supervised pretraining method that obtains strong multimodal representations by modeling the natural correlation between visual and motion signals. To capture the complex relations among the multiple IMU devices placed across the body, we exploit their collaborative dynamics and propose embedding the relative motion features of human joints into a graph structure. Experiments show that our method achieves state-of-the-art performance on multiple public datasets. The effectiveness of our MAE-based pretraining and graph-based IMU modeling is further validated in more challenging scenarios, including partially missing IMU devices and video quality corruption, enabling more flexible use in the real world.
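The abstract describes two components: graph-based modeling of relative motion across body-worn IMU devices, and MAE-style masked pretraining over paired video and IMU tokens. Below is a minimal PyTorch sketch of how such a pipeline could be wired together. All class names, dimensions, the learnable adjacency, the shared masking strategy, and the embedding-space reconstruction target are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class GraphIMUEmbedding(nn.Module):
    """Embed body-worn IMU devices as nodes of a graph.

    Hypothetical construction: each node carries its own device features,
    and pairwise relative-motion features are aggregated over a learnable
    adjacency. The paper's exact graph design may differ.
    """

    def __init__(self, in_dim: int, hid_dim: int, num_nodes: int):
        super().__init__()
        # Learnable adjacency over IMU nodes (assumption).
        self.adj = nn.Parameter(torch.eye(num_nodes))
        self.node_proj = nn.Linear(in_dim, hid_dim)
        self.rel_proj = nn.Linear(in_dim, hid_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) per-device IMU features.
        node = self.node_proj(x)                     # (B, N, H)
        rel = x.unsqueeze(2) - x.unsqueeze(1)        # (B, N, N, D) relative motion
        rel = self.rel_proj(rel)                     # (B, N, N, H)
        a = torch.softmax(self.adj, dim=-1)          # row-normalized adjacency
        msg = torch.einsum("ij,bijh->bih", a, rel)   # aggregate relative features
        return node + msg                            # (B, N, H)


class MaskedVideoIMUAutoencoder(nn.Module):
    """MAE-style pretraining over concatenated video and IMU tokens.

    Sketch only: video patch embeddings are assumed precomputed (dim 768),
    both modalities share one random mask, and the reconstruction target is
    the token embedding itself rather than raw pixels/signals (a
    simplification of typical MAE pipelines).
    """

    def __init__(self, dim=256, num_imu=4, imu_dim=6, depth=4, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.video_proj = nn.Linear(768, dim)
        self.imu_embed = GraphIMUEmbedding(imu_dim, dim, num_imu)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = lambda: nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer(), num_layers=depth)
        self.decoder = nn.TransformerEncoder(layer(), num_layers=2)
        self.head = nn.Linear(dim, dim)

    def forward(self, video_tokens, imu_feats):
        # video_tokens: (B, Tv, 768); imu_feats: (B, N, imu_dim).
        tokens = torch.cat(
            [self.video_proj(video_tokens), self.imu_embed(imu_feats)], dim=1)
        B, T, D = tokens.shape
        target = tokens.detach()
        # One random mask spanning both modalities, so reconstruction must
        # exploit cross-modal correlation (the MAE pretraining signal).
        mask = torch.rand(B, T, device=tokens.device) < self.mask_ratio
        masked = torch.where(
            mask.unsqueeze(-1), self.mask_token.expand(B, T, D), tokens)
        recon = self.head(self.decoder(self.encoder(masked)))
        # MSE on masked positions only, as in standard MAE.
        return ((recon - target) ** 2).mean(-1)[mask].mean()
```

Under these assumptions, a forward/backward pass runs on random inputs, e.g. `MaskedVideoIMUAutoencoder()(torch.randn(2, 196, 768), torch.randn(2, 4, 6)).backward()`. Masking video and IMU tokens under one shared mask forces the encoder to predict one modality from the other, which is the intuition behind the abstract's claim of learning the natural correlation between visual and motion signals.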
Pages: 312-330
Page count: 19