Egocentric Scene Understanding via Multimodal Spatial Rectifier

Cited by: 1
Authors
Do, Tien [1 ]
Vuong, Khiem [2 ]
Park, Hyun Soo [1 ]
Affiliations
[1] Univ Minnesota, Minneapolis, MN 55455 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
DOI
10.1109/CVPR52688.2022.00285
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we study the problem of egocentric scene understanding, i.e., predicting depths and surface normals from an egocentric image. Egocentric scene understanding poses unprecedented challenges: (1) due to large head movements, the images are taken from non-canonical viewpoints (i.e., tilted images) to which existing models of geometry prediction do not apply; (2) dynamic foreground objects, including hands, constitute a large proportion of visual scenes. These challenges limit the performance of existing models learned from large indoor datasets, such as ScanNet [6] and NYUv2 [36], which comprise predominantly upright images of static scenes. We present a multimodal spatial rectifier that stabilizes egocentric images with respect to a set of reference directions, which allows learning a coherent visual representation. Unlike a unimodal spatial rectifier, which often produces an excessive perspective warp for egocentric images, the multimodal spatial rectifier learns from multiple directions, minimizing the impact of the perspective warp. To learn visual representations of the dynamic foreground objects, we present a new dataset called EDINA (Egocentric Depth on everyday INdoor Activities) that comprises more than 500K synchronized RGBD frames and gravity directions. Equipped with the multimodal spatial rectifier and the EDINA dataset, our proposed method for single-view depth and surface normal estimation significantly outperforms the baselines not only on our EDINA dataset, but also on other popular egocentric datasets, such as First Person Hand Action (FPHA) [18] and EPIC-KITCHENS [7].
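The sketch below is not the authors' implementation; it is a minimal illustration of the core idea the abstract describes: given an estimated gravity direction in the camera frame and a small set of reference directions (the "multimodal" part), pick the nearest reference, compute the rotation that aligns gravity to it, and warp the image with the homography induced by that pure rotation, H = K R K^{-1}. The intrinsic matrix K, the gravity estimate, and the reference set are assumed inputs; NumPy and OpenCV are used only for illustration, and the degenerate anti-parallel case is ignored.

    import numpy as np
    import cv2

    def rotation_aligning(a, b):
        # Rotation matrix that takes unit vector a onto unit vector b (axis-angle / Rodrigues).
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        axis = np.cross(a, b)
        s = np.linalg.norm(axis)
        c = float(np.dot(a, b))
        if s < 1e-8:
            # Vectors already aligned; the anti-parallel case is not handled in this sketch.
            return np.eye(3)
        rvec = axis / s * np.arctan2(s, c)   # rotation vector (axis * angle)
        R, _ = cv2.Rodrigues(rvec)
        return R

    def rectify(image, K, gravity, reference_dirs):
        # Warp the image so its gravity direction matches the nearest reference direction.
        g = gravity / np.linalg.norm(gravity)
        # Choosing the closest reference direction keeps the perspective warp small.
        ref = max(reference_dirs, key=lambda r: float(np.dot(g, r / np.linalg.norm(r))))
        R = rotation_aligning(g, ref)
        H = K @ R @ np.linalg.inv(K)         # homography induced by a pure camera rotation
        h, w = image.shape[:2]
        return cv2.warpPerspective(image, H, (w, h))

In the unimodal case, reference_dirs would contain only the upright gravity direction, which is what forces large warps for strongly tilted egocentric views; a multimodal rectifier instead selects among several reference directions so that each image is warped only to its nearest one.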
Pages: 2822-2831
Number of pages: 10
References
60 records
[1] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.00541
[2] [Anonymous], 2015, ICCV, DOI 10.1109/ICCV.2015.304
[3] [Anonymous], 2018, ECCV
[4] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.00484
[5] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.00802
[6] [Anonymous], 2019, CVPR, DOI 10.1109/CVPR.2019.00656
[7] [Anonymous], 2016, CVPR, DOI 10.1109/CVPR.2016.642
[8] Bertasius G, 2018, CVPR
[9] Chen W, 2016, ADV NEUR IN, V29
[10] Chen, Weihua; Chen, Xiaotang; Zhang, Jianguo; Huang, Kaiqi. Beyond triplet loss: a deep quadruplet network for person re-identification [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, pp. 1320-1329