Evidential Deep Learning for Open Set Action Recognition

Cited by: 75
Authors
Bao, Wentao [1 ]
Yu, Qi [1 ]
Kong, Yu [1 ]
Affiliations
[1] Rochester Inst Technol, Golisano Coll Comp & Informat Sci, Rochester, NY 14623 USA
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
DOI
10.1109/ICCV48922.2021.01310
CLC number
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In a real-world scenario, human actions typically fall outside the distribution of the training data, which requires a model to both recognize the known actions and reject the unknown. Compared with image data, video actions are more challenging to recognize in an open-set setting due to the uncertain temporal dynamics and static bias of human actions. In this paper, we propose a Deep Evidential Action Recognition (DEAR) method to recognize actions in an open testing set. Specifically, we formulate the action recognition problem from the evidential deep learning (EDL) perspective and propose a novel model calibration method to regularize the EDL training. In addition, to mitigate the static bias of video representation, we propose a plug-and-play module to debias the learned representation through contrastive learning. Experimental results show that our DEAR method achieves consistent performance gains on multiple mainstream action recognition models and benchmarks. Code and pre-trained models are available at https://www.rit.edu/actionlab/dear.
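To illustrate the EDL perspective the abstract refers to, below is a minimal sketch of how class probabilities and predictive uncertainty are typically derived from a network's non-negative evidence outputs under a Dirichlet parameterization. This is a generic illustration of evidential deep learning, not the authors' released code; the function name and interface are hypothetical.

```python
import numpy as np

def edl_uncertainty(evidence):
    """Derive expected class probabilities and vacuity-based uncertainty
    from non-negative evidence (e.g., the output of a ReLU/exp head).

    evidence: array of shape (K,) with K = number of known classes.
    Returns (probabilities, uncertainty), where uncertainty is in (0, 1].
    """
    alpha = evidence + 1.0   # Dirichlet concentration parameters
    strength = alpha.sum()   # total Dirichlet strength S
    prob = alpha / strength  # expected class probabilities
    k = evidence.shape[0]
    u = k / strength         # uncertainty mass: 1 when evidence is zero
    return prob, u
```

With zero evidence the uncertainty is exactly 1, so a threshold on `u` can be used to reject unknown actions, while strong evidence for one class drives `u` toward 0.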
Pages: 13329-13338
Page count: 10