PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer

Cited by: 139
Authors
Yu, Zitong [1 ]
Shen, Yuming [2 ]
Shi, Jingang [3 ]
Zhao, Hengshuang [2 ,4 ]
Torr, Philip [2 ]
Zhao, Guoying [1 ]
Affiliations
[1] Univ Oulu, CMVS, Oulu, Finland
[2] Univ Oxford, TVG, Oxford, England
[3] Xi An Jiao Tong Univ, Xian, Peoples R China
[4] Univ Hong Kong, Hong Kong, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); Academy of Finland; National Natural Science Foundation of China;
DOI
10.1109/CVPR52688.2022.00415
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Remote photoplethysmography (rPPG), which aims at measuring heart activities and physiological signals from facial video without any contact, has great potential in many applications. Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields, which neglect the long-range spatio-temporal perception and interaction needed for rPPG modeling. In this paper, we propose PhysFormer, an end-to-end video-transformer-based architecture that adaptively aggregates both local and global spatio-temporal features for rPPG representation enhancement. As key modules in PhysFormer, the temporal difference transformers first enhance the quasi-periodic rPPG features with temporal-difference-guided global attention, and then refine the local spatio-temporal representation against interference. Furthermore, we propose label distribution learning and a curriculum-learning-inspired dynamic constraint in the frequency domain, which provide elaborate supervision for PhysFormer and alleviate overfitting. Comprehensive experiments on four benchmark datasets show superior performance in both intra- and cross-dataset testing. One highlight is that, unlike most transformer networks, which need pretraining on large-scale datasets, the proposed PhysFormer can be easily trained from scratch on rPPG datasets, which makes it a promising novel transformer baseline for the rPPG community. Code is available at https://github.com/ZitongYu/PhysFormer.
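The temporal-difference-guided attention described in the abstract can be illustrated with a minimal single-head sketch. This is NOT the paper's exact PhysFormer module: the difference operator, the blend weight `theta`, the temperature `tau`, and all shapes are simplified illustrative assumptions, intended only to show how frame-to-frame differences can steer the attention map toward quasi-periodic changes.

```python
import numpy as np

def temporal_difference(x, theta=0.5):
    # x: (T, C) per-frame features. Blend each frame with its
    # difference from the previous frame, emphasising frame-to-frame
    # (quasi-periodic) changes such as the subtle pulse signal.
    diff = np.zeros_like(x)
    diff[1:] = x[1:] - x[:-1]
    return x + theta * diff

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def td_attention(x, theta=0.5, tau=2.0):
    # Single-head self-attention whose queries and keys come from
    # temporal-difference features (values stay plain); tau is a
    # softening temperature. All values here are illustrative.
    q = temporal_difference(x, theta)
    k = temporal_difference(x, theta)
    d = x.shape[-1]
    attn = softmax(q @ k.T / (tau * np.sqrt(d)), axis=-1)
    return attn @ x, attn

rng = np.random.default_rng(0)
frames = rng.normal(size=(16, 8))   # 16 frames, 8-dim features
out, attn = td_attention(frames)
```

Each row of `attn` is a distribution over all frames, so every output frame aggregates global temporal context weighted by temporal-difference similarity.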
Pages: 4176-4186
Page count: 11