Robust Heart Rate Estimation With Spatial-Temporal Attention Network From Facial Videos

Cited by: 30
Authors
Hu, Min [1 ]
Qian, Fei [1 ]
Wang, Xiaohua [1 ]
He, Lei [2 ]
Guo, Dong [1 ]
Ren, Fuji [3 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp & Informat, Anhui Prov Key Lab Affect Comp & Adv Intelligent, Hefei 230602, Peoples R China
[2] Hefei Univ Technol, Sch Math, Hefei 230602, Peoples R China
[3] Univ Tokushima, Grad Sch Adv Technol & Sci, Tokushima 7708502, Japan
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Videos; Heart rate; Facial features; Estimation; Data mining; Signal processing; Aggregation function; remote heart rate (HR) estimation; remote photoplethysmography (rPPG); spatial-temporal attention; spatial-temporal strip pooling; REMOTE PHOTOPLETHYSMOGRAPHY; NONCONTACT;
DOI
10.1109/TCDS.2021.3062370
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
To address the highly redundant spatial information and motion noise that affect heart rate (HR) estimation from facial videos based on remote photoplethysmography (rPPG), this article proposes a novel HR estimation method built on a spatial-temporal attention model. First, to reduce redundant information and strengthen long-range associations across the video, spatial and temporal facial features are extracted by a 2-D convolutional neural network (2DCNN) and a 3-D convolutional neural network (3DCNN), respectively. An aggregation function then merges the feature maps into short-segment spatial-temporal feature maps. Second, spatial-temporal strip pooling is designed within the spatial-temporal attention module to suppress head-movement noise. Then, via a two-part loss function, the model is encouraged to focus on the rPPG signal rather than the interference. Extensive experiments on two public data sets verify the effectiveness of the model. The results show that the proposed method performs significantly better than state-of-the-art baselines: the mean absolute error is reduced by 11% on the PURE data set and by 25% on the COHFACE data set.
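The abstract's central mechanism is spatial-temporal strip pooling inside the attention module. The paper's exact formulation is not given here, so the following is only a minimal single-channel sketch of the general strip-pooling idea extended to the temporal axis: average-pool a (T, H, W) feature volume along each axis in turn, broadcast the strips back, and squash their sum into an attention mask that reweights the features. All names (`st_strip_pooling`, the sigmoid gating, the sum-then-broadcast combination) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    """Logistic squashing used to turn pooled responses into a (0, 1) mask."""
    return 1.0 / (1.0 + np.exp(-x))

def st_strip_pooling(feat):
    """Hypothetical sketch of spatial-temporal strip pooling.

    feat: array of shape (T, H, W), one channel of a spatio-temporal
    feature volume. Each strip pool averages along one axis; the strips
    are broadcast back to (T, H, W), summed, and gated by a sigmoid to
    form an attention mask that reweights the input features.
    """
    t_pool = feat.mean(axis=0, keepdims=True)   # (1, H, W): temporal strip
    h_pool = feat.mean(axis=1, keepdims=True)   # (T, 1, W): vertical strip
    w_pool = feat.mean(axis=2, keepdims=True)   # (T, H, 1): horizontal strip
    attn = sigmoid(t_pool + h_pool + w_pool)    # broadcasts to (T, H, W)
    return feat * attn                          # attention-reweighted features

# Toy usage: 8 frames of a 4x4 feature map.
feat = np.random.default_rng(0).normal(size=(8, 4, 4))
out = st_strip_pooling(feat)
print(out.shape)  # (8, 4, 4)
```

Because the mask lies strictly in (0, 1), the output never amplifies a feature; regions whose strips carry weak responses (e.g., those dominated by motion rather than the pulse signal) are attenuated, which is the intuition the abstract gives for reducing head-movement noise.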
Pages: 639-647 (9 pages)