An Individual-Difference-Aware Model for Cross-Person Gaze Estimation

Cited by: 17
Authors
Bao, Jun [1 ]
Liu, Buyu [2 ]
Yu, Jun [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Dept Comp Sci, Hangzhou 310016, Peoples R China
[2] NEC Labs Amer, San Jose, CA 95110 USA
Funding
National Natural Science Foundation of China;
Keywords
Estimation; Predictive models; Task analysis; History; Data models; Reliability; Adaptation models; Gaze estimation; person-specific difference; EVE challenge; face; eye images; self-calibration; person-specific transform; EYE;
DOI
10.1109/TIP.2022.3171416
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose a novel method for refining cross-person gaze predictions from eye/face images alone by explicitly modelling person-specific differences. Specifically, we first assume that initial gaze predictions can be obtained with an existing method, which we refer to as InitNet, and then introduce three modules: the Validity Module (VM), the Self-Calibration (SC) module, and the Person-specific Transform (PT) module. By predicting the reliability of the current eye/face images, VM identifies invalid samples, e.g. eye-blinking images, and reduces their effect on the modelling process. The SC and PT modules then learn to compensate for person-specific differences on valid samples only. The former models translation offsets by bridging the gap between the initial predictions and the dataset-wise distribution, while the latter learns a more general person-specific transformation by incorporating information from existing initial predictions of the same person. We validate our ideas on three publicly available datasets: EVE, XGaze, and MPIIGaze. Our proposed method significantly outperforms SOTA methods on all of them, with relative performance improvements of 21.7%, 36.0%, and 32.9%, respectively. Our method won the GAZE 2021 EVE Challenge, and our code can be found at https://github.com/bjj9/EVE_SCPT.
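The self-calibration idea in the abstract can be illustrated with a minimal sketch: weight out invalid samples (the VM idea), estimate a per-person translation offset as the gap between that person's mean initial prediction and the dataset-wise mean (the SC idea), and shift the predictions accordingly. This is a hypothetical illustration, not the authors' implementation; all function and variable names here are invented for clarity.

```python
import numpy as np

def self_calibrate(init_preds, validity, dataset_mean):
    """Hypothetical sketch of the SC module's translation-offset idea.

    init_preds   : (N, 2) initial gaze predictions (pitch, yaw), e.g. from InitNet
    validity     : (N,) reliability scores in [0, 1]; invalid samples
                   (e.g. eye blinks) receive low weight, as the VM would predict
    dataset_mean : (2,) dataset-wise mean gaze direction
    """
    # Validity-weighted mean of this person's initial predictions.
    w = validity / (validity.sum() + 1e-8)
    person_mean = (w[:, None] * init_preds).sum(axis=0)
    # Bridge the gap between the person's distribution and the dataset's.
    offset = dataset_mean - person_mean
    return init_preds + offset

# Toy usage: the third sample is a blink and is down-weighted to zero.
preds = np.array([[0.10, 0.20], [0.12, 0.18], [0.50, 0.90]])
valid = np.array([1.0, 1.0, 0.0])
calibrated = self_calibrate(preds, valid, dataset_mean=np.array([0.0, 0.0]))
```

The key design point the abstract hints at is that the offset is estimated only from valid samples, so outliers such as blink frames do not corrupt the per-person calibration.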
Pages: 3322-3333
Number of pages: 12