An Individual-Difference-Aware Model for Cross-Person Gaze Estimation

Cited by: 13
Authors
Bao, Jun [1 ]
Liu, Buyu [2 ]
Yu, Jun [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Dept Comp Sci, Hangzhou 310016, Peoples R China
[2] NEC Labs Amer, San Jose, CA 95110 USA
Funding
National Natural Science Foundation of China;
Keywords
Estimation; Predictive models; Task analysis; History; Data models; Reliability; Adaptation models; Gaze estimation; person-specific difference; EVE challenge; face; eye images; self-calibration; person-specific transform; EYE;
DOI
10.1109/TIP.2022.3171416
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose a novel method for refining cross-person gaze predictions from eye/face images only, by explicitly modelling person-specific differences. Specifically, we first assume that initial gaze predictions can be obtained with an existing method, which we refer to as InitNet, and then introduce three modules: the Validity Module (VM), the Self-Calibration (SC) module, and the Person-specific Transform (PT) module. By predicting the reliability of the current eye/face images, the VM identifies invalid samples, e.g. eye-blinking images, and reduces their effect on the modelling process. The SC and PT modules then learn to compensate for person-specific differences on valid samples only. The former models translation offsets by bridging the gap between the initial predictions and the dataset-wise distribution, while the latter learns a more general person-specific transformation by incorporating information from existing initial predictions of the same person. We validate our ideas on three publicly available datasets: EVE, XGaze, and MPIIGaze. Our proposed method significantly outperforms the SOTA methods on all of them, with relative performance improvements of 21.7%, 36.0%, and 32.9%, respectively. We are the winners of the GAZE 2021 EVE Challenge, and our code can be found at https://github.com/bjj9/EVE_SCPT.
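The abstract's self-calibration idea (bridge the gap between a person's initial predictions and the dataset-wise distribution, using only VM-validated samples) can be sketched as a per-person bias correction. This is a purely illustrative sketch: the function name, the validity mask, and the use of a simple mean-difference offset are our assumptions, not the authors' exact formulation.

```python
import numpy as np

def self_calibrate(init_preds, valid_mask, dataset_mean):
    """Illustrative sketch of the Self-Calibration (SC) idea:
    estimate a person-specific translation offset as the gap between
    the mean of that person's *valid* initial predictions and the
    dataset-wise mean gaze, then subtract it from all predictions."""
    valid = init_preds[valid_mask]               # keep only samples the VM marked valid
    offset = valid.mean(axis=0) - dataset_mean   # person-specific bias estimate
    return init_preds - offset                   # translated (calibrated) predictions

# Toy usage: 2D gaze angles (pitch, yaw) corrupted by a constant per-person bias.
rng = np.random.default_rng(0)
true_gaze = rng.normal(0.0, 0.1, size=(100, 2))
bias = np.array([0.05, -0.03])                   # hypothetical person-specific offset
preds = true_gaze + bias                         # biased "InitNet" outputs
mask = np.ones(100, dtype=bool)                  # pretend the VM accepted every sample
calibrated = self_calibrate(preds, mask, dataset_mean=np.zeros(2))
```

Under these toy assumptions, subtracting the estimated offset recenters the person's predictions on the dataset-wise mean, which is exactly the translation compensation the SC module is described as learning.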
Pages: 3322-3333
Page count: 12