Mobile Device Eye Tracking on Dynamic Visual Contents using Edge Computing and Deep Learning

Cited by: 0
Authors
Gunawardena, Nishan [1 ]
Ginige, Jeewani Anupama [1 ]
Javadi, Bahman [1 ]
Lui, Gough [1 ]
Affiliations
[1] Western Sydney Univ, Penrith, NSW, Australia
Source
2022 ACM SYMPOSIUM ON EYE TRACKING RESEARCH AND APPLICATIONS, ETRA 2022 | 2022
Keywords
eye tracking; mobile human computer interaction; edge computing; deep learning; ATTENTION
DOI
10.1145/3517031.3532198
Chinese Library Classification
TP3 [Computing technology; computer technology]
Subject Classification Code
0812
Abstract
Eye tracking has been used in many domains, including human-computer interaction and psychology. Compared with commercial eye trackers, eye tracking using off-the-shelf cameras offers lower cost, pervasiveness, and mobility. Quantifying human attention on mobile devices is invaluable in human-computer interaction. Dynamic visual stimuli, such as videos and mobile games, demand more attention than static stimuli such as web pages and images. This research aims to develop an accurate eye-tracking algorithm that uses the front-facing camera of a mobile device to identify attention hotspots while users view video-type content. The limited computational power of mobile devices makes it difficult to achieve high user satisfaction. Edge computing moves processing closer to the data source and reduces the latency introduced by cloud computing. The proposed algorithm will therefore be extended with mobile edge computing to provide a real-time eye-tracking experience for users.
Pages: 3
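The abstract describes a deep-learning gaze estimator driven by the device's front-facing camera, with inference optionally offloaded to an edge server for lower latency. The extended abstract does not publish an architecture, so the following is only a minimal, hypothetical sketch of an appearance-based gaze CNN in the spirit of iTracker (Krafka et al., CVPR 2016); the class name GazeNet, all layer sizes, the crop resolution, and the choice of PyTorch are assumptions, not the authors' method.

```python
# Illustrative sketch only: not the architecture from the paper.
# A small appearance-based gaze-estimation CNN that maps face and eye crops
# from the front camera to a 2-D on-screen gaze point.
import torch
import torch.nn as nn


class GazeNet(nn.Module):
    """Regresses a normalised (x, y) screen gaze point from three 64x64 RGB crops."""

    def __init__(self):
        super().__init__()
        # Shared convolutional trunk applied to the face and each eye crop.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),          # 64 x 4 x 4 features per crop
        )
        # Fuse the three crop features and regress the gaze point.
        self.head = nn.Sequential(
            nn.Linear(3 * 64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 2),
        )

    def forward(self, face, left_eye, right_eye):
        feats = [self.conv(x).flatten(1) for x in (face, left_eye, right_eye)]
        return self.head(torch.cat(feats, dim=1))


# In the deployment the abstract hints at, the mobile client would crop the face
# and eyes from each camera frame and either run this model on-device or send
# the crops to an edge server to reduce per-frame latency.
model = GazeNet().eval()
face = torch.rand(1, 3, 64, 64)    # dummy face crop
left = torch.rand(1, 3, 64, 64)    # dummy left-eye crop
right = torch.rand(1, 3, 64, 64)   # dummy right-eye crop
with torch.no_grad():
    gaze_xy = model(face, left, right)   # tensor of shape (1, 2)
print(gaze_xy)
```

Whether such a network runs on-device or on an edge node is the trade-off the abstract raises: offloading frees the phone's limited compute but adds a network hop, which edge computing keeps shorter than a round trip to the cloud.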