Subjective Annotations for Vision-based Attention Level Estimation

Cited by: 2
Authors
Coifman, Andrea [1 ]
Rohoska, Peter [1 ,3 ]
Kristoffersen, Miklas S. [1 ,2 ]
Shepstone, Sven E. [2 ]
Tan, Zheng-Hua [1 ]
Affiliations
[1] Aalborg Univ, Dept Elect Syst, Aalborg, Denmark
[2] Bang & Olufsen AS, Struer, Denmark
[3] Continental Automot, Budapest, Hungary
Source
PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5 | 2019
Keywords
Attention Level Estimation; Natural HCI; Human Behavior Analysis; Subjective Annotations;
DOI
10.5220/0007311402490256
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline codes
081202; 0835;
Abstract
Attention level estimation systems have high potential in many use cases, such as human-robot interaction, driver modeling, and smart home systems, since the ability to measure a person's attention level opens the possibility of natural interaction between humans and computers. Estimating a human's visual focus of attention has been actively addressed recently in the field of HCI. However, most previous works do not consider attention as a subjective, cognitive attentive state. New research in the field also faces a lack of datasets annotated with attention levels in a given context. The novelty of our work is two-fold: first, we introduce a new annotation framework that tackles the subjective nature of attention level and use it to annotate more than 100,000 images with three attention levels; second, we introduce a novel method to estimate attention levels, relying purely on geometric features extracted from RGB and depth images, and evaluate it with a deep learning fusion framework. The system achieves an overall accuracy of 80.02%. Our framework and attention level annotations are made publicly available.
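The abstract describes fusing geometric features from RGB and depth images in a deep learning framework to classify three attention levels. As a minimal illustrative sketch only (the paper's actual architecture, feature dimensions, and trained weights are not given here; all sizes and parameters below are hypothetical placeholders), a feature-level fusion classifier might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions: geometric feature sizes per modality and the
# hidden width are illustrative, not taken from the paper.
D_RGB, D_DEPTH, HIDDEN, N_LEVELS = 12, 8, 16, 3

# Randomly initialised weights stand in for trained parameters.
W_rgb = rng.normal(size=(D_RGB, HIDDEN)) * 0.1
W_depth = rng.normal(size=(D_DEPTH, HIDDEN)) * 0.1
W_out = rng.normal(size=(2 * HIDDEN, N_LEVELS)) * 0.1

def predict_attention(rgb_feats, depth_feats):
    """Feature-level fusion: encode each modality, concatenate, classify."""
    h_rgb = relu(rgb_feats @ W_rgb)        # encode RGB-derived geometry
    h_depth = relu(depth_feats @ W_depth)  # encode depth-derived geometry
    fused = np.concatenate([h_rgb, h_depth], axis=-1)
    return softmax(fused @ W_out)          # probabilities over 3 levels

probs = predict_attention(rng.normal(size=D_RGB), rng.normal(size=D_DEPTH))
level = int(np.argmax(probs))  # predicted attention level index (0, 1, or 2)
```

Late (score-level) fusion, where each modality is classified separately and the scores are combined, is a common alternative to the concatenation shown here.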
Pages: 249-256
Page count: 8