Automated Action Units Vs. Expert Raters: Face off

Cited by: 5
Authors
Dhamija, Svati [1 ]
Boult, Terrance E. [1 ]
Affiliations
[1] Univ Colorado Colorado Springs, Colorado Springs, CO 80907 USA
Source
2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018) | 2018
Keywords
STUDENT ENGAGEMENT; RECOGNITION;
DOI
10.1109/WACV.2018.00035
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
User engagement is an essential component of any application design. Finding reliable methods to forecast continuous engagement can aid in creating adaptive applications such as web-based interventions, intelligent student tutoring systems, and socially intelligent robots. In this paper, we compare observational estimates from expert raters with a vision-based learning approach for estimating user engagement. The vision-based approach combines automated computation of Action Units (AUs) with an RNN. Several data collection techniques that capture different modalities of engagement have been explored in the past, from obtaining self-reports to gathering external observations via crowd-sourcing or trained expert raters. Traditional machine learning approaches discard annotations from inconsistent raters, use rater averages, or apply rater-specific weighting schemes; such approaches often end up throwing away expensive annotations. We introduce a novel approach that exploits the inherent confusion and disagreement in raters' annotations to build a scalable engagement-estimation model that learns to appropriately weight subjective behavioral cues. We show that actively modeling the uncertainty, whether obtained explicitly from expert raters or from automated estimation with AUs, significantly improves prediction over using the average engagement ratings alone. Our approach performs significantly better than, or on par with, experts in predicting engagement for a trauma-recovery application.
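The following is a minimal, hypothetical sketch (in PyTorch) of the general idea the abstract describes, not the paper's actual implementation: per-frame Action Unit intensities are fed to an RNN that predicts engagement together with an uncertainty estimate, trained with a Gaussian negative log-likelihood so that rater disagreement can be modeled rather than averaged away. The model architecture, the 17-AU input size, and all hyperparameters below are illustrative assumptions.

    # Hypothetical sketch: AU sequence -> RNN -> engagement mean + uncertainty.
    # Not the paper's implementation; all sizes here are illustrative.
    import torch
    import torch.nn as nn

    class AUEngagementRNN(nn.Module):
        def __init__(self, n_action_units=17, hidden_size=64):
            super().__init__()
            self.rnn = nn.GRU(n_action_units, hidden_size, batch_first=True)
            self.mean_head = nn.Linear(hidden_size, 1)    # predicted engagement
            self.logvar_head = nn.Linear(hidden_size, 1)  # predicted uncertainty

        def forward(self, au_seq):
            # au_seq: (batch, time, n_action_units) AU intensities per video frame
            out, _ = self.rnn(au_seq)
            h_last = out[:, -1, :]  # summary state of the whole sequence
            return self.mean_head(h_last), self.logvar_head(h_last)

    def gaussian_nll(mean, log_var, target):
        # Heteroscedastic loss: a high predicted variance down-weights clips
        # where raters disagree, instead of discarding those annotations.
        return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

    # Toy usage: 8 clips, 100 frames each, 17 AU intensities per frame.
    model = AUEngagementRNN()
    au_seq = torch.randn(8, 100, 17)
    target = torch.rand(8, 1)  # e.g., mean engagement rating per clip
    mean, log_var = model(au_seq)
    loss = gaussian_nll(mean, log_var, target)
    loss.backward()

Letting the network emit its own log-variance is one plausible way to "appropriately weight subjective behavioral cues" without throwing away annotations from inconsistent raters.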
Pages: 259 - 268 (10 pages)
Related Papers
19 records in total
  • [1] Discrimination between smiling faces: Human observers vs. automated face analysis
    Del Libano, Mario
    Calvo, Manuel G.
    Fernandez-Martin, Andres
    Recio, Guillermo
    ACTA PSYCHOLOGICA, 2018, 187 : 19 - 29
  • [2] Face detection mechanisms: Nature vs. nurture
    Kobylkov, Dmitry
    Vallortigara, Giorgio
    FRONTIERS IN NEUROSCIENCE, 2024, 18
  • [3] Newborn preference for a new face vs. a previously seen communicative or motionless face
    Cecchini, Marco
    Baroni, Eleonora
    Di Vito, Cinzia
    Piccolo, Federica
    Lai, Carlo
    INFANT BEHAVIOR & DEVELOPMENT, 2011, 34 (03) : 424 - 433
  • [4] A proposal for a Riemannian face space and application to atypical vs. typical face similarities
    Townsend, James T.
    Fu, Hao-Lun
    Hsieh, Cheng-Ju
    Yang, Cheng-Ta
    JOURNAL OF MATHEMATICAL PSYCHOLOGY, 2024, 122
  • [5] The effects of information type (features vs. configuration) and location (eyes vs. mouth) on the development of face perception
    Tanaka, James W.
Quinn, Paul C.
    Xu, Buyun
    Maynard, Kim
    Huxtable, Natalie
    Lee, Kang
    Pascalis, Olivier
    JOURNAL OF EXPERIMENTAL CHILD PSYCHOLOGY, 2014, 124 : 36 - 49
  • [6] Eye movement strategies in face ethnicity categorization vs. face identification tasks
    Chakravarthula, Puneeth N.
    Tsank, Yuliy
    Eckstein, Miguel P.
    VISION RESEARCH, 2021, 186 : 59 - 70
  • [7] Comparative evaluation of 3D vs. 2D modality for automatic detection of facial action units
    Savran, Arman
    Sankur, Bulent
    Bilge, M. Taha
    PATTERN RECOGNITION, 2012, 45 (02) : 767 - 782
  • [8] Face-to-face or face-to-screen? Undergraduates' opinions and test performance in classroom vs. online learning
    Kemp, Nenagh
    Grieve, Rachel
    FRONTIERS IN PSYCHOLOGY, 2014, 5
  • [9] Data-driven vs. model-driven: Fast face sketch synthesis
    Wang, Nannan
    Zhu, Mingrui
    Li, Jie
    Song, Bin
    Li, Zan
    NEUROCOMPUTING, 2017, 257 : 214 - 221
  • [10] Managing the Quality vs. Efficiency Trade-off Using Dynamic Effort Scaling
    Chippa, Vinay K.
    Roy, Kaushik
    Chakradhar, Srimat T.
    Raghunathan, Anand
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2013, 12