Exploring the Effects of Scanpath Feature Engineering for Supervised Image Classification Models

Cited by: 1
Authors
Byrne S.A. [1 ]
Maquiling V. [2 ,5 ]
Reynolds A.P.F. [1 ,3 ]
Polonio L. [4 ]
Castner N. [5 ]
Kasneci E. [6 ]
Affiliations
[1] IMT School for Advanced Studies Lucca, Piazza S. Francesco, 19, LU, Lucca
[2] Human-Computer Interaction, University of Tübingen, Sand 14, Tübingen
[3] MoMiLab, IMT School for Advanced Studies Lucca, Piazza S. Francesco, 19, LU, Lucca
[4] Department of Economics, Management and Statistics, Università degli Studi di Milano-Bicocca, Piazza dell'Ateneo Nuovo, Milan
[5] University of Tübingen, Sand 14, Tübingen
[6] Human-Centered Technologies for Learning, Technical University of Munich, Marsstraße 20-22, Munich
Keywords
computer vision; eye movements and cognition; feature engineering; image processing; machine learning; scanpaths; signal processing; visual search behavior;
DOI
10.1145/3591130
Abstract
Image classification models are becoming a popular method for scanpath classification. To apply these models, gaze data must first be reconfigured into a 2D image. However, this step receives relatively little attention in the literature, where the focus is placed mostly on model configuration. As standard model architectures become more accessible to the wider eye-tracking community, we highlight the importance of carefully choosing the feature representations within scanpath images, as they can heavily affect classification accuracy. To illustrate this point, we create thirteen sets of scanpath designs incorporating different eye-tracking feature representations from data recorded during a task-based viewing experiment. We evaluate each scanpath design by passing the sets of images through a standard pre-trained deep learning model as well as an SVM image classifier. Results from our primary experiment show an average accuracy improvement of 25 percentage points between the best-performing set and one baseline set. © 2023 ACM.
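The gaze-to-image step the abstract refers to can be sketched minimally. The snippet below is a hypothetical illustration of one possible feature encoding (a Gaussian blob per fixation, with blob intensity scaled by fixation duration), not the paper's actual scanpath designs; the `scanpath_to_image` function, image size, and sample fixations are all assumptions for demonstration.

```python
import numpy as np

def scanpath_to_image(fixations, size=64, sigma=2.0):
    """Rasterise a scanpath into a 2D grayscale image.

    fixations: sequence of (x, y, duration) tuples, with x and y
    normalised to [0, 1]. Each fixation contributes a Gaussian blob
    whose intensity is scaled by its duration -- one of many possible
    feature encodings (ordinality, saccade lines, etc. are alternatives).
    """
    img = np.zeros((size, size), dtype=np.float32)
    ys, xs = np.mgrid[0:size, 0:size]  # pixel coordinate grids
    for x, y, dur in fixations:
        cx, cy = x * (size - 1), y * (size - 1)
        img += dur * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                            / (2.0 * sigma ** 2))
    if img.max() > 0:
        img /= img.max()  # normalise intensities to [0, 1]
    return img

# Hypothetical scanpath: three fixations with increasing duration.
image = scanpath_to_image([(0.2, 0.3, 0.1), (0.5, 0.5, 0.2), (0.8, 0.4, 0.4)])
print(image.shape)  # (64, 64)
```

The resulting array can then be fed to any standard image classifier; swapping the duration weighting for a different encoding changes the image, and, as the abstract argues, potentially the classification accuracy.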