Visual saliency model based on crowdsourcing eye tracking data and its application in visual design

Cited by: 4
Authors
Cheng, Shiwei [1 ]
Fan, Jing [1 ]
Hu, Yilin [1 ]
Affiliations
[1] Zhejiang Univ Technol, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visual saliency model; Visual attention; Crowd computing; Eye tracking; Human-computer interaction; GAZE;
DOI
10.1007/s00779-020-01463-7
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Visual saliency models based on low-level image features suffer from low accuracy and poor scalability, while models based on deep neural networks can markedly improve prediction performance but require large amounts of training data, e.g., eye tracking data, to achieve good results. However, traditional eye tracking is limited by high equipment and time costs, a complex operation process, and poor user experience. Therefore, this paper proposed a visual saliency model based on crowdsourced eye tracking data, collected through gaze recall with self-reporting from crowd workers. Parameter optimization of the crowdsourcing method was explored, and the resulting gaze data reached an accuracy of 1 degree of visual angle, 3.6% better than existing crowdsourcing methods. On this basis, we collected a webpage dataset of crowdsourced gaze data and constructed a visual saliency model based on a fully convolutional network (FCN). The evaluation results showed that after being trained on the crowdsourced gaze data, the model performed better, with prediction accuracy increasing by 44.8%, and it outperformed existing visual saliency models. We also applied the model to help webpage designers evaluate and revise their visual designs, and the experimental results showed that the revised designs obtained ratings 8.2% higher than the initial designs.
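The abstract reports prediction-accuracy gains for the FCN model after training on crowdsourced gaze data. The paper does not specify its metric here; as an illustrative sketch only, the following shows Normalized Scanpath Saliency (NSS), a standard saliency-evaluation metric that scores a predicted saliency map against ground-truth fixation locations (the function name and toy data are this sketch's own, not the paper's):

```python
import numpy as np

def nss(saliency_map, fixation_mask):
    """Normalized Scanpath Saliency: mean of the z-scored saliency
    map at fixated pixels. Higher means the model assigns more
    saliency to locations people actually looked at."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return float(s[fixation_mask.astype(bool)].mean())

# Toy example (hypothetical data): a 5x5 predicted map with one
# salient pixel, and a single ground-truth fixation at that pixel.
sal = np.zeros((5, 5))
sal[2, 2] = 1.0
fix = np.zeros((5, 5), dtype=bool)
fix[2, 2] = True
score = nss(sal, fix)  # well above 0: prediction matches the fixation
```

A score near 0 would indicate chance-level prediction; negative values indicate anti-correlation with the fixations.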
Pages: 613-630 (18 pages)