Individualising Graphical Layouts with Predictive Visual Search Models

Cited: 13
Authors
Todi, Kashyap [1 ]
Jokinen, Jussi [1 ]
Luyten, Kris [2 ]
Oulasvirta, Antti [1 ]
Affiliations
[1] Aalto Univ, Dept Commun & Networking, POB 11000, Helsinki, Finland
[2] UHasselt tUL Flanders Make, Wetenschapspk 2, Diepenbeek, Belgium
Funding
European Research Council; Academy of Finland;
Keywords
Visual search; graphical layouts; computational design; adaptive user interfaces; MEMORY;
DOI
10.1145/3241381
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In domains where users are exposed to large variations in visuo-spatial features among designs, they often spend excess time searching for common elements (features) on an interface. This article contributes individualised predictive models of visual search, and a computational approach to restructure graphical layouts for an individual user such that features on a new, unvisited interface can be found more quickly. It explores four technical principles inspired by the human visual system (HVS) to predict expected positions of features and create individualised layout templates: (I) the interface with the highest frequency is chosen as the template; (II) the interface with the highest predicted recall probability (serial position curve) is chosen as the template; (III) the most probable locations for features across interfaces are chosen (visual statistical learning) to generate the template; (IV) based on a generative cognitive model, the most likely visual search locations for features are chosen (visual sampling modelling) to generate the template. Given a history of previously seen interfaces, we restructure the spatial layout of a new (unseen) interface with the goal of making its features more easily findable. The four HVS principles are implemented in Familiariser, a web browser that automatically restructures webpage layouts based on the visual history of the user. Evaluation of Familiariser (using visual statistical learning) with users provides first evidence that our approach reduces visual search time by over 10%, and the number of eye-gaze fixations by over 20%, during web browsing tasks.
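Principle III from the abstract can be illustrated with a minimal sketch: given a history of previously seen layouts, assign each feature its most frequently observed position to form the individualised template. All names and the grid-position representation here are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def build_template(history):
    """Hypothetical sketch of the visual-statistical-learning principle:
    for each feature, pick the position it most often occupied across
    previously seen layouts. `history` is a list of dicts mapping a
    feature name to an (x, y) grid position (an assumed representation)."""
    position_counts = {}
    for layout in history:
        for feature, pos in layout.items():
            position_counts.setdefault(feature, Counter())[pos] += 1
    # The template places each feature at its modal (most probable) location.
    return {feature: counts.most_common(1)[0][0]
            for feature, counts in position_counts.items()}

# Toy visual history: "search" was seen twice at (0, 0), "login" twice at (1, 0).
history = [
    {"search": (0, 0), "login": (1, 0)},
    {"search": (0, 0), "login": (2, 0)},
    {"search": (1, 1), "login": (1, 0)},
]
template = build_template(history)
```

A new, unseen interface would then be restructured by moving each of its features toward the template position, so the user's learned expectations about where features appear still hold.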
Pages: 24
Related Papers
50 records in total
  • [41] History effects in visual search for monsters: Search times, choice biases, and liking
    Chetverikov, Andrey
    Kristjansson, Arni
    ATTENTION PERCEPTION & PSYCHOPHYSICS, 2015, 77 (02) : 402 - 412
  • [42] Conditional probability modulates visual search efficiency
    Cort, Bryan
    Anderson, Britt
    FRONTIERS IN HUMAN NEUROSCIENCE, 2013, 7
  • [43] Perceptual similarity in visual search for multiple targets
    Gorbunova, Elena S.
    ACTA PSYCHOLOGICA, 2017, 173 : 46 - 54
  • [44] A computational model for task inference in visual search
    Haji-Abolhassani, Amin
    Clark, James J.
    JOURNAL OF VISION, 2013, 13 (03):
  • [45] Visual search for arbitrary objects in real scenes
    Wolfe, Jeremy M.
    Alvarez, George A.
    Rosenholtz, Ruth
    Kuzmova, Yoana I.
    Sherman, Ashley M.
    ATTENTION PERCEPTION & PSYCHOPHYSICS, 2011, 73 (06) : 1650 - 1671
  • [46] Visual search habits and the spatial structure of scenes
    Clarke, Alasdair D. F.
    Nowakowska, Anna
    Hunt, Amelia R.
    ATTENTION PERCEPTION & PSYCHOPHYSICS, 2022, 84 (06) : 1874 - 1885
  • [47] Fractal fluctuations in gaze speed visual search
    Stephen, Damian G.
    Anastas, Jason
    ATTENTION PERCEPTION & PSYCHOPHYSICS, 2011, 73 (03) : 666 - 677
  • [48] Sequence Learning Is Surprisingly Fragile in Visual Search
    Toh, Yi Ni
    Remington, Roger W.
    Lee, Vanessa G.
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY-HUMAN PERCEPTION AND PERFORMANCE, 2021, 47 (10) : 1378 - 1394
  • [49] Rapid Guidance of Visual Search by Object Categories
    Nako, Rebecca
    Wu, Rachel
    Eimer, Martin
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY-HUMAN PERCEPTION AND PERFORMANCE, 2014, 40 (01) : 50 - 60
  • [50] The Neural Basis of Visual Search in Scene Context
    Peelen, Marius V.
    CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE, 2025,