Where to look at the movies: Analyzing visual attention to understand movie editing

Cited by: 2
Authors
Bruckert, Alexandre [1 ]
Christie, Marc [1 ]
Le Meur, Olivier [1 ]
Affiliations
[1] Univ Rennes 1, IRISA, CNRS, Rennes, France
Keywords
Eye-tracking; Film editing; Visual saliency; EYE-MOVEMENTS; SALIENCY; MODEL; TRACKING; NETWORK; GAZE;
DOI
10.3758/s13428-022-01949-7
Chinese Library Classification (CLC)
B841 [Psychological research methods]
Discipline code
040201
Abstract
In the process of making a movie, directors constantly care about where the spectator will look on the screen. Shot composition, framing, camera movements, and editing are tools commonly used to direct attention. In order to provide a quantitative analysis of the relationship between those tools and gaze patterns, we propose a new eye-tracking database containing gaze-pattern information on movie sequences as well as editing annotations, and we show how state-of-the-art computational saliency techniques behave on this dataset. In this work, we expose strong links between movie editing and spectators' gaze distributions, and open several leads on how knowledge of editing information could improve human visual attention modeling for cinematic content. The dataset generated and analyzed for this study is available at https://github.com/abruckert/eye_tracking_filmmaking.
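As an illustration of how saliency predictions are typically scored against recorded gaze data such as that in this database, the following minimal Python sketch computes the Normalized Scanpath Saliency (NSS), a standard metric in this line of work. The specific models and evaluation scripts used by the authors are not part of this record; the function name, array layout, and example values below are assumptions for illustration only.

    import numpy as np

    def nss(saliency_map, fixation_map):
        """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels.

        saliency_map: 2-D array of predicted saliency values for one frame.
        fixation_map: 2-D binary array of the same shape, 1 where an observer fixated.
        """
        sal = np.asarray(saliency_map, dtype=np.float64)
        fix = np.asarray(fixation_map, dtype=bool)
        # Z-score the saliency map so that scores are comparable across models.
        sal = (sal - sal.mean()) / (sal.std() + 1e-12)
        # Average the normalized saliency over the fixated locations.
        return float(sal[fix].mean())

    # Example (hypothetical data): a random prediction scored against two fixations.
    pred = np.random.rand(4, 4)
    fixations = np.zeros((4, 4))
    fixations[1, 2] = 1
    fixations[3, 0] = 1
    print(nss(pred, fixations))

Higher NSS values indicate that the predicted saliency map assigns more mass to locations that observers actually fixated; values near zero indicate chance-level prediction.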
Pages: 2940 - 2959
Page count: 20
Related papers
40 entries in total
  • [21] Are you in the loop? Using gaze dispersion to understand driver visual attention during vehicle automation
    Louw, Tyron
    Merat, Natasha
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2017, 76 : 35 - 50
  • [22] Fantastic Answers and Where to Find Them: Immersive Question-Directed Visual Attention
    Jiang, Ming
    Chen, Shi
    Yang, Jinhui
    Zhao, Qi
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 2977 - 2986
  • [23] Group cycling in urban environments: Analyzing visual attention and riding performance for enhanced road safety
    Li, Meng
    Zhang, Yan
    Chen, Tao
    Du, Hao
    Deng, Kaifeng
    ACCIDENT ANALYSIS AND PREVENTION, 2025, 209
  • [24] Analyzing visual attention during TAP learning and the effect of epistemic beliefs on the understanding of argument components
    Cheng, Chia-Hui
    Yang, Fang-Ying
    INTERNATIONAL JOURNAL OF SCIENCE EDUCATION, 2022, 44 (08) : 1336 - 1355
  • [25] Cross-race recognition deficit and visual attention: Do they all look (at faces) alike?
    Josephson, Sheree
    Holmes, Michael E.
    PROCEEDINGS OF THE EYE TRACKING RESEARCH AND APPLICATIONS SYMPOSIUM (ETRA 2008), 2008, : 157 - 164
  • [26] Robust memory of where from way back when: evidence from behaviour and visual attention
    Bauer, Patricia J.
    Stewart, Rebekah
    Sirkin, Ruth E.
    Larkina, Marina
    MEMORY, 2017, 25 (08) : 1089 - 1109
  • [27] Guiding Low Spatial Ability Individuals through Visual Cueing: The Dual Importance of Where and When to Look
    Roach, Victoria A.
    Fraser, Graham M.
    Kryklywy, James H.
    Mitchell, Derek G. V.
    Wilson, Timothy D.
    ANATOMICAL SCIENCES EDUCATION, 2019, 12 (01) : 32 - 42
  • [28] Using eye-tracking to understand relations between visual attention and language in children's spatial skills
    Miller, Hilary E.
    Kirkorian, Heather L.
    Simmering, Vanessa R.
    COGNITIVE PSYCHOLOGY, 2020, 117
  • [29] Analyzing Visual Attention of People with Intellectual Disabilities during Virtual Reality-Based Job Training
    Hong, Sungjin
    Shin, Heesook
    Gil, Younhee
    Jo, Junghee
    ELECTRONICS, 2021, 10 (14)
  • [30] A closer look at split visual attention in system- and self-paced instruction in multimedia learning
    Schmidt-Weigand, Florian
    Kohnert, Alfred
    Glowalla, Ulrich
    LEARNING AND INSTRUCTION, 2010, 20 (02) : 100 - 110