Where to look at the movies: Analyzing visual attention to understand movie editing

Cited by: 2
Authors
Bruckert, Alexandre [1 ]
Christie, Marc [1 ]
Le Meur, Olivier [1 ]
Affiliations
[1] Univ Rennes 1, IRISA, CNRS, Rennes, France
Keywords
Eye-tracking; Film editing; Visual saliency; EYE-MOVEMENTS; SALIENCY; MODEL; TRACKING; NETWORK; GAZE;
DOI
10.3758/s13428-022-01949-7
Chinese Library Classification (CLC)
B841 [Psychological research methods];
Subject classification code
040201;
Abstract
Throughout the process of making a movie, directors constantly consider where the spectator will look on the screen. Shot composition, framing, camera movements, and editing are tools commonly used to direct attention. To provide a quantitative analysis of the relationship between these tools and gaze patterns, we propose a new eye-tracking database containing gaze-pattern information on movie sequences together with editing annotations, and we show how state-of-the-art computational saliency techniques behave on this dataset. In this work, we expose strong links between movie editing and spectators' gaze distributions, and we identify several ways in which knowledge of editing information could improve human visual attention modeling for cinematic content. The dataset generated and analyzed for this study is available at https://github.com/abruckert/eye_tracking_filmmaking
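As an illustration of how such an eye-tracking dataset is commonly used to benchmark saliency models against recorded gaze, the sketch below computes the standard Normalized Scanpath Saliency (NSS) score for a predicted saliency map. This is a hypothetical example, not the authors' evaluation code; the function name, array shapes, and the randomly generated data are placeholders.

```python
# Minimal sketch (assumed setup, not the paper's code): scoring a saliency
# prediction against human fixations with Normalized Scanpath Saliency (NSS).
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Mean z-scored saliency value at fixated pixels.

    saliency_map: 2-D array of model predictions (any positive scale).
    fixation_map: binary 2-D array, 1 where a recorded fixation landed.
    """
    s = saliency_map.astype(np.float64)
    s = (s - s.mean()) / (s.std() + 1e-8)      # z-score the predicted map
    return float(s[fixation_map.astype(bool)].mean())

if __name__ == "__main__":
    # Hypothetical frame-level example with synthetic data.
    rng = np.random.default_rng(0)
    pred = rng.random((270, 480))               # stand-in for a model's saliency map
    fix = np.zeros((270, 480), dtype=np.uint8)  # binary fixation map
    fix[rng.integers(0, 270, size=30), rng.integers(0, 480, size=30)] = 1
    print(f"NSS = {nss(pred, fix):.3f}")
```

Higher NSS means the model places more of its normalized saliency mass on the locations spectators actually fixated; scores near zero indicate chance-level prediction.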
Pages: 2940 - 2959
Number of pages: 20