A model of audio-visual motion integration during active self-movement

Times Cited: 0
Authors
Gallagher, Maria [1 ]
Haynes, Joshua D. [2 ,3 ]
Culling, John F. [4 ]
Freeman, Tom C. A. [4 ]
Affiliations
[1] Univ Kent, Sch Psychol, Canterbury CT2 7NZ, England
[2] Cardiff Univ, Sch Psychol, Cardiff, Wales
[3] Univ Manchester, Sch Hlth Sci, Manchester, England
[4] Cardiff Univ, Sch Psychol, Cardiff CF10 3AT, Wales
Source
JOURNAL OF VISION | 2025, Vol. 25, Issue 2
Keywords
multisensory integration; motion perception; self-movement; active movement; audio-visual integration; MULTISENSORY CUE INTEGRATION; SPATIAL REPRESENTATION; SOUND LOCALIZATION; AUDITORY SPACE; EYE-MOVEMENT; HEAD; PERCEPTION; COMPENSATION; INFORMATION; SIGNALS;
D O I
10.1167/jov.25.2.8
CLC Number
R77 [Ophthalmology];
Discipline Code
100212;
Abstract
Despite good evidence for optimal audio-visual integration in stationary observers, few studies have considered the impact of self-movement on this process. When the head and/or eyes move, the integration of vision and hearing is complicated because the sensory measurements begin in different coordinate frames. To integrate these signals successfully, they must first be transformed into the same coordinate frame. We propose that audio and visual motion cues are separately transformed using self-movement signals before being integrated as body-centered cues to audio-visual motion. We tested this hypothesis using a psychophysical audio-visual integration task in which participants made left/right judgments of audio, visual, or audio-visual targets during self-generated yaw head rotations. Estimates of precision and bias from the audio and visual conditions were used to predict performance in the audio-visual conditions. We found that audio-visual performance was well predicted by models that assumed cues are transformed into common coordinates, but could not be explained by a model that integrated cues without a prior coordinate transformation. We also found that precision, in particular, was better predicted by a model that accounted for shared noise arising from the signals encoding head movement. Taken together, our findings suggest that motion perception in active observers is based on the integration of partially correlated body-centered signals.
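The model comparison described in the abstract fits the standard maximum-likelihood cue-combination framework. The following is a minimal sketch under that assumption, not the authors' exact formulation; the symbols \hat{s}_A, \hat{s}_V, \sigma_A, \sigma_V, and \rho are generic placeholders rather than the paper's notation. For independent audio and visual estimates \hat{s}_A and \hat{s}_V with variances \sigma_A^2 and \sigma_V^2, the optimal integrated estimate and its predicted variance are

\hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V, \quad w_A = \frac{\sigma_V^2}{\sigma_A^2 + \sigma_V^2}, \quad w_V = 1 - w_A, \qquad \sigma_{AV}^2 = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2}.

If the two cues instead share a noise source (e.g., a common head-movement signal used in both coordinate transformations) with correlation \rho, the optimal combined variance becomes

\sigma_{AV}^2 = \frac{(1 - \rho^2)\,\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2 - 2\rho\,\sigma_A \sigma_V},

which reduces to the independent-cue prediction when \rho = 0. A nonzero \rho changes the predicted bimodal precision, which is how a model with shared head-movement noise can be distinguished behaviorally from one assuming independent noise.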
Pages: 20