When more is less: Increasing allocentric visual information can switch visual-proprioceptive combination from an optimal to sub-optimal process

Cited: 10
Authors
Byrne, Patrick A. [1 ]
Henriques, Denise Y. P. [1 ,2 ]
Affiliations
[1] York Univ, Ctr Vis Res, Toronto, ON M3J 1P3, Canada
[2] York Univ, Sch Kinesiol & Hlth, Toronto, ON M3J 1P3, Canada
Funding
Canada Foundation for Innovation; Natural Sciences and Engineering Research Council of Canada
Keywords
Vision; Proprioception; Multisensory integration; Reaching; Visuomotor transformation; MEMORY-GUIDED REACH; SHORT-TERM-MEMORY; RETENTION CHARACTERISTICS; MULTISENSORY INTEGRATION; REMEMBERED TARGETS; POINTING MOVEMENTS; PERCEPTION; LANDMARKS; TIME; PERFORMANCE;
DOI
10.1016/j.neuropsychologia.2012.10.008
Chinese Library Classification
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Discipline Classification Codes
03; 0303; 030303; 04; 0402
Abstract
When reaching for an object in the environment, the brain often has access to multiple independent estimates of that object's location. For example, if someone places their coffee cup on a table, then later they know where it is because they see it, but also because they remember how their reaching limb was oriented when they placed the cup. Intuitively, one would expect more accurate reaches if either of these estimates were improved (e.g., if a light were turned on so the cup were more visible). It is now well-established that the brain tends to combine two or more estimates about the same stimulus as a maximum-likelihood estimator (MLE), which is the best thing to do when estimates are unbiased. Even in the presence of small biases, relying on the MLE rule is still often better than choosing a single estimate. For this work, we designed a reaching task in which human subjects could integrate proprioceptive and allocentric (landmark-relative) visual information to reach for a remembered target. Even though both of these modalities contain some level of bias, we demonstrate via simulation that our subjects should use an MLE rule in preference to relying on one modality or the other in isolation. Furthermore, we show that when visual information is poor, subjects do, indeed, combine information in this way. However, when we improve the quality of visual information, subjects counter-intuitively switch to a sub-optimal strategy that occasionally includes reliance on a single modality. (C) 2012 Elsevier Ltd. All rights reserved.
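The maximum-likelihood combination rule described in the abstract weights each cue by its reliability (inverse variance), and the combined estimate always has lower variance than either cue alone. A minimal sketch of this rule, using hypothetical values rather than the paper's data:

```python
def mle_combine(x_v, var_v, x_p, var_p):
    """Reliability-weighted (MLE) combination of two cue estimates.

    Each cue is weighted by its inverse variance; the combined
    variance var_v*var_p/(var_v+var_p) is always lower than the
    smaller single-cue variance.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_p)
    w_p = 1.0 - w_v
    x_hat = w_v * x_v + w_p * x_p
    var_hat = (var_v * var_p) / (var_v + var_p)
    return x_hat, var_hat

# Hypothetical example: a visual estimate of target position at
# 10.0 cm (variance 4.0) and a proprioceptive estimate at 12.0 cm
# (variance 1.0). The more reliable proprioceptive cue dominates:
x_hat, var_hat = mle_combine(10.0, 4.0, 12.0, 1.0)
# x_hat = 11.6, var_hat = 0.8 (lower than either single-cue variance)
```

This is the optimal strategy for unbiased Gaussian cues; the paper's finding is that subjects follow it when visual information is poor but abandon it when visual information improves.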
Pages: 26-37
Page count: 12