Linking perceptual learning with identical stimuli to imagery perceptual learning

Cited: 10
Authors
Grzeczkowski, Lukasz [1 ]
Tartaglia, Elisa M. [2 ,3 ]
Mast, Fred W. [4 ]
Herzog, Michael H. [1 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, Brain Mind Inst, Lab Psychophys, CH-1015 Lausanne, Switzerland
[2] Univ Chicago, Dept Stat, Chicago, IL 60637 USA
[3] Univ Chicago, Dept Neurobiol, Chicago, IL 60637 USA
[4] Univ Bern, Dept Psychol, Bern, Switzerland
Funding
Swiss National Science Foundation
Keywords
perceptual learning; bisection; mental imagery; feedback; spatial attention; vernier acuity; discrimination; localization; hyperacuity; orientation; thresholds; mechanisms
DOI
10.1167/15.10.13
Chinese Library Classification
R77 [Ophthalmology]
Discipline Code
100212
Abstract
Perceptual learning is usually thought to be driven exclusively by the stimuli presented during training (and the underlying synaptic learning rules). In a sense, we are slaves of our visual experiences. However, learning can occur even when no stimuli are presented at all. For example, Gabor contrast detection improves when only a blank screen is presented and observers are asked to imagine Gabor patches. Likewise, performance improves when observers are asked to imagine the nonexistent central line of a bisection stimulus to be offset either to the left or right. Hence, performance can improve without stimulus presentation. As shown in the auditory domain, performance can also improve when the very same stimulus is presented on all learning trials and observers are asked to discriminate differences that do not exist (observers were not told about the setup). Classic models of perceptual learning cannot handle these situations because they require proper stimulus presentation, i.e., variance in the stimuli, such as a left versus right offset in the bisection stimulus. Here, we first show that perceptual learning with identical stimuli also occurs in the visual domain. Second, we linked the two paradigms by telling observers that only the very same bisection stimulus was presented in all trials and asking them to imagine the central line to be offset either to the left or right. As in imagery learning, performance improved.
Pages: 8