Figure-ground segregation can rely on differences in motion direction

Cited by: 14
Authors
Kandil, FI [1]
Fahle, M [1]
Affiliation
[1] Univ Bremen, D-28211 Bremen, Germany
Keywords
temporal asynchrony; figure-ground segregation; segmentation; motion differences; spatio-temporal integration; psychophysics
DOI
10.1016/j.visres.2004.07.027
Chinese Library Classification
Q189 [Neuroscience];
Discipline code
071006;
Abstract
If the elements within a figure move synchronously while those in the surround move at a different time, the figure is easily segregated from the surround and thus perceived. Lee and Blake (1999) [Visual form created solely from temporal structure. Science, 284, 1165-1168] demonstrated that this figure-ground separation may be based not only on time differences between motion onsets, but also on time differences between reversals of motion direction. However, Farid and Adelson (2001) [Synchrony does not promote grouping in temporally structured displays. Nature Neuroscience, 4, 875-876] argued that figure-ground segregation in the motion-reversal experiment might have been based on a contrast artefact, and concluded that (a)synchrony as such was 'not responsible for the perception of form in these or earlier displays'. Here, we present experiments that avoid contrast artefacts but still produce figure-ground segregation based on purely temporal cues. Our results show that subjects can segregate figure from ground even though they are unable to use motion reversals as such. Subjects detect the figure when either (i) motion stops (leading to contrast artefacts), or (ii) motion directions differ between figure and ground. Segregation requires minimum delays of about 15 ms. We argue that, whatever the underlying cues and mechanisms, a second stage beyond motion detection is required to globally compare the outputs of local motion detectors and to segregate figure from ground. Since analogous changes take place in both figure and ground in rapid succession, this second stage has to detect the asynchrony with high temporal precision. (C) 2004 Elsevier Ltd. All rights reserved.
Pages: 3177-3182
Page count: 6
Related papers
(50 records in total)
[21]   Figure-ground segmentation from occlusion [J].
Aguiar, PMQ ;
Moura, JMF .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2005, 14 (08) :1109-1124
[22]   Some Insights Into Brightness Perception of Images in the Light of a New Computational Model of Figure-Ground Segregation [J].
Ghosh, Kuntal ;
Pal, Sankar K. .
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS, 2010, 40 (04) :758-766
[23]   Perception and Saccades during Figure-Ground Segregation and Border-Ownership Discrimination in Natural Contours [J].
Wagatsuma, Nobuhiko ;
Urabe, Mika ;
Sakai, Ko .
IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (05) :1126-1134
[24]   Pharmacological manipulation of GABA activity in nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS) impairs figure-ground discrimination in pigeons Running head: SP/IPS in figure-ground segregation [J].
Acerbo, Martin J. ;
Lazareva, Olga F. .
BEHAVIOURAL BRAIN RESEARCH, 2018, 344 :1-8
[25]   A SOLUTION OF THE FIGURE-GROUND PROBLEM FOR BIOLOGICAL VISION [J].
GROSSBERG, S .
NEURAL NETWORKS, 1993, 6 (04) :463-483
[26]   Figure-ground segregation requires two distinct periods of activity in V1: a transcranial magnetic stimulation study [J].
Heinen, K ;
Jolij, J ;
Lamme, VAF .
NEUROREPORT, 2005, 16 (13) :1483-1487
[27]   Dissociation of color and figure-ground effects in the watercolor illusion [J].
Von der Heydt, Rudiger ;
Pierson, Rachel .
SPATIAL VISION, 2006, 19 (2-4) :323-+
[28]   Enhanced Figure-Ground Classification With Background Prior Propagation [J].
Chen, Yisong ;
Chan, Antoni B. .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (03) :873-885
[29]   The watercolor effect: a new principle of grouping and figure-ground organization [J].
Pinna, B ;
Werner, JS ;
Spillmann, L .
VISION RESEARCH, 2003, 43 (01) :43-52
[30]   Local contour features contribute to figure-ground segregation in monkey V4 neural populations and human perception [J].
Shishikura, Motofumi ;
Machida, Itsuki ;
Tamura, Hiroshi ;
Sakai, Ko .
NEURAL NETWORKS, 2025, 181