What can we learn about visual attention to multiple words from the word-word interference task?

Cited by: 5
Authors
Mulatti, Claudio [1 ]
Ceccherini, Lisa [1 ,2 ]
Coltheart, Max [2 ]
Affiliations
[1] Univ Padua, Padua, Italy
[2] Macquarie Univ, Sydney, NSW 2109, Australia
Keywords
Word production; Visual word recognition; Reading; Lexical selection; Visual attention; Lexical processing; Reading aloud; Stroop-like interference; Repetition blindness; Model; Dissociation; Recognition; Components; Frequency
DOI
10.3758/s13421-014-0450-x
Chinese Library Classification
B84 [Psychology]
Subject Classification Codes
04; 0402
Abstract
In this work, we develop an empirically driven model of visual attention to multiple words using the word-word interference (WWI) task. In this task, two words are presented visually at the same time: a to-be-ignored distractor word at fixation, and a to-be-read-aloud target word above or below the distractor. Experiment 1 showed that low-frequency distractor words interfere more than high-frequency distractor words. Experiment 2 showed that distractor frequency (high vs. low) and target frequency (high vs. low) exert additive effects. Experiment 3 showed that the effect of the case status of the target (same vs. AlTeRnAtEd) interacts with the type of distractor (word vs. string of # marks). Experiment 4 showed that targets are responded to faster in the presence of semantically related distractors than in the presence of unrelated distractors. Our model of visual attention to multiple words borrows two principles governing processing dynamics from the dual-route cascaded model of reading: cascaded interactive activation and lateral inhibition. At the core of the model are three mechanisms that deal with the distinctive feature of the WWI task: the simultaneous presentation of two words. These mechanisms are identification, tokenization, and deactivation.
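The processing dynamics the abstract invokes (cascaded interactive activation with lateral inhibition among word units) can be illustrated with a small simulation. The sketch below is a hypothetical toy, not the authors' DRC-derived model: the mini-lexicon, parameter values, and update rule are all assumptions chosen only to show how a fixated distractor that feeds the same word layer competes with a target through lateral inhibition.

```python
import numpy as np

# Toy demonstration of cascaded interactive activation with lateral
# inhibition between simultaneously active word units. Illustrative only:
# the lexicon, gains, decay rate, and threshold below are assumptions,
# not parameters of the authors' model.

WORDS = ["cat", "cot", "dog"]   # hypothetical mini-lexicon
EXCITATION = 0.4                # bottom-up input gain (assumed)
INHIBITION = 0.2                # lateral inhibition gain (assumed)
DECAY = 0.1                     # passive decay of activation (assumed)
THRESHOLD = 1.5                 # activation needed to count as identified

def cycles_to_identify(bottom_up):
    """Run synchronous update cycles until some word unit crosses threshold.
    Activation cascades on every cycle (no stage waits to finish), and each
    unit is inhibited in proportion to its competitors' total activation."""
    act = np.zeros(len(WORDS))
    for cycle in range(1, 201):
        rivals = act.sum() - act            # competitor activation per unit
        net = EXCITATION * bottom_up - INHIBITION * rivals - DECAY * act
        act = np.maximum(0.0, act + net)    # activations stay non-negative
        if act.max() >= THRESHOLD:
            return cycle, WORDS[int(act.argmax())]
    return None, None

# Target "cat" alone vs. target "cat" with a fixated distractor "cot"
# that also activates the word layer (weaker evidence than the target).
alone = cycles_to_identify(np.array([1.0, 0.0, 0.0]))
with_distractor = cycles_to_identify(np.array([1.0, 0.8, 0.0]))
print("target alone:        identified in", alone[0], "cycles")
print("target + distractor: identified in", with_distractor[0], "cycles")
```

Under these assumed settings the distractor delays target identification by a few cycles (8 vs. 5 here); in the authors' account, the identification, tokenization, and deactivation mechanisms determine how such competition between two simultaneously presented words is resolved.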
Pages: 121-132
Page count: 12
Related Papers
12 items in total
  • [1] What can we learn about visual attention to multiple words from the word-word interference task?
    Mulatti, Claudio
    Ceccherini, Lisa
    Coltheart, Max
    Memory & Cognition, 2015, 43: 121-132
  • [2] Is visual lexical access based on phonological codes? Evidence from a picture-word interference task
    Damian, Markus F.
    Martin, Randi C.
    Psychonomic Bulletin & Review, 1998, 5: 91-95
  • [3] What Can We Learn about Auditory Processing from Adult Hearing Questionnaires?
    Bamiou, Doris-Eva
    Iliadou, Vasiliki Vivian
    Zanchetta, Sthella
    Spyridakou, Chrysa
    Journal of the American Academy of Audiology, 2015, 26 (10): 824-837
  • [4] What can we learn about tropospheric OH from satellite observations of methane?
    Penn, Elise
    Jacob, Daniel J.
    Chen, Zichong
    East, James D.
    Sulprizio, Melissa P.
    Bruhwiler, Lori
    Maasakkers, Joannes D.
    Nesser, Hannah
    Qu, Zhen
    Zhang, Yuzhong
    Worden, John
    Atmospheric Chemistry and Physics, 2025, 25 (05): 2947-2965
  • [5] What can we learn from young adolescents' perceptions about the teaching of reading?
    Fletcher, Jo
    Nicholas, Karen
    Educational Review, 2016, 68 (04): 481-496
  • [6] Beyond the Lab: What We Can Learn about Cancer from Wild and Domestic Animals
    Schraverus, Helene
    Larondelle, Yvan
    Page, Melissa M.
    Cancers, 2022, 14 (24)
  • [7] Talking about writing: What we can learn from conversations between parents and their young children
    Robins, Sarah
    Treiman, Rebecca
    Applied Psycholinguistics, 2009, 30 (03): 463-484
  • [8] What can we learn about the distribution of fitness effects of new mutations from DNA sequence data?
    Keightley, Peter D.
    Eyre-Walker, Adam
    Philosophical Transactions of the Royal Society B: Biological Sciences, 2010, 365 (1544): 1187-1193
  • [9] What can we learn about cross-cultural adjustment from Self-Initiated Expatriates?
    Kumra, Savita
    Lindsay, Valerie
    Waxin, Marie-France
    Organizational Dynamics, 2022, 51 (03)
  • [10] What Can We Learn about Compton-Thin AGN Tori from Their X-ray Spectra?
    Melazzini, F.
    Sazonov, S.
    Astronomy Letters - A Journal of Astronomy and Space Astrophysics, 2023, 49 (06): 301-319