The Making and Evaluation of Digital Games Used for the Assessment of Attention: Systematic Review

Times Cited: 10
Authors
Wiley, Katelyn [1 ]
Robinson, Raquel [1 ]
Mandryk, Regan L. [1 ]
Affiliation
[1] Univ Saskatchewan, Dept Comp Sci, Room 373,Thorvaldson Bldg,110 Sci Pl, Saskatoon, SK S7N 5C9, Canada
Source
JMIR SERIOUS GAMES | 2021, Vol. 9, Issue 3
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
cognitive assessment; attention; serious games; gamification; systematic review; mobile phone; DOT-PROBE TASK; DEFICIT/HYPERACTIVITY-DISORDER; VIDEO GAME; COGNITIVE IMPAIRMENT; YOUNG-CHILDREN; COMPUTER GAME; INHIBITION; ABILITIES; PERFORMANCE; SKILLS;
DOI
10.2196/26449
Chinese Library Classification
R19 [Health Organization and Administration (Health Services Management)]
Abstract
Background: Serious games are now widely used in many contexts, including psychological research and clinical use. One area of growing interest is that of cognitive assessment, which seeks to measure different cognitive functions such as memory, attention, and perception. Measuring these functions at both the population and individual levels can inform research and indicate health issues. Attention is an important function to assess, as an accurate measure of attention can help diagnose many common disorders, such as attention-deficit/hyperactivity disorder and dementia. However, using games to assess attention poses unique problems, as games inherently manipulate attention through elements such as sound effects, graphics, and rewards, and research on adding game elements to assessments (ie, gamification) has shown mixed results. The process for developing cognitive tasks is robust, with high psychometric standards that must be met before these tasks are used for assessment. Although games offer more diverse approaches for assessment, there is no standard for how they should be developed or evaluated.

Objective: To better understand the field and provide guidance to interdisciplinary researchers, we aim to answer the question: How are digital games used for the cognitive assessment of attention made and measured?

Methods: We searched several databases for papers that described a digital game used to assess attention that could be deployed remotely without specialized hardware. We used Rayyan, a systematic review software, to screen the records before conducting a systematic review.

Results: The initial database search returned 49,365 papers. Our screening process resulted in a total of 74 papers that used a digital game to measure cognitive functions related to attention. Across the studies in our review, we found three approaches to making assessment games: gamifying cognitive tasks, creating custom games based on theories of cognition, and exploring potential assessment properties of commercial games. With regard to measuring the assessment properties of these games (eg, how accurately they assess attention), we found three approaches: comparison to a traditional cognitive task, comparison to a clinical diagnosis, and comparison to knowledge of cognition; however, most studies in our review did not evaluate the game's properties (eg, if participants enjoyed the game).

Conclusions: Our review provides an overview of how games used for the assessment of attention are developed and evaluated. We further identified three barriers to advancing the field: reliance on assumptions, lack of evaluation, and lack of integration and standardization. We then recommend the best practices to address these barriers. Our review can act as a resource to help guide the field toward more standardized approaches and rigorous evaluation required for the widespread adoption of assessment games.
Pages: 16