The Expertise Effect on Web Accessibility Evaluation Methods

Cited: 49
Authors
Brajnik, Giorgio [1 ]
Yesilada, Yeliz [2 ]
Harper, Simon [3 ]
Affiliations
[1] Univ Udine, Dipartimento Matemat & Informat, I-33100 Udine, Italy
[2] Middle E Tech Univ, Comp Engn Programme, TR-10 Mersin, Turkey
[3] Univ Manchester, Sch Comp Sci, Web Ergon Lab, Manchester M13 9PL, Lancs, England
Source
HUMAN-COMPUTER INTERACTION | 2011, Vol. 26, No. 3
Keywords
USABILITY EVALUATION;
DOI
10.1080/07370024.2011.601670
CLC Classification Number
TP3 [computing technology; computer technology];
Discipline Classification Code
0812 ;
Abstract
Web accessibility means that disabled people can effectively perceive, understand, navigate, and interact with the web. Web accessibility evaluation methods are needed to validate the accessibility of web pages; however, the role of subjectivity and of expertise in such methods has not previously been studied. This article investigates the effect of expertise on web accessibility evaluation methods through a Barrier Walkthrough (BW) study with 19 expert and 57 nonexpert judges. The BW method is an evaluation method that can be used to manually assess the accessibility of web pages for different user groups, such as motor-impaired, low-vision, blind, and mobile users. Our results show that expertise matters: although the effect of expertise varies depending on the metric used to measure quality, the level of expertise is an important factor in the quality of accessibility evaluations of web pages. In brief, when pages are evaluated by nonexperts, we observe a drop in validity and reliability. We also observe a negative monotonic relationship between the number of judges and reproducibility: more evaluators mean more diverse outputs. After five experts, reproducibility stabilizes, but this is not the case with nonexperts. The ability to detect all the problems increases with the number of judges: with 3 experts all problems can be found, whereas 14 nonexperts are needed to reach the same level. Even though our data show that experts rated pages differently, the difference is quite small. Finally, compared to nonexperts, experts spent much less time (with smaller variability among them), were significantly more confident, and rated themselves as more productive. The article discusses practical implications regarding how BW results should be interpreted, how to recruit evaluators, and what happens when more than one evaluator is hired. Supplemental materials are available for this article.
Go to the publisher's online edition of Human-Computer Interaction for statistical details and additional measures for this article.
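The abstract's headline numbers (3 experts vs. 14 nonexperts to find all problems) can be illustrated with a simple independence model of pooled detection. The per-judge detection rates below are hypothetical, chosen only so the sketch reproduces those numbers; the paper's actual analysis rests on its own data, not this model.

```python
def judges_needed(p_detect: float, coverage: float = 0.99) -> int:
    """Smallest number of independent judges whose pooled findings
    cover at least `coverage` of the problems, assuming each judge
    independently detects each problem with probability p_detect."""
    n = 1
    # Probability that at least one of n judges finds a given problem
    # is 1 - (1 - p_detect)**n; grow n until it reaches the target.
    while 1 - (1 - p_detect) ** n < coverage:
        n += 1
    return n

# Hypothetical per-judge detection rates (not taken from the paper):
print(judges_needed(0.80))  # experts    -> 3
print(judges_needed(0.29))  # nonexperts -> 14
```

Under this model the required pool size grows roughly as log(1 - coverage) / log(1 - p_detect), which is why a modest drop in per-judge skill inflates the number of evaluators so sharply.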
Pages: 246-283
Page count: 38
Related Papers
50 items
  • [31] For the external evaluation of AT systems by task-based methods
    Blanchon, Herve
    Boitet, Christian
    TRAITEMENT AUTOMATIQUE DES LANGUES, 2007, 48 (01): : 33 - 65
  • [32] Usability Evaluation of a VibroTactile API for Web-Based Virtual Reality Experiences
    Canelha, Jose
    Cardoso, Jorge C. S.
    Perrotta, Andre
    HUMAN-COMPUTER INTERACTION, INTERACT 2021, PT V, 2021, 12936 : 517 - 521
  • [33] An approach of product usability evaluation based on Web mining in feature fatigue analysis
    Wu, Mingxing
    Wang, Liya
    Li, Ming
    Long, Huijun
    COMPUTERS & INDUSTRIAL ENGINEERING, 2014, 75 : 230 - 238
  • [34] Heuristic Evaluation and Usability Testing as Complementary Methods: A Case Study
    Murillo, Braulio
    Sang, Jose Pow
    Paz, Freddy
    DESIGN, USER EXPERIENCE, AND USABILITY: THEORY AND PRACTICE, DUXU 2018, PT I, 2018, 10918 : 470 - 478
  • [35] Academic methods for usability evaluation of serious games: a systematic review
    Yáñez-Gómez, Rosa
    Cascado-Caballero, Daniel
    Sevillano, José-Luis
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (04) : 5755 - 5784
  • [36] Layered evaluation of interactive adaptive systems: framework and formative methods
    Paramythis, Alexandros
    Weibelzahl, Stephan
    Masthoff, Judith
    USER MODELING AND USER-ADAPTED INTERACTION, 2010, 20 (05) : 383 - 453
  • [38] Cognitive engineering methods as usability evaluation tools for medical equipment
    Liljegren, E
    Osvalder, AL
    INTERNATIONAL JOURNAL OF INDUSTRIAL ERGONOMICS, 2004, 34 (01) : 49 - 62
  • [39] A web-based platform for quality management of elderly care: usability evaluation of Ankira®
    Loureiro, Natalia
    Fernandes, Marco
    Alvarelhao, Joaquim
    Ferreira, Alina
    Caravau, Hilma
    Martins, Ana Isabel
    Cerqueira, Margarida
    Queiros, Alexandra
    CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS/INTERNATIONAL CONFERENCE ON PROJECT MANAGEMENT/CONFERENCE ON HEALTH AND SOCIAL CARE INFORMATION SYSTEMS AND TECHNOLOGIES, CENTERIS/PROJMAN / HCIST 2015, 2015, 64 : 666 - 673
  • [40] Automating the Evaluation of Usability Remotely for Web Applications via a Model-Based Approach
    Harrati, Nouzha
    Bouchrika, Imed
    Tari, Abdelkamel
    Ladjailia, Ammar
    2015 FIRST INTERNATIONAL CONFERENCE ON NEW TECHNOLOGIES OF INFORMATION AND COMMUNICATION (NTIC), 2015,