COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations

Cited by: 72
Authors
Abdul, Ashraf [1 ]
von der Weth, Christian [1 ]
Kankanhalli, Mohan [1 ]
Lim, Brian Y. [1 ]
Affiliations
[1] Natl Univ Singapore, Sch Comp, Singapore, Singapore
Source
PROCEEDINGS OF THE 2020 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'20) | 2020
Keywords
explanations; explainable artificial intelligence; cognitive load; visual explanations; generalized additive models; GRAPH COMPREHENSION; BAR; PERFORMANCE; MEMORY
DOI
10.1145/3313831.3376615
Chinese Library Classification
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Interpretable machine learning models trade off accuracy for simplicity to make explanations more readable and easier to comprehend. Drawing from cognitive psychology theories in graph comprehension, we formalize readability as visual cognitive chunks to measure and moderate the cognitive load in explanation visualizations. We present Cognitive-GAM (COGAM) to generate explanations with desired cognitive load and accuracy by combining the expressive nonlinear generalized additive models (GAM) with simpler sparse linear models. We calibrated visual cognitive chunks with reading time in a user study, characterized the trade-off between cognitive load and accuracy for four datasets in simulation studies, and evaluated COGAM against baselines with users. We found that COGAM can decrease cognitive load without decreasing accuracy and/or increase accuracy without increasing cognitive load. Our framework and empirical measurement instruments for cognitive load will enable more rigorous assessment of the human interpretability of explainable AI.
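The record does not include COGAM's implementation, but the core idea the abstract describes, an additive model that blends a GAM's expressive nonlinear shape functions with simpler sparse linear terms, can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's method: the function names, the backfitting-style residual pass, and the binned (piecewise-constant) shape functions are all assumptions made for the sketch.

```python
import numpy as np

def fit_hybrid_additive(X, y, nonlinear_idx, n_bins=8):
    """Toy COGAM-like fit: linear terms for most features, a binned
    shape function (cheap stand-in for a GAM spline) for selected ones."""
    n, d = X.shape
    linear_idx = [j for j in range(d) if j not in nonlinear_idx]
    # Linear part: least squares on the linear features plus an intercept.
    A = np.column_stack([X[:, linear_idx], np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    # Nonlinear part: for each selected feature, bin it by quantiles and
    # use the per-bin mean residual as a piecewise-constant shape function.
    shapes = {}
    for j in nonlinear_idx:
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
        bins = np.clip(np.searchsorted(edges, X[:, j], side="right") - 1,
                       0, n_bins - 1)
        means = np.array([resid[bins == b].mean() if np.any(bins == b)
                          else 0.0 for b in range(n_bins)])
        shapes[j] = (edges, means)
        resid = resid - means[bins]   # pass residual to the next feature
    return coef, linear_idx, shapes

def predict_hybrid(X, coef, linear_idx, shapes):
    """Sum the linear part and each feature's shape-function contribution."""
    yhat = np.column_stack([X[:, linear_idx], np.ones(X.shape[0])]) @ coef
    for j, (edges, means) in shapes.items():
        bins = np.clip(np.searchsorted(edges, X[:, j], side="right") - 1,
                       0, len(means) - 1)
        yhat = yhat + means[bins]
    return yhat
```

In this framing, each linear term costs one visual cognitive chunk (a slope) while each shape function costs several (one per segment the reader must parse), so choosing which features get nonlinear terms is how the accuracy/cognitive-load trade-off the abstract describes would be moderated.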
Pages: 14