Explanatory machine learning for sequential human teaching

Cited by: 0
Authors
Lun Ai
Johannes Langer
Stephen H. Muggleton
Ute Schmid
Affiliations
[1] Imperial College London, Department of Computing
[2] University of Bamberg, Cognitive Systems Group
Source
Machine Learning | 2023, Volume 112
Keywords
Explainable artificial intelligence; Machine learning comprehensibility; Meta-interpretive learning; Inductive logic programming
DOI: not available
Abstract
The comprehensibility of machine-learned theories has recently drawn increasing attention. Inductive logic programming uses logic programming to derive logic theories from small amounts of data based on abduction and induction techniques. Learned theories are represented as rules, providing declarative descriptions of the acquired knowledge. In earlier work, the authors provided the first evidence of a measurable increase in human comprehension based on machine-learned logic rules for simple classification tasks. In a later study, it was found that presenting machine-learned explanations to humans can produce both beneficial and harmful effects in the context of game learning. We continue our investigation of comprehensibility by examining how the order in which concepts are presented affects human comprehension. In this work, we examine the explanatory effects of curriculum order and of the presence of machine-learned explanations for sequential problem-solving. We show that (1) there exist tasks A and B such that learning A before B results in better human comprehension than learning B before A, and (2) there exist tasks A and B such that the presence of explanations when learning A improves human comprehension when subsequently learning B. We propose a framework for the effects of sequential teaching on comprehension based on an existing definition of comprehensibility and provide supporting evidence from data collected in human trials. Our empirical study involves curricula that teach novices the merge sort algorithm. Our results show that sequential teaching of concepts of increasing complexity (a) has a beneficial effect on human comprehension, (b) leads to human re-discovery of divide-and-conquer problem-solving strategies, and (c) enables humans to adapt their problem-solving strategies, with better performance when machine-learned explanations are also presented.
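For reference, the divide-and-conquer algorithm taught by the study's curricula is merge sort. The sketch below is a standard textbook Python implementation, not taken from the paper's teaching materials; the function names and test input are illustrative.

    def merge_sort(xs):
        # Base case: a list of zero or one elements is already sorted.
        if len(xs) <= 1:
            return xs
        # Divide: split the list in half and sort each half recursively.
        mid = len(xs) // 2
        left = merge_sort(xs[:mid])
        right = merge_sort(xs[mid:])
        # Conquer: merge the two sorted halves into one sorted list.
        return merge(left, right)

    def merge(left, right):
        result = []
        i = j = 0
        # Repeatedly take the smaller front element of the two halves.
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        # One half is exhausted; append the remainder of the other.
        result.extend(left[i:])
        result.extend(right[j:])
        return result

    print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # -> [1, 2, 2, 3, 4, 5, 6, 7]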
Pages: 3591–3632 (41 pages)