A top-level model of case-based argumentation for explanation: Formalisation and experiments

Cited by: 20
Authors
Prakken, Henry [1 ]
Ratsma, Rosa [1 ]
Affiliations
[1] Univ Utrecht, Dept Informat & Comp Sci, Utrecht, Netherlands
Keywords
Explaining machine learning; argumentation; case-based reasoning; BLACK-BOX; DIMENSIONS;
DOI
10.3233/AAC-210009
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper proposes a formal top-level model for explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally with three data sets. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of relevant factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the approach is motivated by legal decision making, it also applies to other kinds of decision making, such as commercial decisions about loan applications or employee hiring, as long as the outcome is binary and the input conforms to this paper's factor or dimension format. The model is top-level in that it can be extended with more refined accounts of similarities and differences between cases. It is shown to overcome several limitations of similar argumentation-based explanation models, which have only binary features and do not represent the tendency of features towards particular outcomes. The results of the experimental evaluation studies indicate that the model may be feasible in practice, but that further development and experimentation are needed to confirm its usefulness as an explanation model. The main challenges here are selecting from a large number of possible explanations, reducing the number of features in the explanations, and adding more meaningful information to them. It also remains to be investigated how suitable our approach is for explaining non-linear models.
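The factor-based case comparison the abstract describes can be made concrete with a small sketch. The Python fragment below is an illustrative toy under assumed names, not the paper's formal model: the Case class, the loan-domain factors, their polarities, and the overlap heuristic for picking a precedent are all invented for this example. It merely shows the flavour of a HYPO/CATO-style explanation, citing a precedent with the same outcome as the focus case and listing shared factors and distinctions.

```python
# Illustrative sketch only (assumed names and toy data, not the paper's model):
# explain a binary prediction for a focus case by citing a precedent with the
# same outcome and reporting shared factors and distinctions.
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    factors: frozenset  # binary factors present in the case
    outcome: str        # "pro" or "con"

def explain(focus_factors, prediction, case_base):
    """Pick the precedent with the predicted outcome that shares the most
    factors with the focus case, and report similarities and differences."""
    candidates = [c for c in case_base if c.outcome == prediction]
    best = max(candidates, key=lambda c: len(c.factors & focus_factors))
    return {
        "precedent": best.name,
        "shared factors": sorted(best.factors & focus_factors),
        "distinctions (focus case only)": sorted(focus_factors - best.factors),
        "distinctions (precedent only)": sorted(best.factors - focus_factors),
    }

if __name__ == "__main__":
    base = [
        Case("c1", frozenset({"stable_income", "long_employment"}), "pro"),
        Case("c2", frozenset({"prior_default", "high_debt_ratio"}), "con"),
    ]
    focus = frozenset({"stable_income", "high_debt_ratio"})
    print(explain(focus, "pro", base))
```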
Pages: 159-194
Page count: 36