Goal-Driven XAI: The Normative Value of the Ladder of Regret in Human-Centered AI

Times Cited: 0
Authors
Martin-Pena, Rosa E. [1 ,2 ]
Affiliations
[1] German Res Ctr Artificial Intelligence DFKI, Berlin, Germany
[2] Univ Valladolid Uva, Dept Philosophy, Valladolid, Spain
Source
HCI INTERNATIONAL 2024 - LATE BREAKING POSTERS, HCII 2024, PT II | 2025, Vol. 2320
Keywords
Human-centered AI; Responsible decision-making; Goal-driven XAI; Regret; Noise; Somatic markers
DOI
10.1007/978-3-031-78531-3_26
Chinese Library Classification
TP3 [Computing technology; computer technology]
Discipline Code
0812
Abstract
Designing AI systems with human-like cognitive and affective abilities is not without risks. While in normative language and legal theory responsibility falls on whoever breaks the law, human agents have developed various strategies to avoid responsibility by justifying their behavior. Such justifications take the form of post hoc explanations constructed to support a behavior or choice, and they can contribute to what is known as cognitive bias. This research, however, focuses not on bias but on analyzing, and proposing a solution to, another source of human error: noise. The concept, taken from Kahneman and his colleagues, refers to unwanted variability in judgments that should ideally be identical. Emotions are one of the primary sources of noise these authors identify in human behavior, yet not all emotions can be treated equally as noise sources. A counterexample comes from studies in social cognition and the behavioral sciences that point to regret's essential role in assigning and assuming responsibility. This paper proposes an interdisciplinary theoretical framework for Responsible AI called "The Ladder of Regret," organized into three levels at which human and artificial agents cooperate within goal-driven XAI. The aim is to develop a Theory of Mind about the normative value of regret and to illustrate whether the somatic marker hypothesis, driven by the counterfactual component of anticipated regret, could serve as a recommendation norm for preventing unspecified errors before they occur.
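The notion of anticipated regret invoked in the abstract has a standard decision-theoretic reading: the gap between the expected outcome of a chosen action and that of the best counterfactual alternative. The sketch below is not from the paper; the function names, utility values, and regret threshold are hypothetical placeholders. It only illustrates, under those assumptions, how anticipated regret could be computed over candidate actions and used as a recommendation norm that flags a risky choice before the error occurs.

```python
# Illustrative sketch (not from the paper): anticipated regret as a
# recommendation norm. Utilities, action names, and the threshold are
# hypothetical placeholders.

def anticipated_regret(expected_utility: dict[str, float]) -> dict[str, float]:
    """For each action: regret = best alternative's expected utility minus its own."""
    best = max(expected_utility.values())
    return {a: best - u for a, u in expected_utility.items()}

def recommend(expected_utility: dict[str, float], regret_threshold: float = 0.2):
    """Return the regret-minimizing action and flag actions whose anticipated
    regret exceeds the threshold, i.e. choices the norm warns against."""
    regrets = anticipated_regret(expected_utility)
    warnings = {a: r for a, r in regrets.items() if r > regret_threshold}
    chosen = min(regrets, key=regrets.get)
    return chosen, warnings

if __name__ == "__main__":
    # Hypothetical expected utilities for three candidate actions.
    utilities = {"approve": 0.9, "defer": 0.6, "reject": 0.3}
    choice, flagged = recommend(utilities)
    print("recommended:", choice)               # regret-minimizing action
    print("flagged as regret-prone:", flagged)  # actions above the threshold
```

In this reading, the counterfactual comparison (what the best alternative would have yielded) supplies the anticipatory signal the framework associates with somatic markers; how the three levels of the Ladder of Regret operationalize that signal is developed in the paper itself.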
Pages: 233-243
Number of pages: 11