A conceptual framework for establishing trust in real world intelligent systems

Cited by: 3
Authors
Guckert, Michael [1 ,2 ]
Gumpfer, Nils [2 ]
Hannig, Jennifer [2 ]
Keller, Till [3 ]
Urquhart, Neil [4 ]
Affiliations
[1] TH Mittelhessen Univ Appl Sci, Nat Wissensch & Datenverarbeitung, Dept MND Math, Wilhelm Leuschner Str 13, D-61169 Friedberg, Germany
[2] TH Mittelhessen Univ Appl Sci, KITE Kompetenzzentrum Informat Technol, Cognit Informat Syst, D-61169 Friedberg, Germany
[3] Justus Liebig Univ Giessen, Dept Internal Med 1, Cardiol, D-35390 Giessen, Germany
[4] Edinburgh Napier Univ, Edinburgh EH11 4DY, Midlothian, Scotland
Source
COGNITIVE SYSTEMS RESEARCH | 2021, Vol. 68
Keywords
Intelligent systems; AI; Trust; Explainable AI; Knowledge management; Knowledge patterns; ATRIAL-FIBRILLATION; MACHINES;
DOI
10.1016/j.cogsys.2021.04.001
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Intelligent information systems that contain emergent elements often encounter trust problems because their results are not sufficiently explained and the procedure itself cannot be fully retraced. This is caused by a control flow that depends either on stochastic elements or on the structure and relevance of the input data. Trust in such algorithms can be established by letting users interact with the system so that they can explore results and find patterns that can be compared with their expected solution. Reflecting features and patterns of human understanding of a domain against algorithmic results can create awareness of such patterns and may increase the trust that a user has in the solution. If expectations are not met, close inspection can be used to decide whether a solution conforms to the expectations or goes beyond them. By either accepting or rejecting a solution, the user's set of expectations evolves, establishing a learning process for the user. In this paper we present a conceptual framework that reflects and supports this process. The framework is the result of an analysis of two exemplary case studies from two different disciplines, each involving an information system that assists experts in complex tasks. (C) 2021 The Author(s). Published by Elsevier B.V.
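The accept/reject loop described in the abstract can be illustrated with a small, purely hypothetical sketch (not taken from the paper): a user's set of expected patterns is compared against the patterns exhibited by an algorithmic solution, and the set evolves as solutions are accepted or rejected. The class name, method names, and the representation of patterns as plain strings are assumptions made for illustration only.

from dataclasses import dataclass, field

@dataclass
class ExpectationSet:
    """Patterns a domain expert currently expects to see in a solution."""
    patterns: set[str] = field(default_factory=set)

    def compare(self, solution_patterns: set[str]) -> tuple[set[str], set[str]]:
        """Split a solution's patterns into those that were expected
        and those that go beyond the expected."""
        expected = self.patterns & solution_patterns
        unexpected = solution_patterns - self.patterns
        return expected, unexpected

    def accept(self, solution_patterns: set[str]) -> None:
        """Accepting a solution adds its unexpected patterns, so the
        user's expectations grow with their understanding."""
        self.patterns |= solution_patterns

    def reject(self, solution_patterns: set[str]) -> None:
        """Rejecting a solution removes expectations that the rejected
        solution satisfied, marking them for re-examination."""
        self.patterns -= solution_patterns

# Example interaction: the user inspects a solution, sees which of its
# patterns were expected, and decides whether to accept it.
expectations = ExpectationSet({"low risk score", "monotone dose response"})
solution = {"low risk score", "unusual age interaction"}
expected, unexpected = expectations.compare(solution)
print("expected:", expected, "unexpected:", unexpected)
expectations.accept(solution)  # trust grows; expectations now include the new pattern

In this sketch, accepting a solution with an unexpected pattern widens the expectation set, while rejecting one narrows it; this mirrors, under the stated assumptions, the evolving set of expectations and the user-side learning process the framework describes.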
Pages: 143-155
Page count: 13