Can AI explain AI? Interactive co-construction of explanations among human and artificial agents

Cited by: 4
Authors
Klowait, Nils [1 ,2 ]
Erofeeva, Maria [3 ]
Lenke, Michael [1 ,2 ]
Horwath, Ilona [1 ,2 ]
Buschmeier, Hendrik [1 ,4 ]
Affiliations
[1] SFB Transregio 318 Constructing Explainability, Mersinweg 7, D-33100 Paderborn, Germany
[2] Paderborn Univ, Paderborn, Germany
[3] Free Univ Brussels, Ixelles, Belgium
[4] Bielefeld Univ, Fac Linguist & Literary Studies, Digital Linguist, Bielefeld, Germany
Keywords
ChatGPT; co-constructed explainability; ethnomethodological conversation analysis; human-computer interaction; multimodality
DOI
10.1177/17504813241267069
Chinese Library Classification
G2 [Information and Knowledge Dissemination]
Discipline codes
05; 0503
Abstract
This study investigates the potential of advanced conversational artificial intelligence (AI) to help people understand complex AI systems. In line with conversation-analytic research, we view the participatory role of AI as dynamically unfolding in the situation rather than as predetermined by its architecture. To study how users make sense of opaque AI systems, we set up a naturalistic encounter between human participants and two AI systems developed in-house: a reinforcement learning simulation and a GPT-4-based explainer chatbot. Our results reveal that an explainer AI only truly functions as such when participants actively engage with it as a co-constructive agent. The interface's spatial configuration and the explainer AI's asynchronous temporality, combined with users' presuppositions about its role, jointly shape whether the AI is treated as a dialogical co-participant in the interaction. Participants establish evidentiality conventions and sensemaking procedures that may diverge from a system's intended design or function.
Pages: 917-930 (14 pages)