A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness

Cited by: 7
Authors
Heersmink, Richard [1 ]
de Rooij, Barend [1 ]
Vazquez, Maria Jimena Clavel [1 ]
Colombo, Matteo [1 ]
Affiliations
[1] Tilburg University, School of Humanities and Digital Sciences, Department of Philosophy, Tilburg, Netherlands
Keywords
ChatGPT; Bard; Large language models; Transparency; Cognitive artifacts; Generative AI; Conversational AI; Algorithms; Knowledge; Trust; Big data; Artifacts; Talking; Belief
DOI
10.1007/s10676-024-09777-3
Chinese Library Classification
B82 [Ethics (Moral Philosophy)]
Abstract
This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots is the large language model (LLM): a generative artificial intelligence (AI) system trained on a massive dataset of text extracted from the Web. We conceptualise LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, and seeking information. Phenomenologically, LLMs can be experienced as a "quasi-other"; when that happens, users anthropomorphise them. For most users, current LLMs are black boxes, i.e., for the most part, they lack data transparency and algorithmic transparency. They can, however, be phenomenologically and informationally transparent, in which case there is an interactional flow. Anthropomorphising and interactional flow can, in some users, create an attitude of (unwarranted) trust towards the outputs LLMs generate. We conclude by drawing on the epistemology of trust and testimony to examine the epistemic implications of these dimensions. Whilst LLMs generally generate accurate responses, we identify two epistemic pitfalls. Ideally, users should be able to match the level of trust they place in LLMs to the degree to which LLMs are trustworthy. However, both the data and algorithmic opacity of LLMs and their phenomenological and informational transparency can make it difficult for users to calibrate their trust correctly. The effects of these limitations are twofold: users may adopt unwarranted attitudes of trust towards the outputs of LLMs (which is particularly problematic when LLMs hallucinate), and the trustworthiness of LLMs may be undermined.
Pages: 15