The Effect of Progressive Disclosure in the Transparency of Large Language Models

Times Cited: 0
Authors
Muralidhar, Deepa [1 ]
Belloum, Rafik [2 ]
de Oliveira, Kathia Marcal [2 ]
Ashok, Ashwin [1 ]
Mohammad, Pardaz Banu [1 ]
Affiliations
[1] Georgia State Univ, Atlanta, GA 30302 USA
[2] Univ Polytech Hauts France, LAMIH, CNRS, UMR 8201, F-59313 Valenciennes, France
Source
COMPUTER-HUMAN INTERACTION RESEARCH AND APPLICATIONS, CHIRA 2024, PT I | 2025 / Vol. 2370
Keywords
Progressive disclosure; Transparency; Explainable user interface; Explainable AI; XAI; artificial intelligence; HCI; AI text generation; LLM;
DOI
10.1007/978-3-031-82633-7_17
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recent advances in artificial intelligence (AI) text generation systems have resulted in their ability to provide precise recommendations in response to users' questions (prompts). However, AI models often operate as black boxes, making it challenging for users to comprehend their inner workings. The transparency of these models is crucial for users to gain a better understanding of how AI systems function. While the Human-Computer Interaction (HCI) community has advocated for design principles such as progressive disclosure to improve transparency, we still lack empirical evidence validating its efficacy for AI systems, especially in the context of LLM-based text generation. Addressing this gap, this paper presents a user study with 30 participants that investigates the effect of progressive disclosure, and of adapting explanations to users' mental models, on the transparency of AI text generation systems. The findings suggest that users prefer on-demand explanations and value diverse explanation methods, especially when the explanations gradually build users' understanding of the AI system. Additionally, qualitative data shows a marginal preference for word clouds over keyword highlighting. User feedback indicates that explanations such as word-pair cosine values, which leverage the interpretability of AI models, are less suitable for lay users. Altering the visual presentation of these word-pair cosine values from a table of numbers to a bar graph did not increase user satisfaction with this explanation technique.
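The word-pair cosine values mentioned in the abstract measure the angular similarity between word-embedding vectors. The following is a minimal sketch of that computation, assuming NumPy; the words, dimensions, and values are hypothetical illustrations, not taken from the study.

    import numpy as np

    def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
        """Cosine of the angle between two embedding vectors: u.v / (|u||v|)."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical 4-dimensional word embeddings; a real system would take
    # these from the language model's embedding layer.
    embeddings = {
        "king":  np.array([0.80, 0.10, 0.40, 0.30]),
        "queen": np.array([0.70, 0.20, 0.50, 0.30]),
        "apple": np.array([0.05, 0.90, 0.10, 0.60]),
    }

    # Word-pair cosine values of the kind an explanation interface might
    # show, printed as a simple table.
    for a, b in [("king", "queen"), ("king", "apple")]:
        sim = cosine_similarity(embeddings[a], embeddings[b])
        print(f"cosine({a}, {b}) = {sim:.3f}")

Values near 1 indicate semantically related word pairs; the study found that presenting such numbers, whether as a table or a bar graph, was less suitable for lay users.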
Pages: 269-288
Page Count: 20