"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents

Times Cited: 6
Authors
Zhang, Zhiping [1 ]
Jia, Michelle [2 ]
Lee, Hao-Ping [2 ]
Yao, Bingsheng [3 ]
Das, Sauvik [2 ]
Lerner, Ada [1 ]
Wang, Dakuo [1 ]
Li, Tianshi [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA USA
[3] Rensselaer Polytech Inst, Troy, NY USA
Source
PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024 | 2024
Keywords
Large language models (LLM); Artificial general intelligence (AGI); Conversational agents; Chatbots; Privacy; Contextual integrity; Privacy risks; Privacy-enhancing technologies; Interviews; Empirical studies; SELF-DISCLOSURE; PRIVACY; TRUST;
DOI
10.1145/3613904.3642385
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.
Pages: 26