The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis

Cited: 5
Authors
Ferrario, Andrea [1,2]
Sedlakova, Jana [1,3,4]
Trachsel, Manuel [5,6,7]
Affiliations
[1] Univ Zurich, Inst Biomed Eth & Hist Med, Winterthurerstr 30, CH-8006 Zurich, Switzerland
[2] Swiss Fed Inst Technol, Mobiliar Lab Analyt ETH, Zurich, Switzerland
[3] Univ Zurich, Digital Soc Initiat, Zurich, Switzerland
[4] Univ Zurich, Inst Implementat Sci Hlth Care, Zurich, Switzerland
[5] Univ Basel, Basel, Switzerland
[6] Univ Hosp Basel, Basel, Switzerland
[7] Univ Psychiat Clin Basel, Basel, Switzerland
Source
JMIR MENTAL HEALTH | 2024, Vol. 11
Keywords
generative AI; large language models; large language model; LLM; LLMs; machine learning; ML; natural language processing; NLP; deep learning; depression; mental health; mental illness; mental disease; mental diseases; mental illnesses; artificial intelligence; AI; digital health; digital technology; digital intervention; digital interventions; ethics; FRAMEWORK
DOI
10.2196/56569
Chinese Library Classification (CLC)
R749 [Psychiatry]
Subject Classification Code
100205
Abstract
Large language model (LLM)-powered services are gaining popularity in various applications due to their exceptional performance in many tasks, such as sentiment analysis and question answering. Recently, research has begun exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the use case of patients with depression: the tendency to humanize LLM-enhanced CAI and their lack of contextualized robustness. Our approach is interdisciplinary, drawing on considerations from philosophy, psychology, and computer science. We argue that the humanization of LLM-enhanced CAI hinges on reflecting on what it means to simulate "human-like" features with LLMs and on what role these systems should play in interactions with humans. Further, ensuring that the robustness of LLMs is contextualized requires considering the specificities of language production in individuals with depression, as well as how that language evolves over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.
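To illustrate what "contextualized robustness" could involve in practice, the following minimal Python sketch (not from the paper; query_model, PARAPHRASE_SETS, and the similarity metric are illustrative placeholders) probes whether a conversational system responds consistently when the same user concern is phrased in depression-typical language variants, such as absolutist wording ("never", "always").

```python
# Minimal sketch of a perturbation-based robustness check for an
# LLM-enhanced CAI system. All names here are illustrative assumptions,
# not an API described in the paper.
from difflib import SequenceMatcher

# Hypothetical paraphrase sets: one concern, several phrasings that
# reflect features of depressive language production.
PARAPHRASE_SETS = [
    [
        "I have trouble sleeping and it is affecting my day.",
        "I never sleep anymore; everything is ruined.",
        "Sleep is impossible for me; it always goes wrong.",
    ],
]

def query_model(prompt: str) -> str:
    # Placeholder: substitute a real call to the CAI system under audit.
    return f"(model response to: {prompt})"

def consistency(responses: list[str]) -> float:
    # Mean pairwise surface similarity of the responses.
    pairs = [
        (a, b)
        for i, a in enumerate(responses)
        for b in responses[i + 1:]
    ]
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

for variants in PARAPHRASE_SETS:
    responses = [query_model(v) for v in variants]
    print(f"consistency across phrasings: {consistency(responses):.2f}")
```

A real audit would replace the surface-similarity score with clinically validated criteria and would also track how a patient's language changes over time, in line with the paper's point that robustness must be contextualized to language production in depression and its evolution.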
Pages: 13