The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis

Cited by: 5
Authors
Ferrario, Andrea [1 ,2 ]
Sedlakova, Jana [1 ,3 ,4 ]
Trachsel, Manuel [5 ,6 ,7 ]
Affiliations
[1] Univ Zurich, Inst Biomed Eth & Hist Med, Winterthurerstr 30, CH-8006 Zurich, Switzerland
[2] Swiss Fed Inst Technol, Mobiliar Lab Analyt ETH, Zurich, Switzerland
[3] Univ Zurich, Digital Soc Initiat, Zurich, Switzerland
[4] Univ Zurich, Inst Implementat Sci Hlth Care, Zurich, Switzerland
[5] Univ Basel, Basel, Switzerland
[6] Univ Hosp Basel, Basel, Switzerland
[7] Univ Psychiat Clin Basel, Basel, Switzerland
Source
JMIR MENTAL HEALTH | 2024 / Vol. 11
Keywords
generative AI; large language models; LLMs; machine learning; ML; natural language processing; NLP; deep learning; depression; mental health; mental illness; artificial intelligence; AI; digital health; digital technology; digital interventions; ethics; framework
DOI
10.2196/56569
Chinese Library Classification
R749 [Psychiatry]
Subject Classification Code
100205
Abstract
Large language model (LLM)-powered services are gaining popularity across applications owing to their strong performance on many tasks, such as sentiment analysis and question answering. Recently, research has begun exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the use case of patients with depression: the tendency to humanize LLM-enhanced CAI and its lack of contextualized robustness. Our approach is interdisciplinary, drawing on considerations from philosophy, psychology, and computer science. We argue that the humanization of LLM-enhanced CAI hinges on reflection about what it means to simulate "human-like" features with LLMs and what role these systems should play in interactions with humans. Further, ensuring the contextualized robustness of LLMs requires considering the specificities of language production in individuals with depression, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.
Pages: 13
References: 93 items