A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement

Cited by: 14
Authors
Sarkar, Surjodeep [1 ]
Gaur, Manas [1 ]
Chen, Lujie Karen [2 ]
Garg, Muskan [3 ]
Srivastava, Biplav [4 ]
Affiliations
[1] Univ Maryland Baltimore Cty, Dept Comp Sci & Elect Engn, Baltimore, MD 21250 USA
[2] Univ Maryland Baltimore Cty, Dept Informat Syst, Baltimore, MD USA
[3] Mayo Clin, Dept AI & Informat, Rochester, MN USA
[4] Univ South Carolina, AI Inst, Columbia, SC USA
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE | 2023, Vol. 6
Keywords
explainable AI; safety; conversational AI; evaluation metrics; knowledge-infused learning; mental health; AI; SYSTEM;
DOI
10.3389/frai.2023.1229805
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on their promise during active communications. One such gap is their inability to explain their decisions to patients and MHPs, making conversations less trustworthy. Additionally, VMHAs can be prone to providing unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs on the grounds of user-level explainability and safety, a set of properties desirable for the broader adoption of VMHAs. This includes an examination of ChatGPT, a conversational agent built on AI-driven models (GPT-3.5 and GPT-4) that has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of the AI, natural language processing, and MHP communities, the review identifies opportunities for technological progress in VMHAs to ensure their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations.
Pages: 14