Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis

Cited: 133
Authors
Choudhury, Avishek [1 ,2 ]
Shamszare, Hamid [1 ]
Institutions
[1] West Virginia Univ, Ind & Management Syst Engn, Benjamin M Statler Coll Engn & Mineral Resources, Morgantown, WV USA
[2] West Virginia Univ, Benjamin M Statler Coll Engn & Mineral Resources, Ind & Management Syst Engn, 321 Engn Sci Bldg,1306 Evansdale Dr, Morgantown, WV 26506 USA
Funding
UK Research and Innovation;
Keywords
ChatGPT; trust in AI; artificial intelligence; technology adoption; behavioral intention; chatbot; human factors; trust; adoption; intent; survey; shared accountability; AI policy; PLS-SEM;
DOI
10.2196/47184
Chinese Library Classification
R19 [Health care organization and services (health services management)];
Abstract
Background: ChatGPT (Chat Generative Pre-trained Transformer) has gained popularity for its ability to generate human-like responses. Overreliance or blind trust in ChatGPT, especially in high-stakes decision-making contexts, can have severe consequences; conversely, a lack of trust in the technology can lead to underuse and missed opportunities.

Objective: This study investigated the impact of users' trust in ChatGPT on their intent to use and actual use of the technology. Four hypotheses were tested: (1) users' intent to use ChatGPT increases with their trust in the technology; (2) the actual use of ChatGPT increases with users' intent to use the technology; (3) the actual use of ChatGPT increases with users' trust in the technology; and (4) users' intent to use ChatGPT partially mediates the effect of trust in the technology on its actual use.

Methods: A web-based survey was distributed from February 2023 through March 2023 to adults in the United States who actively used ChatGPT (version 3.5) at least once a month. The survey responses were used to develop 2 latent constructs, Trust and Intent to Use, with Actual Use as the outcome variable. Partial least squares structural equation modeling (PLS-SEM) was used to evaluate the structural model and test the hypotheses.

Results: In total, 607 respondents completed the survey. The primary uses of ChatGPT were information gathering (n=219, 36.1%), entertainment (n=203, 33.4%), and problem-solving (n=135, 22.2%), with smaller numbers using it for health-related queries (n=44, 7.2%) and other activities (n=6, 1%). The model explained 50.5% and 9.8% of the variance in Intent to Use and Actual Use, respectively, with path coefficients of 0.711 and 0.221 for Trust on Intent to Use and Actual Use, respectively. The bootstrapped results supported all 4 hypotheses, with Trust having a significant direct effect on both Intent to Use (beta=0.711, 95% CI 0.656-0.764) and Actual Use (beta=0.302, 95% CI 0.229-0.374). The indirect effect of Trust on Actual Use, partially mediated by Intent to Use, was also significant (beta=0.113, 95% CI 0.001-0.227).

Conclusions: These results suggest that trust is critical to users' adoption of ChatGPT. It remains crucial to highlight that ChatGPT was not initially designed for health care applications; therefore, overreliance on it for health-related advice could lead to misinformation and subsequent health risks. Efforts should focus on improving ChatGPT's ability to distinguish between queries it can safely handle and those that should be redirected to human experts (health care professionals). Although risks are associated with excessive trust in artificial intelligence-driven chatbots such as ChatGPT, these risks can be reduced by advocating for shared accountability and fostering collaboration between developers, subject matter experts, and human factors researchers.
Pages: 11