Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis

Cited by: 133
Authors
Choudhury, Avishek [1 ,2 ]
Shamszare, Hamid [1 ]
Affiliations
[1] West Virginia University, Industrial and Management Systems Engineering, Benjamin M Statler College of Engineering and Mineral Resources, Morgantown, WV, USA
[2] West Virginia University, Benjamin M Statler College of Engineering and Mineral Resources, Industrial and Management Systems Engineering, 321 Engineering Sciences Building, 1306 Evansdale Drive, Morgantown, WV 26506, USA
Funding
UK Research and Innovation (UKRI);
Keywords
ChatGPT; trust in AI; artificial intelligence; technology adoption; behavioral intention; chatbot; human factors; trust; adoption; intent; survey; shared accountability; AI policy; PLS-SEM;
DOI
10.2196/47184
Chinese Library Classification
R19 [Health care organization and services (health services administration)];
Abstract
Background: ChatGPT (Chat Generative Pre-trained Transformer) has gained popularity for its ability to generate human-like responses. Overreliance or blind trust in ChatGPT, especially in high-stakes decision-making contexts, can have severe consequences; conversely, a lack of trust in the technology can lead to underuse and missed opportunities. Objective: This study investigated the impact of users' trust in ChatGPT on their intent to use and actual use of the technology. Four hypotheses were tested: (1) users' intent to use ChatGPT increases with their trust in the technology; (2) the actual use of ChatGPT increases with users' intent to use the technology; (3) the actual use of ChatGPT increases with users' trust in the technology; and (4) users' intent to use ChatGPT partially mediates the effect of trust in the technology on its actual use. Methods: This study distributed a web-based survey to adults in the United States who actively used ChatGPT (version 3.5) at least once a month, from February through March 2023. The survey responses were used to develop 2 latent constructs, Trust and Intent to Use, with Actual Use as the outcome variable. The study used partial least squares structural equation modeling (PLS-SEM) to evaluate and test the structural model and hypotheses. Results: In total, 607 respondents completed the survey. The primary uses of ChatGPT were information gathering (n=219, 36.1%), entertainment (n=203, 33.4%), and problem-solving (n=135, 22.2%), with smaller numbers using it for health-related queries (n=44, 7.2%) and other activities (n=6, 1%). Our model explained 50.5% and 9.8% of the variance in Intent to Use and Actual Use, respectively, with path coefficients of 0.711 and 0.221 for Trust on Intent to Use and Actual Use, respectively.
The bootstrapped results rejected all 4 null hypotheses, with Trust having a significant direct effect on both Intent to Use (beta=0.711, 95% CI 0.656-0.764) and Actual Use (beta=0.302, 95% CI 0.229-0.374). The indirect effect of Trust on Actual Use, partially mediated by Intent to Use, was also significant (beta=0.113, 95% CI 0.001-0.227). Conclusions: Our results suggest that trust is critical to users' adoption of ChatGPT. It remains crucial to highlight that ChatGPT was not initially designed for health care applications; therefore, overreliance on it for health-related advice could lead to misinformation and subsequent health risks. Efforts must focus on improving ChatGPT's ability to distinguish between queries it can safely handle and those that should be redirected to human experts (health care professionals). Although risks are associated with excessive trust in artificial intelligence-driven chatbots such as ChatGPT, these risks can be reduced by advocating for shared accountability and fostering collaboration among developers, subject matter experts, and human factors researchers.
Pages: 11