(Why) Do We Trust AI?: A Case of AI-based Health Chatbots

Times Cited: 0
Authors
Prakash, Ashish Viswanath [1 ]
Das, Saini [2 ]
Affiliations
[1] Indian Inst Management, Tiruchirappalli, Tamil Nadu, India
[2] Indian Inst Technol Kharagpur, Kharagpur, India
Keywords
Artificial Intelligence; Health Chatbot; Trust in Technology; Explainability; Contextualization; Free Simulation Experiment; ANTHROPOMORPHISM INCREASES TRUST; COMMON METHOD VARIANCE; SERVICE QUALITY; ARTIFICIAL-INTELLIGENCE; E-COMMERCE; INFORMATION QUALITY; USER ADOPTION; BLACK-BOX; RISK; IMPACT;
DOI
10.3127/ajis.v28i0.4235
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Automated chatbots powered by artificial intelligence (AI) can act as a ubiquitous point of contact, improving access to healthcare and empowering users to make effective decisions. However, despite the potential benefits, emerging literature suggests that apprehensions linked to the distinctive features of AI technology and the specific context of use (healthcare) could undermine consumer trust and hinder widespread adoption. Although the role of trust is considered pivotal to the acceptance of healthcare technologies, little research focuses on the contextual factors that drive trust in such AI-based Chatbots for Self-Diagnosis (AICSD). Accordingly, a contextual model based on the trust-in-technology framework was developed to understand the determinants of consumers' trust in AICSD and its behavioral consequences. It was validated using a free simulation experiment study in India (N = 202). Perceived anthropomorphism, perceived information quality, perceived explainability, disposition to trust technology, and perceived service quality influence consumers' trust in AICSD. In turn, trust, privacy risk, health risk, and gender determine the intention to use. The research contributes by developing and validating a context-specific model for explaining trust in AICSD that could aid developers and marketers in enhancing consumers' trust in and adoption of AICSD.
Pages: 43