Me vs. the machine? Subjective evaluations of human- and AI-generated advice

Cited by: 3
Authors
Osborne, Merrick R. [1]
Bailey, Erica R. [1]
Affiliations
[1] Univ Calif Berkeley, Haas Sch Business, Berkeley, CA 94720 USA
Keywords
ALGORITHMS
DOI
10.1038/s41598-025-86623-6
Chinese Library Classification (CLC)
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biological sciences]; N [General natural sciences]
Subject classification codes
07; 0710; 09
Abstract
Artificial intelligence ("AI") has the potential to vastly improve human decision-making. In line with this, researchers have increasingly sought to understand how people view AI, often documenting skepticism and even outright aversion to these tools. In the present research, we complement these findings by documenting the performance of LLMs in the personal advice domain. In addition, we shift the focus in a new direction: exploring how interacting with AI tools, specifically large language models, impacts the user's view of themselves. In five preregistered experiments (N = 1,722), we explore evaluations of human- and ChatGPT-generated advice along three dimensions: quality, effectiveness, and authenticity. We find that ChatGPT produces superior advice relative to the average online participant even in a domain in which people strongly prefer human-generated advice (dating and relationships). We also document a bias against ChatGPT-generated advice which is present only when participants are aware the advice was generated by ChatGPT. Novel to the present investigation, we then explore how interacting with these tools impacts self-evaluations. We manipulate the order in which people interact with these tools relative to self-generation and find that generating advice before interacting with ChatGPT advice boosts the quality ratings of the ChatGPT advice. At the same time, interacting with ChatGPT-generated advice before self-generating advice decreases self-ratings of authenticity. Taken together, we document a bias against AI in the context of personal advice. Further, we identify an important externality in the use of these tools: they can invoke social comparisons of me vs. the machine.
Pages: 10