Gender Stereotypes toward Non-gendered Generative AI: The Role of Gendered Expertise and Gendered Linguistic Cues

Cited by: 0
Authors
Duan, Wen [1 ]
Mcneese, Nathan [1 ]
Li, Lingyuan [2 ]
Affiliations
[1] Clemson University, Clemson, SC
[2] The University of Texas at Austin, Austin, TX
Keywords
anthropomorphism; chatbot; gender stereotypes; gendered agent; gendered AI; generative AI
DOI
10.1145/3701197
Abstract
With rapid advancements in large language models (LLMs), generative AI (GenAI) is transforming people's lives and work across various domains. Unlike previous AI technologies, which were often feminized, most of these GenAI tools are non-gendered, potentially preventing users from applying gender stereotypes to them. However, GenAI's use of natural language can evoke social perceptions, including gender attribution, making it susceptible to gender associations. Using two online experiments, we explored how GenAI's removal of gender could mitigate individuals' gender stereotypes toward it, and how certain linguistic cues could trigger gender stereotypes even when the AI is non-gendered. We found that the removal of AI's gender did mitigate gender stereotypes toward it, but only to an extent. Additionally, gendered linguistic cues such as politeness, apologies, and tentative language (or the lack thereof) could trigger gender stereotypes toward non-gendered AI. We contribute to HCI and CSCW research by providing a timely investigation into individuals' gender stereotypes toward GenAI agents, and one of the first examinations of the profound impact of language style on users' perceptions and attributions of social characteristics to GenAI. Our findings shed light on the purposeful and responsible design of GenAI that prioritizes promoting gender equality, thereby ensuring that technological advancements align with evolving social values. © 2025 Owner/Author.