Plato's Shadows in the Digital Cave: Controlling Cultural Bias in Generative AI

Cited by: 4
Authors
Karpouzis, Kostas [1 ]
Affiliations
[1] Panteion Univ Social & Polit Sci, Dept Commun Media & Culture, Athens 17671, Greece
Keywords
ethics; bias; culture; diversity; fairness; societal impact; generative AI; training data
DOI
10.3390/electronics13081457
Chinese Library Classification (CLC) number
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Generative Artificial Intelligence (AI) systems, like ChatGPT, have the potential to perpetuate and amplify cultural biases embedded in their training data, which are predominantly produced by dominant cultural groups. This paper explores the philosophical and technical challenges of detecting and mitigating cultural bias in generative AI, drawing on Plato's Allegory of the Cave to frame the issue as a problem of limited and distorted representation. We propose a multifaceted approach combining technical interventions, such as data diversification and culturally aware model constraints, with a deeper engagement with the cultural and philosophical dimensions of the problem. Drawing on theories of extended cognition and situated knowledge, we argue that mitigating AI biases requires a reflexive interrogation of the cultural contexts of AI development and a commitment to empowering marginalized voices and perspectives. We claim that controlling cultural bias in generative AI is inseparable from the larger project of promoting equity, diversity, and inclusion in AI development and governance. By bridging philosophical reflection with technical innovation, this paper contributes to the growing discourse on responsible and inclusive AI, offering a roadmap for detecting and mitigating cultural biases while grappling with the profound cultural implications of these powerful technologies.
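To make the abstract's "technical interventions" more concrete, the following is a minimal sketch of one such intervention, data diversification: auditing how cultural groups are represented in a training corpus and deriving simple resampling weights. This is an illustration under stated assumptions, not the paper's method; the `culture_tag` field, the reference shares, and all function names are hypothetical.

```python
# Minimal sketch of a corpus representation audit with inverse-ratio
# reweighting. Assumes each training document carries a hypothetical
# `culture_tag` label; field names and targets are illustrative only.
from collections import Counter

def representation_ratios(docs, reference_shares):
    """Compare each group's share of the corpus against a reference share
    (e.g., a target population share); returns share / reference per group."""
    counts = Counter(doc["culture_tag"] for doc in docs)
    total = sum(counts.values())
    return {
        group: (counts.get(group, 0) / total) / ref
        for group, ref in reference_shares.items()
    }

def resampling_weights(ratios):
    """Inverse-ratio weights: over-represented groups are down-weighted,
    under-represented groups up-weighted (a simple rebalancing scheme)."""
    return {group: 1.0 / max(r, 1e-9) for group, r in ratios.items()}

if __name__ == "__main__":
    corpus = [
        {"text": "...", "culture_tag": "en-US"},
        {"text": "...", "culture_tag": "en-US"},
        {"text": "...", "culture_tag": "el-GR"},
    ]
    reference = {"en-US": 0.5, "el-GR": 0.5}  # illustrative target shares
    ratios = representation_ratios(corpus, reference)
    print(ratios)                      # {'en-US': 1.33..., 'el-GR': 0.67...}
    print(resampling_weights(ratios))  # weights to rebalance sampling
```

In practice such weights would feed a training-data sampler; open-source toolkits such as IBM's AI Fairness 360 provide more principled group-fairness metrics (e.g., disparate impact) along the same lines.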
Pages: 13