Poetry Will Not Optimize; or, What Is Literature to AI?

Cited by: 5
Authors
Elam, Michele [1, 2, 3, 4]
Affiliations
[1] Stanford Univ, Humanities English Dept, Stanford, CA 94305 USA
[2] Stanford Univ, Inst Human Ctr AI, Stanford, CA 94305 USA
[3] Stanford Univ, African & African Amer Studies, Stanford, CA USA
[4] Stanford Univ, Modern Thought & Literature Grad Program, Stanford, CA USA
Keywords
artificial intelligence; literature; art; race; GPT-3;
DOI
10.1215/00029831-10575077
Chinese Library Classification (CLC)
I3/7 [Literatures of various countries];
Abstract
Literature, poetry, and other forms of noncommercial creative expression challenge the techno-instrumentalist approaches to language, that is, the predictive language generation informing large natural language processing (NLP) models such as GPT-3 or -4 as well as, more generally, generative AI (text to image, video, audio). Claims that AI systems automate and expedite creativity reflect the industry and research priorities of speed, scale, optimization, and frictionlessness driving much artificial intelligence design and application. But poetry will not optimize; the creative process cannot be reduced to a prompt. Some have noted that literary creations generated or augmented by artificial intelligence can at best offer form without meaning; using a GPT creation prompted by Maya Angelou's poem "Still I Rise" as a case study, this essay argues that NLP's predictive language generation, and what I call algorithmic ahistoricity, can also, more disturbingly, render meaning senseless. In this sense, GPT-3's literary experiments are "failed" not because they do not meet some moving target of a literary standard, nor because of technological insufficiency, but because they can make it harder for people to name and navigate their realities. The coda explores an example of AI as literary interlocutor and creative engagement beyond optimization.
Pages: 281-303
Page count: 23