Negotiating the authenticity of AI: how the discourse on AI rejects human indeterminacy

Cited by: 4
Authors
Beerends, Siri [1 ]
Aydin, Ciano [1 ]
Affiliations
[1] Univ Twente, Fac Behav Management & Social Sci, Philosophy Dept, Enschede, Netherlands
Keywords
Authenticity; Negotiation; Artificial intelligence; Human intelligence; Human indeterminacy; ARTIFICIAL-INTELLIGENCE; ALGORITHMS;
DOI
10.1007/s00146-024-01884-5
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we demonstrate how the language and reasoning that academics, developers, consumers, marketers, and journalists deploy to accept or reject AI as authentic intelligence have a far-reaching bearing on how we understand our human intelligence and condition. The discourse on AI is part of what we call the "authenticity negotiation process" through which AI's "intelligence" is given a particular meaning and value. This has implications for scientific theory, research directions, ethical guidelines, design principles, funding, media attention, and the way people relate to and act upon AI. It also has a great impact on humanity's self-image and the way we negotiate what it means to be human, existentially, culturally, politically, and legally. We use a discourse analysis of academic papers, AI education programs, and online discussions to demonstrate how AI itself, as well as the products, services, and decisions delivered by AI systems, is negotiated as authentic or inauthentic intelligence. In this negotiation process, AI stakeholders indirectly define and essentialize what being human(like) means. The main argument we develop is that this process of indirectly defining and essentializing humans results in an elimination of the space for humans to be indeterminate. By eliminating this space and, hence, denying indeterminacy, the existential condition of the human being is jeopardized. Rather than re-creating humanity in AI, the AI discourse is re-defining what it means to be human and how humanity is valued and should be treated.
Pages: 263 - 276
Page count: 14
Related papers
50 items in total
  • [41] Hybrid collective intelligence in a human–AI society
    Marieke M. M. Peeters
    Jurriaan van Diggelen
    Karel van den Bosch
    Adelbert Bronkhorst
    Mark A. Neerincx
    Jan Maarten Schraagen
    Stephan Raaijmakers
    AI & SOCIETY, 2021, 36 : 217 - 238
  • [42] Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare
    Kerstan, Sophie
    Bienefeld, Nadine
    Grote, Gudela
    RISK ANALYSIS, 2024, 44 (04) : 939 - 957
  • [43] How will Generative AI impact communication?
    Gans, Joshua S.
    ECONOMICS LETTERS, 2024, 242
  • [44] How AI Systems Can Be Blameworthy
    Altehenger, Hannah
    Menges, Leonhard
    Schulte, Peter
    PHILOSOPHIA, 2024, 52 (04) : 1083 - 1106
  • [45] How AI can be a force for 'not bad'
    Smith N.
    Engineering and Technology, 2022, 17 (08) : 95 - 99
  • [46] Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque
    Uwe Peters
    AI and Ethics, 2023, 3 (3): 963 - 974
  • [47] HOW TO TEACH AI AT BSC LEVEL?
    Bako, M.
    Aszalos, L.
    12TH INTERNATIONAL CONFERENCE OF EDUCATION, RESEARCH AND INNOVATION (ICERI 2019), 2019, : 5384 - 5390
  • [48] How AI can be a force for good
    Taddeo, Mariarosaria
    Floridi, Luciano
    SCIENCE, 2018, 361 (6404) : 751 - 752
  • [49] And say the AI responded? Dancing around 'autonomy' in AI/human encounters
    Dahlin, Emma
    SOCIAL STUDIES OF SCIENCE, 2024, 54 (01) : 59 - 77
  • [50] How are communication companies adopting AI
    Tejedor Calvo, Santiago
    Sauri, Stephanie Vick
    Cervi, Laura
    COMUNICACION Y SOCIEDAD-GUADALAJARA, 2025, 22 : 1 - 25