Deepfakes and depiction: from evidence to communication

Cited by: 0
Authors
Francesco Pierini
Affiliations
[1] Département d'études cognitives, Institut Jean Nicod, ENS, EHESS, PSL Research University
Source
Synthese, Vol. 201
Keywords
Deepfakes; Depiction; Photographs; Videos; Pictures; Images; Communication
DOI
Not available
Abstract
In this paper, I present an analysis of the depictive properties of deepfakes. These are videos and pictures produced by deep learning algorithms that automatically modify existing videos and photographs or generate new ones. I argue that deepfakes have an intentional standard of correctness. That is, a deepfake depicts its subject only insofar as its creator intends it to. This is due to the way in which these images are produced, which involves a degree of intentional control similar to that involved in the production of other intentional pictures such as drawings and paintings. This aspect distinguishes deepfakes from real videos and photographs, which instead have a non-intentional standard: their correct interpretation corresponds to the scenes that were recorded by the mechanisms that produced them, and not to what their producers intended them to represent. I show that these depictive properties make deepfakes fit for communicating information in the same way as language and other intentional pictures. That is, they do not provide direct access to the communicated information like non-intentional pictures (e.g., videos and photographs) do. Rather, they convey information indirectly, relying only on the viewer's assumptions about the communicative intentions of the creator of the deepfake. Indirect communication is indeed a prominent function of such media in our society, but it is often overlooked in the philosophical literature. This analysis also explains what is epistemically worrying about deepfakes. With the introduction of this technology, viewers interpreting photorealistic videos and pictures can no longer be sure of which standard to select (i.e., intentional or non-intentional), which makes misinterpretation likely (i.e., they can take an image to have a standard of correctness that it does not have).