Deepfake detection with and without content warnings

Cited by: 1
Authors
Lewis, Andrew [1 ]
Vu, Patrick [2 ]
Duch, Raymond M. [1 ]
Chowdhury, Areeq
Affiliations
[1] Univ Oxford, Oxford, England
[2] Brown Univ, Providence, RI USA
Source
ROYAL SOCIETY OPEN SCIENCE | 2023, Vol. 10, Iss. 11
Keywords
deepfake; experiments; manual detection;
DOI
10.1098/rsos.231214
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline codes
07; 0710; 09;
Abstract
The rapid advancement of 'deepfake' video technology, which uses deep-learning artificial intelligence algorithms to create fake videos that look real, has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to, and ability to detect, a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals exposed to a deepfake video with neutral content are no more likely to detect anything out of the ordinary (32.9%) than a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are warned that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video; the remainder erroneously select at least one genuine video as a deepfake.
Pages: 13
References
19 in total
  • [1] Asnani V. 2023. arXiv:2106.07873.
  • [2] Chesney B., Citron D. Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 2019, 107(6): 1753-1819.
  • [3] Clayton K., Blair S., Busam J. A., Forstner S., Glance J., Green G., Kawata A., Kovvuri A., Martin J., Morgan E., Sandhu M., Sang R., Scholz-Bright R., Welch A. T., Wolff A. G., Zhou A., Nyhan B. Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media. Political Behavior, 2020, 42(4): 1073-1095.
  • [4] Coppock A., McClellan O. A. Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents. Research & Politics, 2019, 6(1).
  • [5] Dieckmann N. F., Gregory R., Peters E., Hartman R. Seeing What You Want to See: How Imprecise Uncertainty Ranges Enhance Motivated Reasoning. Risk Analysis, 2017, 37(3): 471-486.
  • [6] Dobber T., Metoui N., Trilling D., Helberger N., de Vreese C. Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? International Journal of Press/Politics, 2021, 26(1): 69-91.
  • [7] Dolhansky B. 2020. arXiv:2006.07397. DOI: 10.48550/arXiv.2006.07397.
  • [8] Ecker U. K. H., O'Reilly Z., Reid J. S., Chang E. P. The effectiveness of short-format refutational fact-checks. British Journal of Psychology, 2020, 111(1): 36-54.
  • [9] Europol. 2020. Malicious Uses and Abuses of Artificial Intelligence.
  • [10] Fallis D. The Epistemic Threat of Deepfakes. Philosophy & Technology, 2021, 34(4): 623-643.