The evolution of political memes: Detecting and characterizing internet memes with multi-modal deep learning

Cited by: 57
Authors
Beskow, David M. [1 ]
Kumar, Sumeet [1 ]
Carley, Kathleen M. [1 ]
Institution
[1] Carnegie Mellon Univ, Sch Comp Sci, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
Keywords
Deep learning; Multi-modal learning; Computer vision; Meme detection; Meme
DOI
10.1016/j.ipm.2019.102170
CLC Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Combining humor with cultural relevance, Internet memes have become a ubiquitous artifact of the digital age. As Richard Dawkins described in his book The Selfish Gene, memes behave like cultural genes as they propagate and evolve through a complex process of 'mutation' and 'inheritance'. On the Internet, these memes activate inherent biases in a culture or society, sometimes replacing logical approaches to persuasive argument. Despite their fair share of success on the Internet, their detection and evolution have remained understudied. In this research, we propose and evaluate Meme-Hunter, a multi-modal deep learning model to classify images on the Internet as memes versus non-memes, and compare this to uni-modal approaches. We then use image similarity, meme-specific optical character recognition, and face detection to find and study families of memes shared on Twitter during the 2018 US midterm elections. By mapping meme mutation in an electoral process, this study confirms Richard Dawkins' concept of meme evolution.
Pages: 13
Related Papers
50 records in total
  • [41] Multi-modal learning and its application for biomedical data
    Liu, Jin
    Zhang, Yu-Dong
    Cai, Hongming
    FRONTIERS IN MEDICINE, 2024, 10
  • [42] Multi-modal deep learning networks for RGB-D pavement waste detection and recognition
    Li, Yangke
    Zhang, Xinman
    WASTE MANAGEMENT, 2024, 177 : 125 - 134
  • [43] Multi-modal Network Representation Learning
    Zhang, Chuxu
    Jiang, Meng
    Zhang, Xiangliang
    Ye, Yanfang
    Chawla, Nitesh V.
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 3557 - 3558
  • [44] Detecting and Grounding Multi-Modal Media Manipulation and Beyond
    Shao, Rui
    Wu, Tianxing
    Wu, Jianlong
    Nie, Liqiang
    Liu, Ziwei
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08) : 5556 - 5574
  • [45] Detecting Functional Objects using Multi-Modal Data
    Ellis, Seth T.
    Harrison, Andre V.
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [46] Deep learning supported breast cancer classification with multi-modal image fusion
    Hamdy, Eman
    Zaghloul, Mohamed Saad
    Badawy, Osama
    2021 22ND INTERNATIONAL ARAB CONFERENCE ON INFORMATION TECHNOLOGY (ACIT), 2021, : 319 - 325
  • [47] Deep Learning Based Multi-Modal Fusion Architectures for Maritime Vessel Detection
    Farahnakian, Fahimeh
    Heikkonen, Jukka
    REMOTE SENSING, 2020, 12 (16)
  • [48] Multi-Modal Physiological Data Fusion for Affect Estimation Using Deep Learning
    Hssayeni, Murtadha D.
    Ghoraani, Behnaz
    IEEE ACCESS, 2021, 9 : 21642 - 21652
  • [49] Multi-modal deep learning for joint prediction of otitis media and diagnostic difficulty
    Sundgaard, Josefine Vilsboll
    Hannemose, Morten Rieger
    Laugesen, Soren
    Bray, Peter
    Harte, James
    Kamide, Yosuke
    Tanaka, Chiemi
    Paulsen, Rasmus R.
    Christensen, Anders Nymark
    LARYNGOSCOPE INVESTIGATIVE OTOLARYNGOLOGY, 2024, 9 (01)
  • [50] Multi-modal deep feature learning for RGB-D object detection
    Xu, Xiangyang
    Li, Yuncheng
    Wu, Gangshan
    Luo, Jiebo
    PATTERN RECOGNITION, 2017, 72 : 300 - 313