The evolution of political memes: Detecting and characterizing internet memes with multi-modal deep learning

Cited by: 57
Authors
Beskow, David M. [1 ]
Kumar, Sumeet [1 ]
Carley, Kathleen M. [1 ]
Affiliations
[1] Carnegie Mellon Univ, Sch Comp Sci, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
Keywords
Deep learning; Multi-modal learning; Computer vision; Meme detection; Meme
DOI
10.1016/j.ipm.2019.102170
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Combining humor with cultural relevance, Internet memes have become a ubiquitous artifact of the digital age. As Richard Dawkins described in his book The Selfish Gene, memes behave like cultural genes as they propagate and evolve through a complex process of 'mutation' and 'inheritance'. On the Internet, these memes activate inherent biases in a culture or society, sometimes replacing logical approaches to persuasive argument. Despite their considerable success on the Internet, their detection and evolution remain understudied. In this research, we propose and evaluate Meme-Hunter, a multi-modal deep learning model that classifies images on the Internet as memes vs. non-memes, and compare it to uni-modal approaches. We then use image similarity, meme-specific optical character recognition, and face detection to find and study families of memes shared on Twitter during the 2018 US mid-term elections. By mapping meme mutation in an electoral process, this study confirms Richard Dawkins' concept of meme evolution.
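
The abstract outlines a multi-modal pipeline in which image content and OCR-extracted overlay text are combined to decide whether an image is a meme. The following is a minimal, hypothetical late-fusion classifier sketch in PyTorch; it is not the authors' Meme-Hunter implementation, and the backbone choice (ResNet-18), vocabulary size, layer dimensions, and OCR tokenization are assumptions made purely for illustration.

# A minimal sketch of late-fusion multi-modal classification, in the spirit of
# (but NOT a reproduction of) the Meme-Hunter model described above. All layer
# sizes, the vocabulary size, and the OCR tokenization are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class MemeClassifierSketch(nn.Module):
    """Fuses image features with features from OCR-extracted overlay text."""

    def __init__(self, vocab_size=10000, text_dim=128, fused_dim=256):
        super().__init__()
        # Image branch: ResNet-18 backbone with the final fc layer removed,
        # leaving a 512-dimensional feature vector per image.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.image_encoder = backbone
        # Text branch: mean-pooled embedding of OCR token ids (a stand-in for
        # whatever text encoder the original model uses).
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim, mode="mean")
        # Late fusion: concatenate both modalities, then classify meme vs. non-meme.
        self.head = nn.Sequential(
            nn.Linear(512 + text_dim, fused_dim),
            nn.ReLU(),
            nn.Linear(fused_dim, 2),
        )

    def forward(self, images, token_ids, offsets):
        img_feat = self.image_encoder(images)              # (batch, 512)
        txt_feat = self.text_encoder(token_ids, offsets)   # (batch, text_dim)
        return self.head(torch.cat([img_feat, txt_feat], dim=1))


if __name__ == "__main__":
    model = MemeClassifierSketch()
    images = torch.randn(2, 3, 224, 224)         # two RGB images
    token_ids = torch.tensor([4, 81, 7, 12, 9])  # flattened OCR token ids for both samples
    offsets = torch.tensor([0, 3])               # sample 0 -> ids[0:3], sample 1 -> ids[3:]
    logits = model(images, token_ids, offsets)   # shape (2, 2): meme vs. non-meme logits
    print(logits.shape)

The late-fusion design (concatenating per-modality features before a shared classification head) is one common way to combine image and text signals; the paper's own architecture and feature extractors may differ.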
Pages: 13
Related papers
50 records in total
  • [21] Wang, Wei; Yang, Xiaoyan; Ooi, Beng Chin; Zhang, Dongxiang; Zhuang, Yueting. Effective deep learning-based multi-modal retrieval. The VLDB Journal, 2016, 25: 79-101.
  • [22] Kanyal, Ayush; Kandula, Srinivas; Calhoun, Vince; Ye, Dong Hye. Multi-modal deep learning on imaging genetics for schizophrenia classification. 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 2023.
  • [23] Xi, Pengcheng; Goubran, Rafik; Shu, Chang. A unified deep learning framework for multi-modal multi-dimensional data. 2019 IEEE International Symposium on Medical Measurements and Applications (MeMeA), 2019.
  • [24] Voss, Florian; Brechmann, Noah; Lyra, Simon; Rixen, Joeran; Leonhardt, Steffen; Antink, Christoph Hoog. Multi-modal body part segmentation of infants using deep learning. BioMedical Engineering OnLine, 2023, 22 (01).
  • [25] Chen, Yu; Chen, Jiawei; Wei, Dong; Li, Yuexiang; Zheng, Yefeng. OctopusNet: A deep learning segmentation network for multi-modal medical images. Multiscale Multimodal Medical Imaging (MMMI 2019), 2020, 11977: 17-25.
  • [26] Li, Lei; Li, Haitao; Ishdorj, Tseren-Onolt; Zheng, Chunhou; Su, Yansen. MDNNSyn: A multi-modal deep learning framework for drug synergy prediction. IEEE Journal of Biomedical and Health Informatics, 2024, 28 (10): 6225-6236.
  • [27] Sharma, Pulkit; Manandhar, Achut; Thomson, Patrick; Katuva, Jacob; Hope, Robert; Clifton, David A. Combining multi-modal statistics for welfare prediction using deep learning. Sustainability, 2019, 11 (22).
  • [28] Raya, Sura; Orabi, Mariam; Afyouni, Imad; Al Aghbari, Zaher. Multi-modal data clustering using deep learning: A systematic review. Neurocomputing, 2024, 607.
  • [29] Du, Bowen; Wu, Liyu; Sun, Leilei; Xu, Fei; Li, Linchao. Heterogeneous structural responses recovery based on multi-modal deep learning. Structural Health Monitoring-An International Journal, 2023, 22 (02): 799-813.
  • [30] Tang, Qin; Liang, Jing; Zhu, Fangqi. A comparative review on multi-modal sensors fusion based on deep learning. Signal Processing, 2023, 213.