Multimodal Machine Learning: A Survey and Taxonomy

Cited by: 2046
Authors
Baltrusaitis, Tadas [1 ]
Ahuja, Chaitanya [2 ]
Morency, Louis-Philippe [2 ]
Affiliations
[1] Microsoft Corp, Cambridge CB1 2FB, England
[2] Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213 USA
Funding
US National Science Foundation
Keywords
Multimodal; machine learning; introductory; survey; EMOTION RECOGNITION; NEURAL-NETWORKS; SPEECH; TEXT; FUSION; VIDEO; LANGUAGE; MODELS; GENERATION; ALIGNMENT;
DOI
10.1109/TPAMI.2018.2798607
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced, and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.
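The abstract contrasts the survey's five-challenge taxonomy with the typical early/late fusion categorization. As a minimal illustrative sketch (not code from the paper), the distinction can be shown with toy features and a stand-in classifier; the feature values and the `toy` scoring function below are invented for illustration only.

```python
def early_fusion(visual_feats, audio_feats, classify):
    """Early fusion: concatenate modality features, then classify once."""
    joint = visual_feats + audio_feats  # feature-level concatenation
    return classify(joint)

def late_fusion(visual_feats, audio_feats, classify, weights=(0.5, 0.5)):
    """Late fusion: classify each modality separately, then combine scores."""
    v_score = classify(visual_feats)
    a_score = classify(audio_feats)
    return weights[0] * v_score + weights[1] * a_score

# Toy classifier: mean feature activation as a stand-in for a trained model.
def toy(feats):
    return sum(feats) / len(feats)

v = [0.9, 0.8, 0.7]  # hypothetical visual features
a = [0.1, 0.2]       # hypothetical audio features
print(early_fusion(v, a, toy))  # one decision over the joint feature vector
print(late_fusion(v, a, toy))   # weighted average of per-modality decisions
```

Note that the two strategies generally disagree: early fusion lets every feature influence a single decision (and can model cross-modal interactions), while late fusion weights each modality's independent decision, which is more robust when one modality is missing or noisy.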
Pages: 423-443 (21 pages)