Two-Level Multimodal Fusion for Sentiment Analysis in Public Security

Cited by: 8
Authors
Sun, Jianguo [1 ]
Yin, Hanqi [1 ]
Tian, Ye [1 ]
Wu, Junpeng [1 ]
Shen, Linshan [1 ]
Chen, Lei [2 ]
Affiliations
[1] Harbin Engn Univ, Coll Comp Sci & Technol, Harbin 150001, Heilongjiang, Peoples R China
[2] Georgia Southern Univ, Coll Engn & Comp, Statesboro, GA 30458 USA
Keywords
CLASSIFICATION; NETWORK;
DOI
10.1155/2021/6662337
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Large amounts of data are stored in cyberspace. Beyond bringing convenience to people's lives and work, these data can also support work in the information security field, such as microexpression recognition and sentiment analysis in criminal investigation. It is therefore of great significance to recognize and analyze sentiment information, which is usually described by multiple modalities. Because data from different modalities are correlated, multimodal data can provide more comprehensive and robust information than unimodal data in analysis tasks. Complementary information across modalities can be obtained through multimodal fusion methods, which process multimodal data with fusion algorithms and preserve the accuracy of the information used for subsequent classification or prediction tasks. In this study, a two-level multimodal fusion (T1MF) method that combines data-level and decision-level fusion is proposed for the sentiment analysis task. In the data-level fusion stage, a tensor fusion network is used to obtain text-audio and text-video embeddings by fusing the text features with the audio and video features, respectively. In the decision-level fusion stage, a soft fusion method combines the classification or prediction results of the upstream classifiers so that the final results are as accurate as possible. The proposed method is evaluated on the CMU-MOSI, CMU-MOSEI, and IEMOCAP datasets, and the empirical results and ablation studies confirm the effectiveness of T1MF in capturing useful information from all the tested modalities.
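The two fusion stages described in the abstract can be sketched roughly as follows. The snippet below is a minimal illustrative sketch, assuming PyTorch: a TFN-style outer-product fusion for the data level and a weighted average of the two branches' class probabilities for the decision-level soft fusion. All dimensions, fusion weights, and names (TensorFusion, ta_branch, tv_branch) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of two-level fusion (data-level tensor fusion +
# decision-level soft fusion); sizes and weights are assumed, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TensorFusion(nn.Module):
    """Data-level fusion: outer product of two modality embeddings (TFN-style)."""
    def __init__(self, dim_a, dim_b, hidden, num_classes):
        super().__init__()
        # +1 accounts for the constant appended to each embedding, which keeps
        # the unimodal terms alongside the bimodal interactions.
        self.classifier = nn.Sequential(
            nn.Linear((dim_a + 1) * (dim_b + 1), hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, a, b):
        ones = torch.ones(a.size(0), 1, device=a.device)
        a1 = torch.cat([a, ones], dim=1)                      # (batch, dim_a + 1)
        b1 = torch.cat([b, ones], dim=1)                      # (batch, dim_b + 1)
        fused = torch.bmm(a1.unsqueeze(2), b1.unsqueeze(1))   # outer product
        return self.classifier(fused.flatten(1))              # class logits

# Two data-level branches: text-audio and text-video (feature sizes assumed).
ta_branch = TensorFusion(dim_a=128, dim_b=64, hidden=256, num_classes=2)
tv_branch = TensorFusion(dim_a=128, dim_b=32, hidden=256, num_classes=2)

text  = torch.randn(8, 128)   # placeholder text features
audio = torch.randn(8, 64)    # placeholder audio features
video = torch.randn(8, 32)    # placeholder video features

# Decision-level soft fusion: average the branch probabilities instead of
# taking a hard vote (the equal weights here are an assumption).
p_ta = F.softmax(ta_branch(text, audio), dim=1)
p_tv = F.softmax(tv_branch(text, video), dim=1)
final_probs = 0.5 * p_ta + 0.5 * p_tv
prediction = final_probs.argmax(dim=1)
```

Appending a constant 1 to each embedding before the outer product is the usual tensor-fusion device for retaining the unimodal terms in addition to the cross-modal interactions; the soft decision fusion then lets each branch contribute its full probability distribution rather than a single hard label.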
Pages: 10
Related papers
37 items in total
  • [1] Implementation of hybrid image fusion technique for feature enhancement in medical diagnosis
    Agarwal, Jyoti
    Bedi, Sarabjeet Singh
    [J]. HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES, 2015, 5 : 1 - 17
  • [2] A decision support system based on multisensor data fusion for sustainable greenhouse management
    Aiello, Giuseppe
    Giovino, Irene
    Vallone, Mariangela
    Catania, Pietro
    Argento, Antonella
    [J]. JOURNAL OF CLEANER PRODUCTION, 2018, 172 : 4057 - 4065
  • [3] Multimodal Machine Learning: A Survey and Taxonomy
    Baltrusaitis, Tadas
    Ahuja, Chaitanya
    Morency, Louis-Philippe
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2019, 41 (02) : 423 - 443
  • [4] Benediktsson J.A., 2014, DECISION FUSION CLAS
  • [5] Classification of multisource and hyperspectral data based on decision fusion
    Benediktsson, JA
    Kanellopoulos, I
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 1999, 37 (03) : 1367 - 1377
  • [6] IEMOCAP: interactive emotional dyadic motion capture database
    Busso, Carlos
    Bulut, Murtaza
    Lee, Chi-Chun
    Kazemzadeh, Abe
    Mower, Emily
    Kim, Samuel
    Chang, Jeannette N.
    Lee, Sungbok
    Narayanan, Shrikanth S.
    [J]. LANGUAGE RESOURCES AND EVALUATION, 2008, 42 (04) : 335 - 359
  • [7] A Review and Meta-Analysis of Multimodal Affect Detection Systems
    D'Mello, Sidney K.
    Kory, Jacqueline
    [J]. ACM COMPUTING SURVEYS, 2015, 47 (03)
  • [8] Degottex G, 2014, INT CONF ACOUST SPEE, DOI 10.1109/ICASSP.2014.6853739
  • [9] Doshi P., P 13 INT C MULT INT, P169
  • [10] Multimodal multitask deep learning model for Alzheimer's disease progression detection based on time series data
    El-Sappagh, Shaker
    Abuhmed, Tamer
    Islam, S. M. Riazul
    Kwak, Kyung Sup
    [J]. NEUROCOMPUTING, 2020, 412 : 197 - 215