共 61 条
- [21] Ma Y., Hao Y., Chen M., Et al., Audio-visual emotion fusion (avef): a deep efficient weighted approach, Inf Fusion, 46, pp. 184-192, (2019)
- [22] Zadeh A., Chen M., Poria S., Tensor fusion network for multimodal sentiment analysis, In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing., pp. 1103-1114, (2017)
- [23] Liu Z., Shen Y., Lakshminarasimhan V.B., Efficient Low-Rank Multimodal Fusion with Modality-Specific Factors, (2018)
- [24] Jin T., Huang S., Li Y., Dual low-rank multimodal fusion, . In: Findings of the Association for Computational Linguistics: EMNLP 2020., pp. 377-387, (2020)
- [25] Fu Z., Liu F., Xu Q., Et al., Nhfnet: a non-homogeneous fusion network for multimodal sentiment analysis, 2022 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, (2022)
- [26] Kim K., Park S., Aobert: all-modalities-in-one bert for multimodal sentiment analysis, Inf Fusion, 92, pp. 37-45, (2023)
- [27] Zadeh A., Liang P.P., Poria S., Et al., Multi-attention recurrent network for human communication comprehension, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, (2018)
- [28] Zadeh A., Liang P.P., Mazumder N., Memory fusion network for multi-view sequential learning., (2018)
- [29] Bagher Zadeh A., Liang P.P., Poria S., Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph, In: Proceedings of the 56Th Annual Meeting of the Association for Computational Linguistics, 1, pp. 2236-2246, (2018)
- [30] Wang Y., Shen Y., Liu Z., Et al., Words can shift: Dynamically adjusting word representations using nonverbal behaviors, Proceedings of the AAAI Conference on Artificial Intelligence, 33, 1, pp. 7216-7223, (2019)