Enhancing Cross-Language Multimodal Emotion Recognition With Dual Attention Transformers
Cited by: 0
Authors:
Zaidi, Syed Aun Muhammad [1]
Latif, Siddique [2]
Qadir, Junaid [3]
Affiliations:
[1] Informat Technol Univ ITU, Lahore 54700, Pakistan
[2] Queensland Univ Technol QUT, Brisbane, Qld 4000, Australia
[3] Qatar Univ, Coll Engn, Comp Sci & Engn Dept, Doha, Qatar
Source:
IEEE OPEN JOURNAL OF THE COMPUTER SOCIETY | 2024 | Vol. 5
Keywords:
Co-attention networks;
graph attention networks;
multi-modal learning;
multimodal emotion recognition;
SPEECH;
DOI: 10.1109/OJCS.2024.3486904
CLC Classification:
TP3 [computing technology, computer technology];
Discipline Code:
0812;
Abstract:
Despite recent progress in emotion recognition, state-of-the-art systems are unable to achieve improved performance in cross-language settings. In this article, we propose a Multimodal Dual Attention Transformer (MDAT) model to improve cross-language multimodal emotion recognition. Our model utilises pre-trained models for multimodal feature extraction and is equipped with a dual attention mechanism, comprising graph attention and co-attention, to capture the complex dependencies across different modalities and languages. In addition, our model exploits a transformer encoder layer for high-level feature representation to improve emotion classification accuracy. This novel construct preserves modality-specific emotional information while enhancing cross-modality and cross-language feature generalisation, resulting in improved performance with minimal target-language data. We assess our model's performance on four publicly available emotion recognition datasets and establish its superior effectiveness compared to recent approaches and baseline models.
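The abstract outlines the MDAT pipeline at a high level: pre-trained extractors produce per-modality features, a dual attention stage (graph attention plus cross-modal co-attention) fuses them, and a transformer encoder feeds the emotion classifier. The PyTorch sketch below illustrates that flow only; every layer size, the single-head GAT-style attention, the fusion order, and the class names (GraphAttention, CoAttention, MDATSketch) are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the dual-attention idea from the abstract, assuming
# two modalities (audio, text) already encoded by pre-trained extractors.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttention(nn.Module):
    """Single-head GAT-style attention over a fully connected token graph."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, x):                       # x: (batch, nodes, dim)
        h = self.proj(x)
        n = h.size(1)
        # Pairwise attention logits between every pair of nodes.
        hi = h.unsqueeze(2).expand(-1, -1, n, -1)
        hj = h.unsqueeze(1).expand(-1, n, -1, -1)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        alpha = torch.softmax(e, dim=-1)        # (batch, nodes, nodes)
        return torch.bmm(alpha, h)              # attention-weighted nodes


class CoAttention(nn.Module):
    """Cross-modal co-attention: each modality attends to the other."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, a, b):
        a_new, _ = self.a2b(a, b, b)            # audio attends to text
        b_new, _ = self.b2a(b, a, a)            # text attends to audio
        return a_new, b_new


class MDATSketch(nn.Module):
    """Dual attention (co-attention + graph attention), then a transformer
    encoder for high-level representation and an emotion classifier head."""
    def __init__(self, dim=256, n_classes=4):
        super().__init__()
        self.coattn = CoAttention(dim)
        self.gat = GraphAttention(dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, audio_feats, text_feats):
        # audio_feats/text_feats: (batch, seq, dim) from pre-trained
        # extractors (e.g., a wav2vec-style and a BERT-style encoder).
        a, t = self.coattn(audio_feats, text_feats)
        fused = self.gat(torch.cat([a, t], dim=1))
        enc = self.encoder(fused)
        return self.head(enc.mean(dim=1))       # pooled emotion logits


if __name__ == "__main__":
    model = MDATSketch()
    logits = model(torch.randn(2, 10, 256), torch.randn(2, 12, 256))
    print(logits.shape)                         # torch.Size([2, 4])
```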
Pages: 684-693
Page count: 10