A Discriminant Information Theoretic Learning Framework for Multi-modal Feature Representation

Cited by: 2
Authors
Gao, Lei [1]
Guan, Ling [1]
Affiliations
[1] Ryerson Univ, 350 Victoria St, Toronto, ON M5B 2K3, Canada
Keywords
Discriminative representation; complementary representation; information theoretic learning; multi-modal feature representation; image recognition; audio-visual emotion recognition; canonical correlation analysis; graph convolutional networks; feature fusion; level fusion; emotion; recognition; model; sparse; autoencoders; algorithms
DOI
10.1145/3587253
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
As sensory and computing technology advances, multi-modal features have come to play a central role in ubiquitously representing patterns and phenomena for effective information analysis and recognition. As a result, multi-modal feature representation is becoming an increasingly important direction in both academic research and real-world applications. Nevertheless, numerous challenges remain, especially in the joint utilization of the discriminative and complementary representations carried by multi-modal features. In this article, a discriminant information theoretic learning (DITL) framework is proposed to address these challenges. By employing the proposed framework, the discriminative and complementary information within the given multi-modal features is exploited jointly, resulting in a high-quality feature representation. Based on the characteristics of the DITL framework, the newly generated feature representation is further optimized, leading to lower computational complexity and improved system performance. To demonstrate the effectiveness and generality of DITL, we conducted experiments on several recognition tasks, including static cases, such as handwritten digit recognition, face recognition, and object recognition, and dynamic cases, such as video-based human emotion recognition and action recognition. The results show that the proposed framework outperforms state-of-the-art algorithms.
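The abstract names two ingredients, discriminative and complementary representation, without reproducing the DITL objective itself. As a rough, hedged illustration only, the sketch below couples two modalities through a class-aware cross-covariance (discriminative: only same-class pairs contribute) and CCA-style whitening (complementary: cross-modal correlation), in the spirit of discriminative canonical correlation analysis rather than the paper's exact formulation. The toy data and the fuse helper are hypothetical.

```python
import numpy as np

# --- Hypothetical toy data: two modalities describing the same samples ---
rng = np.random.default_rng(0)
n_per_class, n_classes, d1, d2, d_latent = 40, 3, 20, 15, 5
y = np.repeat(np.arange(n_classes), n_per_class)
latent = rng.normal(size=(n_classes, d_latent))[y]          # shared class signal
X1 = latent @ rng.normal(size=(d_latent, d1)) + 0.5 * rng.normal(size=(y.size, d1))
X2 = latent @ rng.normal(size=(d_latent, d2)) + 0.5 * rng.normal(size=(y.size, d2))

def fuse(X1, X2, y, k=4, reg=1e-3):
    """Project both modalities so that same-class samples are maximally
    cross-correlated: a discriminative + complementary coupling."""
    X1 = X1 - X1.mean(0)
    X2 = X2 - X2.mean(0)
    # Class-coupled cross-covariance: only same-class pairs contribute,
    # which injects label (discriminative) information into the coupling.
    C12 = sum(X1[y == c].sum(0)[:, None] @ X2[y == c].sum(0)[None, :]
              for c in np.unique(y)) / y.size
    # Regularized whitening of each modality, then top-k singular pairs
    # of the whitened cross-covariance (CCA-style complementary term).
    S1 = X1.T @ X1 / y.size + reg * np.eye(X1.shape[1])
    S2 = X2.T @ X2 / y.size + reg * np.eye(X2.shape[1])
    iL1 = np.linalg.inv(np.linalg.cholesky(S1))
    iL2 = np.linalg.inv(np.linalg.cholesky(S2))
    U, s, Vt = np.linalg.svd(iL1 @ C12 @ iL2.T)
    W1 = iL1.T @ U[:, :k]        # projections satisfy W1.T @ S1 @ W1 = I
    W2 = iL2.T @ Vt[:k].T
    return np.hstack([X1 @ W1, X2 @ W2])   # serial fusion of the two views

Z = fuse(X1, X2, y)
print(Z.shape)  # (120, 8): fused discriminative-complementary features
```

Concatenating the projected views (serial fusion) is only one possible readout; the paper's framework instead derives its representation from an information theoretic criterion and further optimizes it for complexity and performance, which this sketch does not attempt.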
Pages: 24