Multiview Representation Learning via Information-Theoretic Optimization

Cited by: 0
Authors
Yan, Weiqing [1 ]
Yao, Shuochen [1 ]
Tang, Chang [2 ]
Zhou, Wujie [3 ]
Affiliations
[1] Yantai Univ, Sch Comp & Control Engn, Yantai 261400, Peoples R China
[2] China Univ Geosci, Sch Comp Sci, Wuhan 430074, Peoples R China
[3] Zhejiang Univ Sci & Technol, Sch Informat & Elect Engn, Hangzhou 310023, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Mutual information; Encoding; Representation learning; Interviews; Correlation; Feature extraction; Data models; Data mining; Training; Robustness; Coding mutual information; coding rate reduction; multiview representation learning (MVRL);
DOI
10.1109/TNNLS.2025.3546660
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multiview data, characterized by rich features, are crucial in many machine learning applications. However, effectively extracting intra-view features and integrating inter-view information present significant challenges in multiview learning (MVL). Traditional deep network-based approaches often learn multiple layers to derive latent representations. In these methods, the features of different classes are typically embedded implicitly rather than organized systematically; this lack of structure makes it difficult to map classes explicitly to independent principal subspaces in the feature space, potentially causing class overlap and confusion. Consequently, it remains uncertain whether such representations accurately capture the intrinsic structure of the data. In this article, we introduce an innovative multiview representation learning (MVRL) method that maximizes two information-theoretic metrics: intra-view coding rate reduction and inter-view mutual information. Specifically, for intra-view representation learning, we optimize feature representations by maximizing the coding rate difference between the entire dataset and the individual classes. This expands the overall feature representation space while compressing the representations within each class, yielding more compact feature representations within each view. We then align and fuse these view-specific features through space transformation and cross-sample fusion to obtain a consistent representation across multiple views. Finally, we maximize information transmission to maintain consistency and correlation among data representations across views. By maximizing the mutual information between the consensus representation and the view-specific representations, our method ensures that the learned representations capture more concise intrinsic features and cross-view correlations, thereby enhancing the performance and generalization ability of MVL.
Experiments show that the proposed method achieves excellent performance.
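The intra-view objective described above can be sketched numerically. The abstract does not give the paper's exact formulation, so the snippet below assumes the standard rate reduction objective (as in MCR²-style methods): the coding rate R(Z) = ½ log det(I + d/(mε²) ZZᵀ) for a d×m feature matrix Z, and the reduction ΔR = R(Z) − Σⱼ (mⱼ/m) R(Zⱼ), which expands the whole representation while compressing each class. The function names, ε value, and data are illustrative, not the authors' implementation.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d/(m*eps^2) * Z Z^T):
    bits needed to encode the m columns of the d x m matrix Z
    up to distortion eps (Gaussian rate-distortion surrogate)."""
    d, m = Z.shape
    sign, logdet = np.linalg.slogdet(np.eye(d) + (d / (m * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (m_j/m) * R(Z_j):
    the whole-dataset rate minus the class-weighted per-class rates.
    Large Delta R means classes occupy compact, near-orthogonal subspaces."""
    d, m = Z.shape
    compressed = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        compressed += (Zc.shape[1] / m) * coding_rate(Zc, eps)
    return coding_rate(Z, eps) - compressed
```

As a sanity check, features whose classes lie in orthogonal subspaces score a higher ΔR than features where both classes share one subspace, which is exactly the structure the intra-view objective rewards.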
Pages: 12
References: 78