Relation-Aware Alignment Attention Network for Multi-view Multi-label Learning

Cited by: 1
Authors
Zhang, Yi [1 ]
Shen, Jundong [1 ]
Yu, Cheng [1 ]
Wang, Chongjun [1 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Dept Comp Sci & Technol, Nanjing, Peoples R China
Source
DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2021), PT II | 2021 / Vol. 12682
Funding
National Natural Science Foundation of China;
Keywords
Multi-view multi-label; View interactions; Label correlations; Label-view dependence; Alignment attention; NEURAL-NETWORKS;
DOI
10.1007/978-3-030-73197-7_31
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Multi-View Multi-Label (MVML) learning refers to complex objects that are represented by multi-view features and associated with multiple labels simultaneously. Modeling flexible view consistency has recently been in demand, yet existing approaches cannot fully exploit the complementary information across multiple views while preserving view-specific properties. Additionally, each label has heterogeneous features from multiple views and may correlate with other labels through common views. The traditional strategy tends to select features that are discriminative for all labels; however, such globally shared features cannot handle label heterogeneity. Furthermore, previous studies model view consistency and label correlations independently, so interactions between views and labels are not fully exploited. In this paper, we propose a novel MVML learning approach named the Relation-aware Alignment attention Network (RAIN), which considers three types of relationships: 1) view interactions: capture diverse and complementary information for deep correlated subspace learning; 2) label correlations: adopt multi-head attention to learn semantic label embeddings; 3) label-view dependence: dynamically extract label-specific representations under the guidance of the learned label embeddings. Experiments on various MVML datasets demonstrate the effectiveness of RAIN compared with state-of-the-art methods. We also experiment on a real-world Herbs dataset, which shows promising results for clinical decision support.
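For orientation only, the sketch below illustrates one plausible reading of the three relation types named in the abstract (view interactions, label correlations via multi-head attention, and label-view alignment attention) as a minimal PyTorch module. All layer choices, dimensions, and hyper-parameters are assumptions made for illustration; they are not the authors' implementation, which is described in the full paper at the DOI above.

```python
import torch
import torch.nn as nn


class RAINSketch(nn.Module):
    """Minimal sketch of the three relation types described in the abstract.

    Hypothetical choices (view_dims, embed_dim, num_heads, single shared
    subspace) are illustrative assumptions, not the published RAIN model.
    """

    def __init__(self, view_dims, num_labels, embed_dim=128, num_heads=4):
        super().__init__()
        # 1) View interactions: per-view encoders mapping into a shared subspace.
        self.view_encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, embed_dim), nn.ReLU()) for d in view_dims]
        )
        # Learnable label embeddings (one vector per label).
        self.label_embed = nn.Parameter(torch.randn(num_labels, embed_dim))
        # 2) Label correlations: multi-head self-attention over label embeddings.
        self.label_self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # 3) Label-view dependence: cross-attention from labels to view representations.
        self.align_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Per-label scoring of the label-specific representation.
        self.classifier = nn.Linear(embed_dim, 1)

    def forward(self, views):
        # views: list of tensors, each of shape (batch, view_dims[v]).
        # Encode each view and stack: (batch, num_views, embed_dim).
        view_feats = torch.stack(
            [enc(x) for enc, x in zip(self.view_encoders, views)], dim=1
        )

        batch = view_feats.size(0)
        labels = self.label_embed.unsqueeze(0).expand(batch, -1, -1)

        # Label correlations via self-attention: (batch, num_labels, embed_dim).
        labels, _ = self.label_self_attn(labels, labels, labels)

        # Alignment attention: each label queries the view representations to
        # extract its own label-specific representation.
        label_specific, _ = self.align_attn(labels, view_feats, view_feats)

        # One logit per label: (batch, num_labels).
        return self.classifier(label_specific).squeeze(-1)


if __name__ == "__main__":
    model = RAINSketch(view_dims=[100, 50], num_labels=10)
    x = [torch.randn(4, 100), torch.randn(4, 50)]
    print(model(x).shape)  # torch.Size([4, 10])
```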
Pages: 465-482
Number of pages: 18