Partially disentangled latent relations for multi-label deep learning

Cited by: 2
Authors
Lian, Si-ming [1 ]
Liu, Jian-wei [1 ]
Lu, Run-kun [1 ]
Luo, Xiong-lin [1 ]
Affiliations
[1] China Univ Petr CUP, Coll Informat Sci & Engn, Dept Automat, Beijing Campus,260 Mailbox, Beijing 102249, Peoples R China
Keywords
Disentangled latent relations; Diffusion; Self-attention; Feature representation; Multi-label learning; CLASSIFICATION;
DOI
10.1007/s00521-020-05381-w
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In multi-label learning, extracting label-specific features from instances under the supervision of class labels is meaningful, and the "purified" feature representations can also be shared across labels during the learning process. In addition, it is essential to distinguish the inter-instance relations in the input space from the inter-label correlations in the output space of multi-label datasets, which helps improve the performance of multi-label algorithms. However, most current multi-label algorithms aim only to capture the mapping between instances and labels, ignoring the information about instance relations and label correlations embedded in the multi-label data structure. Motivated by these issues, we leverage a deep network to learn specific feature representations for each multi-label component without discarding overlapping features that may belong to other components. Meanwhile, Euclidean distance matrices are used to construct the diagonal matrix of a diffusion function; the new class latent representation obtained by this graph-based diffusion preserves the inter-instance relations, ensuring that similar features have similar label sets. Further, considering that these feature representations contribute differently to the final multi-label prediction, a self-attention mechanism is introduced to fuse the label-specific instance features into a new joint feature representation, which derives dynamic weights for multi-label prediction. Finally, experimental results on real-world datasets demonstrate the broad applicability of our approach.
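The two mechanisms the abstract names — graph-based diffusion over an affinity graph built from Euclidean distances, and self-attention fusion of label-specific feature views — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the Gaussian-kernel bandwidth, and the dot-product attention score are assumptions introduced here.

```python
import numpy as np

def diffusion(X, Y, alpha=0.9, t=20):
    """Propagate label information over an instance affinity graph.

    X: (n, d) instance features; Y: (n, q) binary label matrix.
    Returns an (n, q) diffused label representation that keeps
    similar instances close in label space.
    """
    # Pairwise squared Euclidean distances -> Gaussian affinity graph
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / d2.mean())          # bandwidth = mean distance (assumption)
    np.fill_diagonal(W, 0.0)
    # Diagonal degree matrix used to symmetrically normalize the graph
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    S = d_inv_sqrt @ W @ d_inv_sqrt
    F = Y.astype(float)
    for _ in range(t):                   # iterative diffusion update
        F = alpha * S @ F + (1 - alpha) * Y
    return F

def attention_fuse(views):
    """Fuse K label-specific feature views with per-instance attention weights.

    views: list of K arrays, each (n, d). Returns an (n, d) joint
    representation weighted by a simple dot-product attention score.
    """
    V = np.stack(views, axis=1)                            # (n, K, d)
    scores = (V * V.mean(axis=1, keepdims=True)).sum(-1)   # (n, K)
    w = np.exp(scores - scores.max(axis=1, keepdims=True)) # softmax weights
    w /= w.sum(axis=1, keepdims=True)
    return (w[..., None] * V).sum(axis=1)                  # (n, d)
```

In this sketch the diffusion step plays the role of preserving inter-instance relations, while `attention_fuse` supplies the dynamic per-instance weights the abstract attributes to the self-attention module.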
Pages: 6039-6064
Page count: 26