Compressed labeling on distilled labelsets for multi-label learning

Cited by: 40
Authors
Zhou, Tianyi [3 ]
Tao, Dacheng [3 ]
Wu, Xindong [1 ,2 ]
Affiliations
[1] Hefei Univ Technol, Dept Comp Sci, Hefei 230009, Peoples R China
[2] Univ Vermont, Dept Comp Sci, Burlington, VT 05405 USA
[3] Univ Technol Sydney, Fac Engn & IT, Ctr Quantum Computat & Intelligent Syst QCIS, Broadway, NSW 2007, Australia
Funding
U.S. National Science Foundation;
Keywords
NEURAL-NETWORKS; CLASSIFICATION; ALGORITHMS;
DOI
10.1007/s10994-011-5276-1
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Directly applying single-label classification methods to multi-label learning problems substantially limits both performance and speed because of the imbalance, dependence, and high dimensionality of the given label matrix. Existing methods either ignore these three problems or reduce one at the price of aggravating another. In this paper, we propose a {0,1} label matrix compression and recovery method termed "compressed labeling (CL)" to simultaneously solve, or at least reduce, these three problems. CL first compresses the original label matrix to improve balance and independence by preserving the signs of its Gaussian random projections. Afterward, we directly apply popular binary classification methods (e.g., support vector machines) to each new label. A fast recovery algorithm is developed to recover the original labels from the predicted new labels. In the recovery algorithm, a "labelset distilling method" is designed to extract distilled labelsets (DLs), i.e., the label subsets that appear frequently in the original labels, via recursive clustering and subtraction. Given a distilled and an original label vector, we discover that the signs of their random projections have an explicit joint distribution that can be quickly computed from a geometric inference. Based on this observation, the original label vector is exactly determined after a series of Kullback-Leibler divergence based hypothesis tests on this distribution over the new labels. CL significantly improves the balance of the training samples and reduces the dependence between different labels. Moreover, it accelerates the learning process by training fewer binary classifiers on the compressed labels, and exploits label dependence via DL-based tests. Theoretically, we prove recovery bounds for CL, which verify its effectiveness for label compression and the improvement in multi-label classification brought by the label correlations preserved in the DLs. We show the effectiveness, efficiency, and robustness of CL via 5 groups of experiments on 21 datasets from text classification, image annotation, scene classification, music categorization, genomics, and web page classification.
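As a concrete illustration of the compression step described in the abstract, the following is a minimal NumPy sketch that compresses a {0,1} label matrix by keeping only the signs of its Gaussian random projections; the function name compress_labels, the target dimension d, and the tie-breaking rule are illustrative assumptions rather than the authors' implementation, and the recovery stage (distilled labelsets and KL-divergence tests) is omitted.

    import numpy as np

    def compress_labels(Y, d, seed=0):
        """Compress a {0,1} label matrix Y (n_samples x k labels) into the
        sign matrix of d Gaussian random projections (illustrative sketch of
        the CL compression step; names and dimensions are assumptions)."""
        rng = np.random.default_rng(seed)
        k = Y.shape[1]
        A = rng.standard_normal((k, d))   # Gaussian random projection matrix
        Z = np.sign(Y @ A)                # keep only the signs of the projections
        Z[Z == 0] = 1                     # break exact-zero ties so labels stay binary
        return Z, A                       # Z in {-1,+1}: the compressed label matrix

    # toy usage: 4 samples, 6 original labels compressed to 3 new labels
    Y = np.array([[1, 0, 1, 0, 0, 1],
                  [0, 1, 1, 0, 1, 0],
                  [1, 1, 0, 0, 0, 1],
                  [0, 0, 0, 1, 1, 0]])
    Z, A = compress_labels(Y, d=3)
    # each column of Z now defines one ordinary binary classification task

In this sketch, training one binary classifier per column of Z (rather than per original label) is what yields the speed-up and improved balance claimed in the abstract.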
Pages: 69-126
Page count: 58
Related Papers
50 records in total
  • [1] NkEL: nearest k-labelsets ensemble for multi-label learning
    Zhong, Xi-Yan
    Zhang, Yu-Li
    Wang, Dan-Dong
    Min, Fan
    APPLIED INTELLIGENCE, 2025, 55 (01)
  • [2] Mutual Information Based K-Labelsets Ensemble for Multi-Label Classification
    Wang, Ran
    Kwong, Sam
    Jia, Yuheng
    Huang, Zhiqi
    Wu, Lang
    2018 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE), 2018
  • [3] Alignment Based Kernel Selection for Multi-Label Learning
    Chen, Linlin
    Chen, Degang
    Wang, Hui
    NEURAL PROCESSING LETTERS, 2019, 49 (03) : 1157 - 1177
  • [4] Robust Multi-Graph Multi-Label Learning With Dual-Granularity Labeling
    Wang, Yejiang
    Zhao, Yuhai
    Wang, Zhengkui
    Zhang, Chengqi
    Wang, Xingwei
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (10) : 6509 - 6524
  • [5] Multi-label Ensemble Learning
    Shi, Chuan
    Kong, Xiangnan
    Yu, Philip S.
    Wang, Bai
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PT III, 2011, 6913 : 223 - 239
  • [6] On the consistency of multi-label learning
    Gao, Wei
    Zhou, Zhi-Hua
    ARTIFICIAL INTELLIGENCE, 2013, 199 : 22 - 44
  • [7] Multi-Label Manifold Learning
    Hou, Peng
    Geng, Xin
    Zhang, Min-Ling
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1680 - 1686
  • [8] Partial Multi-Label Learning
    Xie, Ming-Kun
    Huang, Sheng-Jun
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 4302 - 4309
  • [9] Labeling Information Enhancement for Multi-label Learning with Low-Rank Subspace
    Tao, An
    Xu, Ning
    Geng, Xin
    PRICAI 2018: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2018, 11012 : 671 - 683
  • [10] Asymmetry label correlation for multi-label learning
    Bao, Jiachao
    Wang, Yibin
    Cheng, Yusheng
    APPLIED INTELLIGENCE, 2022, 52 (06) : 6093 - 6105