Incomplete multi-view partial multi-label classification via deep semantic structure preservation

Cited by: 0
Authors
Li, Chaoran [1 ]
Wu, Xiyin [1 ]
Peng, Pai [1 ]
Zhang, Zhuhong [1 ]
Lu, Xiaohuan [1 ]
Affiliations
[1] Guizhou Univ, Coll Big Data & Informat Engn, Guiyang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-view multi-label learning; Incomplete view; Missing label; Pseudo-labeling; Graph constraint learning; TUTORIAL; MODEL;
DOI
10.1007/s40747-024-01562-5
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Recent advances in multi-view multi-label learning are often hampered by the prevalent challenges of incomplete views and missing labels, which are common in real-world data due to uncertainties in data collection and manual annotation. These challenges restrict a model's capacity to fully utilize the diverse semantic information of each sample, posing significant barriers to effective learning. Despite substantial scholarly effort, many existing methods inadequately capture the depth of semantic information, focusing primarily on shallow feature extraction that fails to maintain semantic consistency. To address these shortcomings, we propose a novel deep Semantic Structure-Preserving (SSP) model that effectively tackles both incomplete views and missing labels. SSP incorporates a graph constraint learning (GCL) scheme to ensure the preservation of semantic structure throughout the feature extraction process across different views. Additionally, SSP integrates a pseudo-labeling self-paced learning (PSL) strategy to address the often-overlooked issue of missing labels, enhancing classification accuracy while preserving the distribution structure of the data. The SSP model creates a unified framework that synergistically employs GCL and PSL to maintain the integrity of semantic structural information during both the feature extraction and classification phases. Extensive evaluations on five real-world datasets demonstrate that the SSP method outperforms existing approaches, including lrMMC, MVL-IV, MvEL, iMSF, iMvWL, NAIML, and DD-IMvMLC-net. It effectively mitigates the impact of data incompleteness and enhances semantic representation fidelity.
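The abstract names the two components (GCL and PSL) without giving their formulations. The Python/PyTorch sketch below is not the authors' SSP implementation; it only illustrates one plausible reading of the two ideas under assumed forms: a k-NN graph Laplacian penalty tr(Z^T L Z) that pushes per-view embeddings to respect the neighbourhood structure of the raw view, and a self-paced pseudo-labeling term that treats confident predictions on missing label entries as soft targets with a growing weight. All names (knn_laplacian, ViewEncoder, ssp_like_loss) and hyperparameters (lam_graph, tau, age) are hypothetical.

# Illustrative sketch only -- not the SSP model from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_laplacian(x: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Unnormalised graph Laplacian L = D - W from a k-NN affinity on raw view features."""
    d = torch.cdist(x, x)                        # pairwise Euclidean distances
    idx = d.topk(k + 1, largest=False).indices   # each row: self + k nearest neighbours
    w = torch.zeros_like(d)
    w.scatter_(1, idx, 1.0)
    w = ((w + w.t()) > 0).float()                # symmetrise the adjacency
    w.fill_diagonal_(0.0)
    return torch.diag(w.sum(dim=1)) - w


class ViewEncoder(nn.Module):
    """Per-view encoder mapping a view's features into a shared embedding space (assumed form)."""
    def __init__(self, in_dim: int, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def ssp_like_loss(views, view_masks, logits, labels, label_mask,
                  encoders, lam_graph=0.1, tau=0.9, age=1.0):
    """Supervised BCE on observed labels + graph constraint + self-paced pseudo-label term."""
    # 1) Binary cross-entropy restricted to observed label entries (label_mask == 1).
    bce = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    sup = (bce * label_mask).sum() / label_mask.sum().clamp(min=1.0)

    # 2) Graph constraint per available view: tr(Z^T L Z) keeps embeddings of
    #    neighbouring samples close, preserving the view's semantic structure.
    graph = logits.new_zeros(())
    for x, m, enc in zip(views, view_masks, encoders):
        xa = x[m]                                # samples for which this view is present
        if xa.shape[0] < 3:
            continue
        z = enc(xa)
        lap = knn_laplacian(xa, k=min(5, xa.shape[0] - 1))
        graph = graph + torch.trace(z.t() @ lap @ z) / xa.shape[0]

    # 3) Self-paced pseudo-labeling on missing entries: confident predictions act
    #    as targets, weighted more heavily as the pace parameter `age` grows.
    with torch.no_grad():
        p = torch.sigmoid(logits)
        pseudo = (p > tau).float()
        conf = (p - 0.5).abs() * 2 * (1.0 - label_mask)   # confidence, missing entries only
        w = torch.clamp(age * conf, max=1.0)
    pseudo_term = (F.binary_cross_entropy_with_logits(logits, pseudo, reduction="none") * w).mean()

    return sup + lam_graph * graph + pseudo_term

In a real training loop this loss would be summed over mini-batches and backpropagated through both the view encoders and the classifier head; the actual SSP objective, loss weights, and network depths should be taken from the paper itself.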
Pages: 7661-7679
Number of pages: 19