Dual-view graph convolutional network for multi-label text classification

Cited: 1
Authors
Li, Xiaohong [1 ]
You, Ben [1 ]
Peng, Qixuan [1 ]
Feng, Shaojie [1 ]
Affiliations
[1] Northwest Normal Univ, Coll Comp Sci & Engn, Lanzhou 730070, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-label classification; Graph convolutional networks; Random walk model; Label co-occurrence; Label graph; NEURAL-NETWORKS;
DOI
10.1007/s10489-024-05666-w
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Multi-label text classification assigns multiple relevant category labels to each text and has been widely applied in the real world. To improve performance, most existing methods focus only on optimizing document and label representations, assuming that accurate label-document similarity is crucial. However, the latent relevance between labels and the long-tail distribution of labels are also key factors affecting multi-label classification performance. To this end, we propose DV-MLTC, a multi-label text classification model based on a dual-view graph convolutional network that predicts multiple labels for a text. Specifically, we use graph convolutional networks to explore the latent correlation between labels from both a global and a local view. First, we capture the global consistency of labels on a global label graph built from existing statistical information and generate label paths with a random walk algorithm to reconstruct the label graph. Then, to capture relationships between low-frequency co-occurring labels on the reconstructed graph, we exploit the local consistency of labels to guide the generation of reasonable co-occurring label pairs within a local neighborhood, which also helps alleviate the long-tail distribution of labels. Finally, we integrate the global and local consistency of labels to address the highly skewed distribution caused by incomplete label co-occurrence patterns in the label co-occurrence graph. Evaluation shows that our model achieves competitive results compared with existing state-of-the-art methods while striking a better balance between efficiency and performance.
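The abstract's pipeline of building a label co-occurrence graph from statistics and reconstructing it via random-walk label paths can be illustrated with a minimal sketch. This is not the paper's actual algorithm (DV-MLTC's exact walk strategy, weighting, and GCN layers are not given here); all function names and parameters (`build_cooccurrence_graph`, `walks_per_node`, `window`, etc.) are hypothetical illustrations of the general idea.

```python
import random
from collections import defaultdict

def build_cooccurrence_graph(label_sets):
    """Weighted adjacency: how often two labels co-occur in a document's label set."""
    graph = defaultdict(lambda: defaultdict(int))
    for labels in label_sets:
        for a in labels:
            for b in labels:
                if a != b:
                    graph[a][b] += 1
    return graph

def random_walk_paths(graph, walks_per_node=10, walk_length=5, seed=0):
    """Generate label paths by co-occurrence-weighted random walks over the graph."""
    rng = random.Random(seed)
    paths = []
    for start in graph:
        for _ in range(walks_per_node):
            path, node = [start], start
            for _ in range(walk_length - 1):
                neighbors = list(graph[node])
                if not neighbors:
                    break
                weights = [graph[node][n] for n in neighbors]
                node = rng.choices(neighbors, weights=weights, k=1)[0]
                path.append(node)
            paths.append(path)
    return paths

def reconstruct_graph(paths, window=2):
    """Re-estimate edges from labels within `window` steps on a walk; this can link
    label pairs that rarely co-occur directly, easing the long-tail problem."""
    new_graph = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for i, a in enumerate(path):
            for b in path[i + 1 : i + 1 + window]:
                if a != b:
                    new_graph[a][b] += 1
                    new_graph[b][a] += 1
    return new_graph
```

In a model such as the one described, the reconstructed graph's (normalized) adjacency would then serve as input to a graph convolutional network that propagates label representations.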
Pages: 9363-9380
Page count: 18