Automated Social Text Annotation With Joint Multilabel Attention Networks

Cited by: 15
Authors
Dong, Hang [1 ,2 ,3 ]
Wang, Wei [2 ]
Huang, Kaizhu [4 ,5 ]
Coenen, Frans [1 ]
Affiliations
[1] Univ Liverpool, Dept Comp Sci, Liverpool L69 7ZX, Merseyside, England
[2] Xian Jiaotong Liverpool Univ, Dept Comp Sci & Software Engn, Suzhou 215123, Peoples R China
[3] Univ Edinburgh, Ctr Med Informat, Usher Inst, Edinburgh EH16 4UX, Midlothian, Scotland
[4] Xian Jiaotong Liverpool Univ, Dept Elect & Elect Engn, Suzhou 215123, Peoples R China
[5] Alibaba Zhejiang Univ Joint Inst Frontier Technol, Hangzhou 310000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Attention mechanisms; automated social annotation; deep learning; multilabel classification; recurrent neural networks (RNNs); CLASSIFICATION; QUALITY;
DOI
10.1109/TNNLS.2020.3002798
CLC classification number
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Automated social text annotation is the task of suggesting a set of tags for documents shared on social media platforms. Automating the annotation process can reduce users' cognitive overhead in tagging and improve tag management for better search, browsing, and recommendation of documents. The task can be formulated as a multilabel classification problem. We propose a novel deep learning-based method for this problem and design an attention-based neural network with semantic-based regularization, which mimics users' reading and annotation behavior to form better document representations while leveraging the semantic relations among labels. The network models the title and the content of each document separately and injects an explicit, title-guided attention mechanism into each sentence. To exploit the correlation among labels, we propose two semantic-based loss regularizers, i.e., similarity and subsumption, which enforce that the output of the network conforms to label semantics. The model with the semantic-based loss regularizers is referred to as the joint multilabel attention network (JMAN). We conducted a comprehensive evaluation study comparing JMAN to state-of-the-art baseline models on four large, real-world social media data sets. In terms of F1, JMAN significantly outperformed the bidirectional gated recurrent unit (Bi-GRU) by around 12.8%-78.6% (relative) and the hierarchical attention network (HAN) by around 3.9%-23.8%. The JMAN model also demonstrates advantages in convergence and training speed. Further performance gains were observed against latent Dirichlet allocation (LDA) and the support vector machine (SVM). When the semantic-based loss regularizers were applied, the F1 performance of HAN and Bi-GRU was also boosted. We also found that dynamically updating the label semantic matrices (JMAN(d)) has the potential to further improve the performance of JMAN, but at the cost of substantial memory, and warrants further study.
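The abstract describes two semantic-based loss regularizers, similarity and subsumption, that push the network's label predictions toward label semantics. The following is a minimal, illustrative Python sketch of how such regularizers could be written, not the paper's actual formulation: the function names, the pairwise squared-difference penalty, and the uniform normalization are assumptions.

```python
def similarity_regularizer(y_hat, sim):
    """Penalize divergent predicted scores for semantically similar labels.

    y_hat: predicted probability per label (length L) for one document.
    sim[i][j]: semantic similarity weight between labels i and j.
    """
    L = len(sim)
    total = sum(sim[i][j] * (y_hat[i] - y_hat[j]) ** 2
                for i in range(L) for j in range(L))
    return total / (L * L)

def subsumption_regularizer(y_hat, sub):
    """Penalize a narrower (child) label scoring above its broader (parent) label.

    sub[i][j] = 1 if label i subsumes (is broader than) label j, else 0.
    Only the violating direction, child score above parent score, is penalized.
    """
    L = len(sub)
    total = sum(sub[i][j] * max(y_hat[j] - y_hat[i], 0.0) ** 2
                for i in range(L) for j in range(L))
    return total / (L * L)
```

In this sketch, either term would be added (with a weight) to the base multilabel classification loss, so that gradient descent nudges similar labels toward similar scores and keeps child-label scores below their parents'.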
Pages: 2224-2238
Page count: 15