Pre-trained language models for keyphrase prediction: A review

Cited by: 2
Authors
Umair, Muhammad [1 ]
Sultana, Tangina [1 ,2 ]
Lee, Young-Koo [1 ]
Affiliations
[1] Kyung Hee Univ, Dept Comp Sci & Engn, Global Campus, Yongin, South Korea
[2] Hajee Mohammad Danesh Sci & Technol Univ, Dept Elect & Commun Engn, Dinajpur, Bangladesh
Source
ICT EXPRESS, 2024, Vol. 10, No. 4
Keywords
Keyphrases; Keyphrase extraction; Keyphrase generation; Pre-trained language models; Natural language processing; Large language models; Review; EXTRACTION;
DOI
10.1016/j.icte.2024.05.015
CLC classification
TP [Automation technology, Computer technology];
Discipline code
0812;
Abstract
Keyphrase Prediction (KP) is essential for identifying the keyphrases in a document that summarize its content. Recent advances in Natural Language Processing (NLP) have produced more effective KP models based on deep learning techniques. The lack of a comprehensive study that jointly examines keyphrase extraction and keyphrase generation with pre-trained language models marks a critical gap in the literature, and our survey bridges that gap with a unified, in-depth analysis that addresses the limitations of previous surveys. This paper extensively examines pre-trained language models for keyphrase prediction (PLM-KP), which are trained on large text corpora via different learning techniques (supervised, unsupervised, semi-supervised, and self-supervised), to provide insights into the two corresponding NLP tasks, namely Keyphrase Extraction (KPE) and Keyphrase Generation (KPG). We introduce appropriate taxonomies for PLM-KPE and KPG to highlight these two main tasks of NLP. Moreover, we point out some promising future directions for predicting keyphrases. (c) 2024 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
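To illustrate the keyphrase extraction task discussed in the abstract, the sketch below shows a minimal embedding-based extractor in the spirit of KeyBERT-style methods. It is illustrative only: it assumes the sentence-transformers and scikit-learn packages and the all-MiniLM-L6-v2 encoder, and it is not the method proposed in the surveyed paper.

# Minimal embedding-based keyphrase extraction sketch (illustrative only).
# Assumes sentence-transformers and scikit-learn are installed; the encoder
# name "all-MiniLM-L6-v2" is one common choice, not the survey's own model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

def extract_keyphrases(doc, top_n=5):
    # Candidate phrases: uni- and bi-grams taken from the document itself.
    vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english").fit([doc])
    candidates = vectorizer.get_feature_names_out()

    # Embed the document and each candidate with a pre-trained encoder.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_emb = model.encode([doc])
    cand_embs = model.encode(list(candidates))

    # Rank candidates by semantic similarity to the whole document.
    scores = cosine_similarity(doc_emb, cand_embs)[0]
    return sorted(zip(candidates, scores), key=lambda p: -p[1])[:top_n]

if __name__ == "__main__":
    text = ("Keyphrase prediction identifies phrases that summarize a document. "
            "Pre-trained language models support both extraction and generation.")
    for phrase, score in extract_keyphrases(text):
        print(f"{phrase}\t{score:.3f}")

Generation-style approaches, by contrast, typically fine-tune a sequence-to-sequence pre-trained model so that keyphrases absent from the source text can also be produced.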
Pages: 871 - 890
Page count: 20
Related papers
50 records in total
  • [31] Rethinking Textual Adversarial Defense for Pre-Trained Language Models
    Wang, Jiayi
    Bao, Rongzhou
    Zhang, Zhuosheng
    Zhao, Hai
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 2526 - 2540
  • [32] Automated Assessment of Inferences Using Pre-Trained Language Models
    Yoo, Yongseok
    APPLIED SCIENCES-BASEL, 2024, 14 (09)
  • [33] Improving Braille-Chinese translation with jointly trained and pre-trained language models
    Huang, Tianyuan
    Su, Wei
    Liu, Lei
    Cai, Chuan
    Yu, Hailong
    Yuan, Yongna
    DISPLAYS, 2024, 82
  • [34] A Comparison of SVM Against Pre-trained Language Models (PLMs) for Text Classification Tasks
    Wahba, Yasmen
    Madhavji, Nazim
    Steinbacher, John
    MACHINE LEARNING, OPTIMIZATION, AND DATA SCIENCE, LOD 2022, PT II, 2023, 13811 : 304 - 313
  • [35] Comparing pre-trained language models for Spanish hate speech detection
    Plaza-del-Arco, Flor Miriam
    Molina-Gonzalez, M. Dolores
    Urena-Lopez, L. Alfonso
    Martin-Valdivia, M. Teresa
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 166
  • [36] Pre-trained language models with domain knowledge for biomedical extractive summarization
    Xie, Q.
    Bishop, J. A.
    Tiwari, P.
    Ananiadou, S.
    KNOWLEDGE-BASED SYSTEMS, 2022, 252
  • [37] Effectiveness of Pre-Trained Language Models for the Japanese Winograd Schema Challenge
    Takahashi, Keigo
    Oka, Teruaki
    Komachi, Mamoru
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2023, 27 (03) : 511 - 521
  • [38] A survey on moral foundation theory and pre-trained language models: current advances and challenges
    Zangari, Lorenzo
    Greco, Candida Maria
    Picca, Davide
    Tagarelli, Andrea
    AI & SOCIETY, 2025,
  • [39] Enhancing radiology report generation through pre-trained language models
    Leonardi, Giorgio
    Portinale, Luigi
    Santomauro, Andrea
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2024,
  • [40] PaleAle 6.0: Prediction of Protein Relative Solvent Accessibility by Leveraging Pre-Trained Language Models (PLMs)
    Alanazi, Wafa
    Meng, Di
    Pollastri, Gianluca
    BIOMOLECULES, 2025, 15 (01)