Pre-trained language models for keyphrase prediction: A review

Cited by: 2
Authors
Umair, Muhammad [1 ]
Sultana, Tangina [1 ,2 ]
Lee, Young-Koo [1 ]
Affiliations
[1] Kyung Hee Univ, Dept Comp Sci & Engn, Global Campus, Yongin, South Korea
[2] Hajee Mohammad Danesh Sci & Technol Univ, Dept Elect & Commun Engn, Dinajpur, Bangladesh
Source
ICT EXPRESS | 2024, Vol. 10, Issue 4
Keywords
Keyphrases; Keyphrase extraction; Keyphrase generation; Pre-trained language models; Natural language processing; Large language models; Review
DOI
10.1016/j.icte.2024.05.015
CLC Number
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
Keyphrase Prediction (KP) identifies the keyphrases in a document that best summarize its content. Recent advances in Natural Language Processing (NLP) have produced more effective KP models built on deep learning techniques, yet no prior survey comprehensively explores keyphrase extraction and keyphrase generation together through the lens of pre-trained language models; this critical gap in the literature compels our survey, which bridges it with a unified, in-depth analysis addressing the limitations of previous surveys. This paper extensively examines pre-trained language models for keyphrase prediction (PLM-KP), which are trained on large text corpora via different learning techniques (supervised, unsupervised, semi-supervised, and self-supervised), and provides insights into the two corresponding NLP tasks: Keyphrase Extraction (KPE) and Keyphrase Generation (KPG). We introduce appropriate taxonomies for PLM-KPE and KPG to highlight these two main tasks, and we point out promising future directions for keyphrase prediction. (c) 2024 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
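To make the KPE/KPG distinction concrete, the sketch below illustrates the embedding-similarity family of PLM-based keyphrase extraction methods that surveys of this kind cover: candidate phrases are ranked by how closely their PLM embeddings match the document's embedding. This is a minimal sketch, assuming the sentence-transformers library, the all-MiniLM-L6-v2 checkpoint, and a caller-supplied candidate list; none of these come from the reviewed paper itself.

    # Minimal sketch of unsupervised, embedding-based keyphrase extraction
    # (the PLM-KPE family). The model checkpoint and candidate phrases are
    # illustrative assumptions, not the reviewed paper's own method.
    from sentence_transformers import SentenceTransformer, util

    def rank_keyphrases(document, candidates, top_k=5):
        """Rank candidate phrases by cosine similarity to the document."""
        model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed PLM checkpoint
        doc_emb = model.encode(document, convert_to_tensor=True)
        cand_embs = model.encode(candidates, convert_to_tensor=True)
        scores = util.cos_sim(doc_emb, cand_embs)[0]  # one score per candidate
        ranked = sorted(zip(candidates, scores.tolist()),
                        key=lambda pair: pair[1], reverse=True)
        return ranked[:top_k]

    # Example usage (hypothetical inputs):
    # rank_keyphrases("Pre-trained language models improve keyphrase prediction.",
    #                 ["keyphrase prediction", "language models", "weather"])

A KPG method, by contrast, would decode keyphrases with a sequence-to-sequence PLM rather than rank candidates extracted from the text, so it can also produce absent keyphrases that never appear verbatim in the document.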
Pages: 871-890
Page count: 20
Related Papers
50 items in total
  • [41] Identifying Styles of Cross-Language Classics with Pre-Trained Models
    Zhang Y.; Deng S.; Hu H.; Wang D.
    Data Analysis and Knowledge Discovery, 2023, 7(10): 50-62
  • [42] Gauging, enriching and applying geography knowledge in Pre-trained Language Models
    Ramrakhiyani, Nitin; Varma, Vasudeva; Palshikar, Girish Keshav; Pawar, Sachin
    INFORMATION PROCESSING & MANAGEMENT, 2025, 62(01)
  • [43] UniRaG: Unification, Retrieval, and Generation for Multimodal Question Answering With Pre-Trained Language Models
    Lim, Qi Zhi; Lee, Chin Poo; Lim, Kian Ming; Samingan, Ahmad Kamsani
    IEEE ACCESS, 2024, 12: 71505-71519
  • [44] Intelligent Completion of Ancient Texts Based on Pre-trained Language Models
    Li J.; Ming C.; Guo Z.; Qian T.; Peng Z.; Wang X.; Li X.; Li J.
    Data Analysis and Knowledge Discovery, 2024, 8(05): 59-67
  • [45] A Short Survey of Pre-trained Language Models for Conversational AI - A New Age in NLP
    Zaib, Munazza; Sheng, Quan Z.; Zhang, Wei Emma
    PROCEEDINGS OF THE AUSTRALASIAN COMPUTER SCIENCE WEEK MULTICONFERENCE (ACSW 2020), 2020
  • [46] Evaluating Embeddings from Pre-Trained Language Models and Knowledge Graphs for Educational Content Recommendation
    Li, Xiu; Henriksson, Aron; Duneld, Martin; Nouri, Jalal; Wu, Yongchao
    FUTURE INTERNET, 2024, 16(01)
  • [47] Hybrid multi-document summarization using pre-trained language models
    Ghadimi, Alireza; Beigy, Hamid
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 192
  • [48] A survey of text classification based on pre-trained language model
    Wu, Yujia; Wan, Jun
    NEUROCOMPUTING, 2025, 616
  • [49] Polyphone Disambiguation and Accent Prediction Using Pre-trained Language Models in Japanese TTS Front-End
    Hida, Rem; Hamada, Masaki; Kamada, Chie; Tsunoo, Emiru; Sekiya, Toshiyuki; Kumakura, Toshiyuki
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 7132-7136
  • [50] Leveraging Pre-trained Language Models for Time Interval Prediction in Text-Enhanced Temporal Knowledge Graphs
    Islakoglu, Duygu Sezen; Chekol, Melisachew Wudage; Velegrakis, Yannis
    SEMANTIC WEB, PT I, ESWC 2024, 2024, 14664: 59-78