Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning

Cited by: 12
Authors
Bu, Kun [1 ]
Liu, Yuanchao [1 ]
Ju, Xiaolong [1 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, 92 West Dazhi St, Harbin 15001, Heilongjiang, Peoples R China
Keywords
Sentiment analysis; Prompt learning; Word embedding; Pre-trained models; Natural language processing; SOCIAL MEDIA; LANGUAGE MODELS; NEURAL-NETWORKS; KNOWLEDGE;
DOI
10.1016/j.knosys.2023.111148
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sentiment analysis is one of the traditional, well-known tasks in Natural Language Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of the frontiers of NLP, and the knowledge in PMs is usually leveraged to improve machine learning models' performance on a variety of downstream NLP tasks, including sentiment analysis. However, PM-based approaches also have some shortcomings. For example, many studies have pointed out that there are gaps between pre-training and fine-tuning. In addition, because the data annotation process is time-consuming and costly, labeled training data are usually precious and scarce, which often leads to over-fitting of models. The recent advent of prompt learning provides a promising solution to the above challenges. In this paper, we first discuss the background of prompt learning and its basic principles. Prompt learning changes the model input by adding templates, allowing learning tasks to adapt actively to pre-trained models, and can therefore promote the innovation and applicability of pre-trained models. We then investigate the evolution of sentiment analysis and explore the application of prompt learning to different sentiment analysis tasks. Our research and review show that prompt learning is well suited to sentiment analysis tasks and can achieve good performance. Finally, we also provide some future research directions for prompt-based sentiment analysis. Our survey demonstrates that prompt learning can facilitate the efficient utilization of pre-trained models in sentiment analysis and other tasks, which makes it a new paradigm worthy of further exploration.
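The template-plus-verbalizer mechanism the abstract describes can be sketched in a few lines. The names below (`TEMPLATE`, `VERBALIZER`, `toy_mask_scores`) are hypothetical; a real prompt-learning system would score the label words at the `[MASK]` position with a pre-trained masked language model such as BERT, whereas this sketch substitutes a tiny keyword lexicon so the example stays self-contained:

```python
# Minimal sketch of prompt-based sentiment classification.
# All names here are illustrative; a real system would replace
# toy_mask_scores() with a pre-trained masked LM's predictions.

TEMPLATE = "{text} It was [MASK]."  # cloze-style prompt template
# Verbalizer: maps each class label to a natural-language label word.
VERBALIZER = {"positive": "great", "negative": "terrible"}

def toy_mask_scores(prompt: str) -> dict:
    """Stand-in for a masked LM: score each candidate label word for the
    [MASK] slot using a tiny keyword lexicon (demonstration only)."""
    positive_cues = {"love", "excellent", "enjoyed", "wonderful"}
    negative_cues = {"hate", "boring", "awful", "waste"}
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return {
        "great": 1.0 + len(words & positive_cues),
        "terrible": 1.0 + len(words & negative_cues),
    }

def classify(text: str) -> str:
    # Wrap the raw input in the template, so the classification task is
    # recast as filling the [MASK] slot -- the core idea of prompt learning.
    prompt = TEMPLATE.format(text=text)
    scores = toy_mask_scores(prompt)
    # Pick the label whose verbalizer word is most plausible at [MASK].
    return max(VERBALIZER, key=lambda label: scores[VERBALIZER[label]])

print(classify("I enjoyed this wonderful film."))  # positive
print(classify("What a boring waste of time."))    # negative
```

Because the task is reformulated to match the pre-training objective (masked-token prediction), no new classification head needs to be trained, which is why prompt learning is attractive in the low-resource settings the abstract mentions.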
Pages: 18
References: 282