LLMs to the Moon? Reddit Market Sentiment Analysis with Large Language Models

Cited by: 13
Authors
Deng, Xiang [1 ,4 ]
Bashlovkina, Vasilisa [2 ]
Han, Feng [2 ]
Baumgartner, Simon [2 ]
Bendersky, Michael [3 ]
Affiliations
[1] Ohio State Univ, Columbus, OH 43210 USA
[2] Google Res, New York, NY USA
[3] Google Res, Mountain View, CA USA
[4] Google, Mountain View, CA 94043 USA
Keywords
Sentiment Analysis; Social Media; Finance; Large Language Model; Natural Language Processing; Textual Analysis;
DOI
10.1145/3543873.3587605
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Market sentiment analysis on social media content requires knowledge of both financial markets and social media jargon, which makes it a challenging task for human raters. The resulting lack of high-quality labeled data stands in the way of conventional supervised learning methods. In this work, we conduct a case study approaching this problem with semi-supervised learning using a large language model (LLM). We select Reddit as the target social media platform due to its broad coverage of topics and content types. Our pipeline first generates weak financial sentiment labels for Reddit posts with an LLM and then uses that data to train a small model that can be served in production. We find that prompting the LLM to produce Chain-of-Thought summaries and forcing it through several reasoning paths helps generate more stable and accurate labels, while training the student model using a regression loss further improves distillation quality. With only a handful of prompts, the final model performs on par with existing supervised models. Though production applications of our model are limited by ethical considerations, the model's competitive performance points to the great potential of using LLMs for tasks that otherwise require skill-intensive annotation.
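The labeling stage described above can be sketched in a few lines: sample several Chain-of-Thought completions per post, map each path's answer to a sentiment score, and aggregate across paths into a soft label that a small student model can be trained on with a regression loss. This is a minimal illustrative sketch, not the authors' code; the function names, score mapping, and aggregation details are assumptions.

```python
from collections import Counter
from statistics import mean

# Illustrative score mapping; the paper's label space and scale may differ.
SENTIMENT_SCORES = {"negative": -1.0, "neutral": 0.0, "positive": 1.0}

def aggregate_paths(sampled_labels):
    """Average the sentiment scores from several sampled CoT reasoning
    paths into one soft label in [-1, 1]; this soft value can serve as
    the regression target when distilling into a small student model."""
    return mean(SENTIMENT_SCORES[label] for label in sampled_labels)

def majority_label(sampled_labels):
    """Hard label via self-consistency majority vote, usable as a
    classification baseline for comparison."""
    return Counter(sampled_labels).most_common(1)[0][0]

# Example: five sampled reasoning paths for one hypothetical Reddit post.
paths = ["positive", "positive", "neutral", "positive", "negative"]
soft = aggregate_paths(paths)   # 0.4 -> regression target for the student
hard = majority_label(paths)    # "positive" -> majority-vote hard label
```

Averaging over reasoning paths is what makes the weak labels more stable: a single sampled path may be noisy, but disagreement between paths is absorbed into a graded score rather than a brittle hard label.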
Pages: 1014 - 1019
Page count: 6
Related Papers
50 records total
  • [21] A Survey on the Use of Large Language Models (LLMs) in Fake News
    Papageorgiou, Eleftheria
    Chronis, Christos
    Varlamis, Iraklis
    Himeur, Yassine
    FUTURE INTERNET, 2024, 16 (08)
  • [22] Addressing digital inequities in the age of large language models (LLMs)
    Ng, Olivia
    Han, Siew Ping
    MEDICAL EDUCATION, 2024, 58 (12) : 1545 - 1546
  • [23] An Evaluation of Reasoning Capabilities of Large Language Models in Financial Sentiment Analysis
    Du, Kelvin
    Xing, Frank
    Mao, Rui
    Cambria, Erik
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 189 - 194
  • [24] Sentiment analysis of online responses in the performing arts with large language models
    Seong, Baekryun
    Song, Kyungwoo
    HELIYON, 2023, 9 (12)
  • [25] Revisiting Sentiment Analysis for Software Engineering in the Era of Large Language Models
    Zhang, Ting
    Irsan, Ivana Clairine
    Thung, Ferdian
    Lo, David
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2025, 34 (03)
  • [26] LLMs and NLP Models in Cryptocurrency Sentiment Analysis: A Comparative Classification Study
    Roumeliotis, Konstantinos I.
    Tselikas, Nikolaos D.
    Nasiopoulos, Dimitrios K.
    BIG DATA AND COGNITIVE COMPUTING, 2024, 8 (06)
  • [27] Harnessing large language models (LLMs) for candidate gene prioritization and selection
    Toufiq, Mohammed
    Rinchai, Darawan
    Bettacchioli, Eleonore
    Kabeer, Basirudeen Syed Ahamed
    Khan, Taushif
    Subba, Bishesh
    White, Olivia
    Yurieva, Marina
    George, Joshy
    Jourde-Chiche, Noemie
    Chiche, Laurent
    Palucka, Karolina
    Chaussabel, Damien
    JOURNAL OF TRANSLATIONAL MEDICINE, 2023, 21 (01)
  • [28] LLMs4OL: Large Language Models for Ontology Learning
    Giglou, Hamed Babaei
    D'Souza, Jennifer
    Auer, Soeren
    SEMANTIC WEB, ISWC 2023, PART I, 2023, 14265 : 408 - 427
  • [29] Innovation and application of Large Language Models (LLMs) in dentistry - a scoping review
    Umer, Fahad
    Batool, Itrat
    Naved, Nighat
    BDJ OPEN, 2024, 10 (01)
  • [30] Large Language Models (LLMs) Enable Few-Shot Clustering
    Viswanathan, Vijay
    Gashteovski, Kiril
    Lawrence, Carolin
    Wu, Tongshuang
    Neubig, Graham
    NEC TECHNICAL JOURNAL, 2024, 17 (02) : 80 - 90