LLMs to the Moon? Reddit Market Sentiment Analysis with Large Language Models

Cited by: 17
Authors
Deng, Xiang [1,4]
Bashlovkina, Vasilisa [2]
Han, Feng [2]
Baumgartner, Simon [2]
Bendersky, Michael [3]
Affiliations
[1] Ohio State University, Columbus, OH 43210, USA
[2] Google Research, New York, NY, USA
[3] Google Research, Mountain View, CA, USA
[4] Google, Mountain View, CA 94043, USA
Source
COMPANION OF THE WORLD WIDE WEB CONFERENCE, WWW 2023 | 2023
Keywords
Sentiment Analysis; Social Media; Finance; Large Language Model; Natural Language Processing; Textual Analysis
DOI
10.1145/3543873.3587605
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Market sentiment analysis on social media content requires knowledge of both financial markets and social media jargon, which makes it a challenging task for human raters. The resulting lack of high-quality labeled data stands in the way of conventional supervised learning methods. In this work, we conduct a case study approaching this problem with semi-supervised learning using a large language model (LLM). We select Reddit as the target social media platform due to its broad coverage of topics and content types. Our pipeline first generates weak financial sentiment labels for Reddit posts with an LLM and then uses that data to train a small model that can be served in production. We find that prompting the LLM to produce Chain-of-Thought summaries and forcing it through several reasoning paths helps generate more stable and accurate labels, while training the student model using a regression loss further improves distillation quality. With only a handful of prompts, the final model performs on par with existing supervised models. Though production applications of our model are limited by ethical considerations, the model's competitive performance points to the great potential of using LLMs for tasks that otherwise require skill-intensive annotation.
Pages: 1014-1019
Number of pages: 6
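
As a rough illustration of the pipeline the abstract describes, the sketch below samples several Chain-of-Thought completions per Reddit post, averages the per-path verdicts into a weak soft label (the "several reasoning paths" step), and trains a small student model on those labels with a regression loss. This is a minimal sketch under stated assumptions: `call_llm`, the prompt wording, the three-way score mapping, and the use of plain mean-squared error are illustrative stand-ins, not the paper's actual prompts, models, or objective.

```python
import re
import statistics

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for whatever text-generation endpoint is used;
# the paper's actual LLM and serving setup are not specified here.
def call_llm(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError("plug in a real text-generation API")

# Illustrative Chain-of-Thought prompt: ask for a short summary first,
# then a final verdict in a fixed format that is easy to parse.
COT_PROMPT = (
    "Post: {post}\n"
    "Briefly summarize what this post implies about the market, "
    "then finish with exactly one line: "
    "'Sentiment: positive', 'Sentiment: negative', or 'Sentiment: neutral'."
)

SCORE = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}  # assumed mapping

def weak_label(post: str, num_paths: int = 5) -> float:
    """Sample several reasoning paths and average their verdicts into a
    soft sentiment label in [-1, 1] (self-consistency over CoT samples)."""
    scores = []
    for _ in range(num_paths):
        completion = call_llm(COT_PROMPT.format(post=post))
        match = re.search(r"Sentiment:\s*(positive|negative|neutral)",
                          completion, re.IGNORECASE)
        if match:
            scores.append(SCORE[match.group(1).lower()])
    return statistics.mean(scores) if scores else 0.0

def distill_step(student: nn.Module,
                 optimizer: torch.optim.Optimizer,
                 features: torch.Tensor,
                 soft_labels: torch.Tensor) -> float:
    """One training step: the student regresses onto the soft labels, so
    disagreement between reasoning paths survives as label uncertainty
    instead of being rounded away into a hard class."""
    optimizer.zero_grad()
    preds = student(features).squeeze(-1)
    loss = F.mse_loss(preds, soft_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Averaging across paths turns majority voting into a continuous target, which is what makes a regression objective a natural fit for the distillation step.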