Assessing Generative Language Models in Classification Tasks: Performance and Self-evaluation Capabilities in the Environmental and Climate Change Domain

Cited: 3
Authors
Grasso, Francesca [1 ]
Locci, Stefano [1 ]
Affiliations
[1] Univ Turin, Dept Comp Sci, Corso Svizzera 185, I-10149 Turin, Italy
Source
NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS, PT II, NLDB 2024 | 2024 / Vol. 14763
Keywords
Large Language Models; Text Classification; Climate Change;
DOI
10.1007/978-3-031-70242-6_29
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper examines the performance of two Large Language Models (LLMs), GPT-3.5-Turbo and Llama-2-13b, and one Small Language Model (SLM), Gemma-2b, across three classification tasks within the climate change (CC) and environmental domain. Using BERT-based models as a baseline, we compare the generative models' efficacy against them. Additionally, we assess the models' self-evaluation capabilities by analyzing the calibration of the verbalized confidence scores they produce for these text classification tasks. Our findings reveal that while the BERT-based models generally outperform both the LLMs and the SLM, the performance of the large generative models is still noteworthy. Furthermore, our calibration analysis shows that although Gemma is well calibrated on the initial tasks, it thereafter produces inconsistent results; Llama is reasonably calibrated, and GPT consistently exhibits strong calibration. Through this research, we aim to contribute to the ongoing discussion on the utility and effectiveness of generative LMs in addressing some of the planet's most urgent issues, highlighting their strengths and limitations in the context of ecology and CC.
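The calibration analysis the abstract describes compares a model's verbalized confidence scores against its actual accuracy. A minimal sketch of one standard way to quantify this, Expected Calibration Error (ECE), is shown below; the function name and the toy data are illustrative assumptions, not taken from the paper.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: accuracy-vs-confidence gap per bin, weighted by bin size.

    confidences: verbalized confidence scores in [0, 1].
    correct: 1 if the corresponding classification was right, else 0.
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Assign each prediction to a half-open confidence bin (lo, hi];
        # the first bin also catches an exact 0.0 confidence.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - avg_conf)
    return ece

# Toy example: five classifications with verbalized confidences.
confs = [0.9, 0.8, 0.6, 0.95, 0.5]
hits = [1, 1, 0, 1, 0]
print(expected_calibration_error(confs, hits))
```

A lower ECE means the verbalized confidences track accuracy more closely, which is the sense in which the abstract calls GPT "well calibrated".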
Pages: 302-313
Page count: 12