Risk communication and large language models

Times Cited: 0
Authors
Sledge, Daniel [1 ]
Thomas, Herschel F. [2 ]
Affiliations
[1] Univ Oklahoma Hlth Sci, Hudson Coll Publ Hlth, 801 NE 13th St,Room 369,POB 26901, Oklahoma City, OK 73104 USA
[2] Univ Texas Austin, Lyndon B Johnson Sch Publ Affairs, Austin, TX USA
Keywords
disaster planning and preparedness; large language models; risk communication; SOCIAL MEDIA; INFORMATION;
DOI
10.1002/rhc3.12303
Chinese Library Classification (CLC)
C93 [Management Science]; D035 [State Administration]; D523 [Administrative Management]; D63 [State Administration]
Discipline Classification Codes
12; 1201; 1202; 120202; 1204; 120401
Abstract
The widespread embrace of Large Language Models (LLMs) integrated with chatbot interfaces, such as ChatGPT, represents a potentially critical moment in the development of risk communication and management. In this article, we consider the implications of the current wave of LLM-based chat programs for risk communication. We examine ChatGPT-generated responses to 24 different hazard situations and compare these responses to the guidelines published for public consumption on the US Department of Homeland Security's Ready.gov website. We find that, although ChatGPT did not generate false or misleading responses, its responses were typically less than optimal in their similarity to federal government guidance. While delivered in an authoritative tone, these responses at times omitted important information and contained points of emphasis that differed substantially from those on Ready.gov. Moving forward, it is critical that researchers and public officials both seek to harness the power of LLMs to inform the public and acknowledge the challenges posed by a potential shift in information flows away from public officials and experts and toward individuals.
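The abstract does not state how the similarity between ChatGPT responses and Ready.gov guidance was measured. As a rough illustration only, the Python sketch below scores a hypothetical LLM answer against a hypothetical Ready.gov-style guideline using TF-IDF cosine similarity; the texts, the hazard scenario, and the scoring method are assumptions for illustration, not the authors' procedure.

```python
# Illustrative sketch (not the paper's method): score how lexically similar a
# chatbot's hazard-preparedness answer is to an official guideline using
# TF-IDF cosine similarity. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder texts standing in for (a) a Ready.gov-style guideline and
# (b) an LLM-generated response to the same hazard prompt.
reference_guidance = (
    "During a tornado warning, go to a basement or an interior room on the "
    "lowest floor, stay away from windows, and cover your head and neck."
)
llm_response = (
    "If a tornado is approaching, shelter in a small interior room or a "
    "basement, keep away from windows, and protect your head."
)

# Vectorize both texts and compute cosine similarity:
# 1.0 means identical weighted term profiles, 0.0 means no shared vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([reference_guidance, llm_response])
similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

print(f"TF-IDF cosine similarity: {similarity:.2f}")
```

A lexical score like this only captures vocabulary overlap; it would not, on its own, detect the omissions or shifted points of emphasis the article highlights, which require content-level comparison against the official guidance.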
Pages: 11