Risk communication and large language models

Times Cited: 0
Authors
Sledge, Daniel [1 ]
Thomas, Herschel F. [2 ]
Affiliations
[1] Univ Oklahoma Hlth Sci, Hudson Coll Publ Hlth, 801 NE 13th St,Room 369,POB 26901, Oklahoma City, OK 73104 USA
[2] Univ Texas Austin, Lyndon B Johnson Sch Publ Affairs, Austin, TX USA
Source
RISK HAZARDS & CRISIS IN PUBLIC POLICY | 2024
Keywords
disaster planning and preparedness; large language models; risk communication; SOCIAL MEDIA; INFORMATION;
DOI
10.1002/rhc3.12303
Chinese Library Classification (CLC)
C93 [Management Science]; D035 [State Administration]; D523 [Administrative Management]; D63 [State Administration];
Discipline Classification Codes
12 ; 1201 ; 1202 ; 120202 ; 1204 ; 120401 ;
Abstract
The widespread embrace of Large Language Models (LLMs) integrated with chatbot interfaces, such as ChatGPT, represents a potentially critical moment in the development of risk communication and management. In this article, we consider the implications of the current wave of LLM-based chat programs for risk communication. We examine ChatGPT-generated responses to 24 different hazard situations and compare these responses to guidelines published for public consumption on the US Department of Homeland Security's Ready.gov website. We find that, although ChatGPT did not generate false or misleading responses, its responses were typically less than optimal in their similarity to guidance from the federal government. While delivered in an authoritative tone, these responses at times omitted important information and contained points of emphasis that differed substantially from those on Ready.gov. Moving forward, it is critical that researchers and public officials both seek to harness the power of LLMs to inform the public and acknowledge the challenges posed by a potential shift in information flows away from public officials and experts and toward individuals.
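The abstract does not specify how the similarity between chatbot output and Ready.gov guidance was assessed. As a minimal, hypothetical sketch of one way such a comparison could be automated, the Python snippet below scores the lexical overlap between a placeholder chatbot response and placeholder official-style guidance using TF-IDF cosine similarity; the texts, the hazard scenario, and the similarity measure are all illustrative assumptions rather than material from the study.

```python
# Hypothetical sketch only: the abstract does not state how response-to-guidance
# similarity was measured. This shows one possible automated check using TF-IDF
# cosine similarity; the texts below are placeholders, not quotations from
# ChatGPT or Ready.gov.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder texts for a single hazard scenario (e.g., a tornado warning).
llm_response = (
    "Go to a basement or an interior room on the lowest floor, "
    "stay away from windows, and cover your head and neck."
)
official_guidance = (
    "Go to a safe room, basement, or storm cellar. If there is no basement, "
    "get to a small interior room on the lowest level. Stay away from windows, "
    "doors, and outside walls. Protect your head."
)

# Vectorize both texts and compute cosine similarity (1.0 = identical wording;
# lower scores flag responses whose emphasis diverges from official guidance).
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([llm_response, official_guidance])
score = cosine_similarity(tfidf[0:1], tfidf[1:2])[0, 0]

print(f"Lexical similarity to official guidance: {score:.2f}")
```

A lexical measure of this kind would only capture wording overlap; detecting omitted safety steps or shifted points of emphasis, as the article describes, would require semantic comparison or manual coding.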
Pages: 11
Related Papers
(50 in total)
  • [1] Large Language Models Empower Multimodal Integrated Sensing and Communication
    Cheng, Lu
    Zhang, Hongliang
    Di, Boya
    Niyato, Dusit
    Song, Lingyang
    IEEE COMMUNICATIONS MAGAZINE, 2025, : 190 - 197
  • [2] Constraint satisfaction in large language models
    Jacobs, Cassandra L.
    MacDonald, Maryellen C.
    LANGUAGE COGNITION AND NEUROSCIENCE, 2024, 39 (10) : 1231 - 1248
  • [3] Assessing the risk of takeover catastrophe from large language models
    Baum, Seth D.
    RISK ANALYSIS, 2024, : 752 - 765
  • [4] The Potential Impact of Large Language Models on Doctor-Patient Communication: A Case Study in Prostate Cancer
    Geanta, Marius
    Badescu, Daniel
    Chirca, Narcis
    Nechita, Ovidiu Catalin
    Radu, Cosmin George
    Rascu, Stefan
    Radavoi, Daniel
    Sima, Cristian
    Toma, Cristian
    Jinga, Viorel
    HEALTHCARE, 2024, 12 (15)
  • [5] FedsLLM: Federated Split Learning for Large Language Models over Communication Networks
    Zhao, Kai
    Yang, Zhaohui
    Huang, Chongwen
    Chen, Xiaoming
    Zhang, Zhaoyang
    2024 INTERNATIONAL CONFERENCE ON UBIQUITOUS COMMUNICATION, UCOM 2024, 2024, : 438 - 443
  • [6] A Comprehensive Overview of Backdoor Attacks in Large Language Models Within Communication Networks
    Yang, Haomiao
    Xiang, Kunlan
    Ge, Mengyu
    Li, Hongwei
    Lu, Rongxing
    Yu, Shui
    IEEE NETWORK, 2024, 38 (06): 211 - 218
  • [7] Automating Research in Business and Technical Communication: Large Language Models as Qualitative Coders
    Omizo, Ryan M.
    JOURNAL OF BUSINESS AND TECHNICAL COMMUNICATION, 2024, 38 (03) : 242 - 265
  • [8] Gender bias and stereotypes in Large Language Models
    Kotek, Hadas
    Dockum, Rikker
    Sun, David Q.
    PROCEEDINGS OF THE ACM COLLECTIVE INTELLIGENCE CONFERENCE, CI 2023, 2023, : 12 - 24
  • [9] Risk Considerations for the Department of Defense's Fielding of Large Language Models
    Blowers, Misty
    Salinas, Santos
    Bailey, Seth
    DISRUPTIVE TECHNOLOGIES IN INFORMATION SCIENCES VIII, 2024, 13058
  • [10] Large Language Models and Rule-Based Approaches in Domain-Specific Communication
    Halvonik, Dominik
    Kapusta, Jozef
    IEEE ACCESS, 2024, 12 : 107046 - 107058