Can Large Language Models Assist in Hazard Analysis?

Cited by: 1
Authors
Diemert, Simon [1 ]
Weber, Jens H. [1 ]
Affiliations
[1] Univ Victoria, Victoria, BC, Canada
Source
COMPUTER SAFETY, RELIABILITY, AND SECURITY, SAFECOMP 2023 WORKSHOPS | 2023 / Vol. 14182
Keywords
Hazard Analysis; Artificial Intelligence; Large Language Models; Co-Hazard Analysis;
DOI
10.1007/978-3-031-40953-0_35
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large Language Models (LLMs), such as GPT-3, have demonstrated remarkable natural language processing and generation capabilities and have been applied to a variety of tasks, such as source code generation. This paper explores the potential of integrating LLMs into hazard analysis for safety-critical systems, a process which we refer to as co-hazard analysis (CoHA). In CoHA, a human analyst interacts with an LLM via a context-aware chat session and uses the responses to support elicitation of possible hazard causes. In a preliminary experiment, we explore CoHA with three increasingly complex versions of a simple system, using OpenAI's ChatGPT service. The quality of ChatGPT's responses was systematically assessed to determine the feasibility of CoHA given the current state of LLM technology. The results suggest that LLMs may be useful for supporting human analysts performing hazard analysis.
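The context-aware chat interaction described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `build_coha_prompt` helper, its prompt wording, and the example system are not from the paper, which does not publish its exact prompts; the message format follows the common chat-completion convention of role-tagged messages.

```python
# Illustrative sketch of a Co-Hazard Analysis (CoHA) prompt builder.
# The helper name and prompt wording are hypothetical, not the authors' protocol.

def build_coha_prompt(system_description: str, hazard: str) -> list[dict]:
    """Assemble a context-aware chat history asking an LLM for possible
    causes of a given hazard in the described system."""
    return [
        {"role": "system",
         "content": "You are assisting a safety analyst with hazard analysis "
                    "of a safety-critical system."},
        {"role": "user",
         "content": f"System description:\n{system_description}\n\n"
                    f"Hazard: {hazard}\n"
                    "List plausible causes of this hazard."},
    ]

# Hypothetical example system, in the spirit of the simple systems the
# experiment uses (the paper's actual study systems are not reproduced here).
messages = build_coha_prompt(
    "A water-tank controller opens a drain valve when the level sensor "
    "reads above a threshold.",
    "The tank overflows.",
)
# `messages` would then be sent to a chat completion endpoint (e.g. ChatGPT),
# and the human analyst reviews the returned candidate causes.
```

In CoHA the LLM output is advisory only: the analyst decides which suggested causes are credible enough to enter the hazard analysis.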
Pages: 410-422
Page count: 13
Related Papers
50 records in total
  • [21] A Surgical Perspective on Large Language Models
    Miller, Robert
    ANNALS OF SURGERY, 2023, 278 (02) : E211 - E213
  • [22] Eliciting metaknowledge in Large Language Models
    Longo, Carmelo Fabio
    Mongiovi, Misael
    Bulla, Luana
    Lieto, Antonio
    COGNITIVE SYSTEMS RESEARCH, 2025, 91
  • [23] Can large language models help augment English psycholinguistic datasets?
    Trott, Sean
    BEHAVIOR RESEARCH METHODS, 2024, 56 (06) : 6082 - 6100
  • [24] Can Euler Diagrams Improve Syllogistic Reasoning in Large Language Models?
    Ando, Risako
    Ozeki, Kentaro
    Morishita, Takanobu
    Abe, Hirohiko
    Mineshima, Koji
    Okada, Mitsuhiro
    DIAGRAMMATIC REPRESENTATION AND INFERENCE, DIAGRAMS 2024, 2024, 14981 : 232 - 248
  • [25] Large language models can segment narrative events similarly to humans
    Michelmann, Sebastian
    Kumar, Manoj
    Norman, Kenneth A.
    Toneva, Mariya
    BEHAVIOR RESEARCH METHODS, 2025, 57 (01)
  • [26] Using Large Language Models to Improve Sentiment Analysis in Latvian Language
    Purvins, Pauls
    Urtans, Evalds
    Caune, Vairis
    BALTIC JOURNAL OF MODERN COMPUTING, 2024, 12 (02): : 165 - 175
  • [27] Can large language models be sensitive to culture suicide risk assessment?
    Levkovich, Inbar
    Shinan-Altman, S.
    Elyoseph, Zohar
    JOURNAL OF CULTURAL COGNITIVE SCIENCE, 2024, 8 (03) : 275 - 287
  • [28] Comparative Analysis of Large Language Models in Source Code Analysis
    Erdogan, Huseyin
    Turan, Nezihe Turhan
    Onan, Aytug
    INTELLIGENT AND FUZZY SYSTEMS, INFUS 2024 CONFERENCE, VOL 1, 2024, 1088 : 185 - 192
  • [29] Large Language Models Can Accomplish Business Process Management Tasks
    Grohs, Michael
    Abb, Luka
    Elsayed, Nourhan
    Rehse, Jana-Rebecca
    BUSINESS PROCESS MANAGEMENT WORKSHOPS, BPM 2023, 2024, 492 : 453 - 465
  • [30] The Language of Creativity: Evidence from Humans and Large Language Models
    Orwig, William
    Edenbaum, Emma R.
    Greene, Joshua D.
    Schacter, Daniel L.
    JOURNAL OF CREATIVE BEHAVIOR, 2024, 58 (01) : 128 - 136