Prompting GPT-4 to support automatic safety case generation

Cited: 1
Authors
Sivakumar, Mithila [1 ]
Belle, Alvine B. [1 ]
Shan, Jinjun [1 ]
Shahandashti, Kimya Khakzad [1 ]
Affiliations
[1] York Univ, Lassonde Sch Engn, 4700 Keele St, Toronto, ON M3J 1P3, Canada
Keywords
Safety cases; Safety assurance; Machine learning; Large language models; Generative AI; Requirements engineering;
DOI
10.1016/j.eswa.2024.124653
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the ever-evolving field of software engineering, the advent of large language models and conversational interfaces, exemplified by ChatGPT, represents a significant revolution. While their potential is evident in various domains, this paper expands upon our previous research, in which we experimented with GPT-4's ability to create safety cases. A safety case is a structured argument, supported by a body of evidence, that demonstrates a given system is safe to operate in a given environment. In this paper, we first determine GPT-4's comprehension of the Goal Structuring Notation (GSN), a well-established notation for visually representing safety cases. We then conduct four distinct experiments using GPT-4 to evaluate its ability to generate safety cases within a specified system and application domain. To assess GPT-4's performance in this context, we compare the results it produces with the ground-truth safety cases developed for an X-ray system, a machine learning-enabled component for tire noise recognition in a vehicle, and a lane management system from the automotive domain. This comparison enables us to gain valuable insights into the model's generative capabilities. Our findings indicate that GPT-4 is able to generate moderately accurate and reasonable safety cases.
Pages: 18
Related Papers
50 records
  • [1] Exploring the capabilities of large language models for the generation of safety cases: the case of GPT-4
    Sivakumar, Mithila
    Belle, Alvine Boaye
    Shan, Jinjun
    Shahandashti, Kimya Khakzad
    32ND INTERNATIONAL REQUIREMENTS ENGINEERING CONFERENCE WORKSHOPS, REW 2024, 2024, : 35 - 45
  • [2] Feedback-Generation for Programming Exercises With GPT-4
    Azaiz, Imen
    Kiesler, Natalie
    Strickroth, Sven
    PROCEEDINGS OF THE 2024 CONFERENCE INNOVATION AND TECHNOLOGY IN COMPUTER SCIENCE EDUCATION, VOL 1, ITICSE 2024, 2024, : 31 - 37
  • [3] GPT-4 as a biomedical simulator
    Schaefer M.
    Reichl S.
    ter Horst R.
    Nicolas A.M.
    Krausgruber T.
    Piras F.
    Stepper P.
    Bock C.
    Samwald M.
    Computers in Biology and Medicine, 2024, 178
  • [4] Assessing the quality of automatic-generated short answers using GPT-4
    Rodrigues L.
    Dwan Pereira F.
    Cabral L.
    Gašević D.
    Ramalho G.
    Ferreira Mello R.
    Computers and Education: Artificial Intelligence, 2024, 7
  • [5] GPT-4 passes the bar exam
    Katz, Daniel Martin
    Bommarito, Michael James
    Gao, Shang
    Arredondo, Pablo
    PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES, 2024, 382 (2270):
  • [6] Generative AI Copilot to Support Safety Analyses of Human-Robot Collaborations: Hazard Operability Analysis and GPT-4
    Kranz, Philipp
    Schirmer, Fabian
    Kaupp, Tobias
    Daun, Marian
    IEEE SOFTWARE, 2024, 41 (06) : 65 - 72
  • [7] Using GPT-4 to Generate Failure Logic
    Clegg, Kester
    Habli, Ibrahim
    McDermid, John
    COMPUTER SAFETY, RELIABILITY, AND SECURITY. SAFECOMP 2024 WORKSHOPS, 2024, 14989 : 148 - 159
  • [8] Text understanding in GPT-4 versus humans
    Shultz, Thomas R.
    Wise, Jamie M.
    Nobandegani, Ardavan S.
    ROYAL SOCIETY OPEN SCIENCE, 2025, 12 (02):
  • [9] Leveraging GPT-4 for food effect summarization to enhance product-specific guidance development via iterative prompting
    Shi, Yiwen
    Ren, Ping
    Wang, Jing
    Han, Biao
    ValizadehAslani, Taha
    Agbavor, Felix
    Zhang, Yi
    Hu, Meng
    Zhao, Liang
    Liang, Hualou
    JOURNAL OF BIOMEDICAL INFORMATICS, 2023, 148
  • [10] GPT-4 is here: what scientists think
    Sanderson, Katharine
    NATURE, 2023, 615 (7954) : 773