PROSPER: Extracting Protocol Specifications Using Large Language Models

Cited by: 5
Authors
Sharma, Prakhar [1 ]
Yegneswaran, Vinod [1 ]
Affiliations
[1] SRI Int, Menlo Pk, CA 94025 USA
Source
PROCEEDINGS OF THE 22ND ACM WORKSHOP ON HOT TOPICS IN NETWORKS, HOTNETS 2023 | 2023
Keywords
Large language models; request for comments; protocol specifications; protocol FSMs; automated extraction
DOI
10.1145/3626111.3628205
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
We explore the application of Large Language Models (LLMs), specifically GPT-3.5-turbo, to extract specifications and automate the understanding of networking protocols from Internet Request for Comments (RFC) documents. LLMs have proven successful in specialized domains such as medical and legal text understanding, and this work investigates their potential for automatically comprehending RFCs. We develop Artifact Miner, a tool that extracts diagram artifacts from RFCs. We then couple the extracted artifacts with natural-language text to extract protocol automata using GPT-3.5-turbo (ChatGPT) and present our zero-shot and few-shot extraction results. We call this FSM-extraction framework 'PROSPER: Protocol Specification Miner'. We compare PROSPER with existing state-of-the-art techniques for extracting protocol FSM states and transitions. Our experiments indicate that employing artifacts alongside text can lower false positives and improve accuracy for both extracted states and transitions. Finally, we discuss efficient prompt-engineering techniques, the errors we encountered, and the pitfalls of using LLMs for knowledge extraction from specialized domains such as RFC documents.
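The abstract describes coupling diagram artifacts with RFC prose in a prompt and asking the LLM for a protocol FSM. A minimal illustrative sketch of that pipeline shape is below; the prompt template, the JSON schema, and the helper functions are assumptions for illustration, not PROSPER's actual implementation (the live model call is omitted, only prompt construction and response parsing are shown):

```python
import json

# Hypothetical zero-shot prompt: couples a diagram artifact with the
# surrounding RFC text, mirroring the artifact+text pairing the paper describes.
PROMPT_TEMPLATE = """You are given an excerpt of an RFC and a diagram artifact.
Extract the protocol finite state machine as JSON with keys
"states" (a list of state names) and "transitions" (a list of
{{"from": ..., "to": ..., "event": ...}} objects).

Artifact:
{artifact}

Text:
{text}
"""

def build_fsm_prompt(artifact: str, text: str) -> str:
    """Fill the template with an extracted artifact and its nearby prose."""
    return PROMPT_TEMPLATE.format(artifact=artifact, text=text)

def parse_fsm_response(reply: str) -> dict:
    """Parse the model's JSON reply; drop transitions that reference states
    the model never declared (a simple guard against hallucinated output)."""
    fsm = json.loads(reply)
    states = set(fsm.get("states", []))
    transitions = [t for t in fsm.get("transitions", [])
                   if t["from"] in states and t["to"] in states]
    return {"states": sorted(states), "transitions": transitions}

# Example with a made-up model reply for a TCP-like handshake fragment.
prompt = build_fsm_prompt("+--------+ SYN +----------+ ...", "A client in CLOSED sends a SYN ...")
reply = json.dumps({
    "states": ["CLOSED", "SYN_SENT", "ESTABLISHED"],
    "transitions": [
        {"from": "CLOSED", "to": "SYN_SENT", "event": "send SYN"},
        {"from": "SYN_SENT", "to": "ESTABLISHED", "event": "recv SYN-ACK"},
        {"from": "SYN_SENT", "to": "LISTEN", "event": "recv RST"},  # undeclared state, filtered out
    ],
})
fsm = parse_fsm_response(reply)
```

The post-hoc filter in `parse_fsm_response` reflects the abstract's concern with false positives: transitions to states the model never listed are one easy-to-detect class of spurious output.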
Pages: 41-47
Page count: 7
Related Papers (50 total)
  • [1] Extracting Software Product Line Feature Models from Natural Language Specifications
    Sree-Kumar, Anjali
    Planas, Elena
    Clariso, Robert
    SPLC'18: PROCEEDINGS OF THE 22ND INTERNATIONAL SYSTEMS AND SOFTWARE PRODUCT LINE CONFERENCE, VOL 1, 2018, : 43 - 53
  • [2] Extracting Implicit User Preferences in Conversational Recommender Systems Using Large Language Models
    Kim, Woo-Seok
    Lim, Seongho
    Kim, Gun-Woo
    Choi, Sang-Min
    MATHEMATICS, 2025, 13 (02)
  • [3] Extracting Domain Models from Textual Requirements in the Era of Large Language Models
    Arulmohan, Sathurshan
    Meurs, Marie-Jean
    Mosser, Sebastien
    2023 ACM/IEEE INTERNATIONAL CONFERENCE ON MODEL DRIVEN ENGINEERING LANGUAGES AND SYSTEMS COMPANION, MODELS-C, 2023, : 580 - 587
  • [4] A Universal Prompting Strategy for Extracting Process Model Information from Natural Language Text Using Large Language Models
    Neuberger, Julian
    Ackermann, Lars
    van der Aa, Han
    Jablonski, Stefan
    CONCEPTUAL MODELING, ER 2024, 2025, 15238 : 38 - 55
  • [6] Dual Adapter Tuning of Vision–Language Models Using Large Language Models
    Mohammad Reza Zarei
    Abbas Akkasi
    Majid Komeili
    International Journal of Computational Intelligence Systems, 18 (1)
  • [7] Using large language models for extracting stressful life events to assess their impact on preventive colon cancer screening adherence
    Scherbakov, Dmitry
    Heider, Paul M.
    Wehbe, Ramsey
    Alekseyenko, Alexander V.
    Lenert, Leslie A.
    Obeid, Jihad S.
    BMC PUBLIC HEALTH, 2025, 25 (01)
  • [8] Using Large Language Models in Business Processes
    Grisold, Thomas
    vom Brocke, Jan
    Kratsch, Wolfgang
    Mendling, Jan
    Vidgof, Maxim
    BUSINESS PROCESS MANAGEMENT, BPM 2023, 2023, 14159 : XXIX - XXXI
  • [9] Accelerating Pharmacovigilance using Large Language Models
    Prakash, Mukkamala Venkata Sai
    Parab, Ganesh
    Veeramalla, Meghana
    Reddy, Siddartha
    Varun, V.
    Gopalakrishnan, Saisubramaniam
    Pagidipally, Vishal
    Vaddina, Vishal
    PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024, 2024, : 1182 - 1183
  • [10] Extracting Legal Norm Analysis Categories from German Law Texts with Large Language Models
    Bachinger, Sarah T.
    Feddoul, Leila
    Mauch, Marianne
    Koenig-Ries, Birgitta
    PROCEEDINGS OF THE 25TH ANNUAL INTERNATIONAL CONFERENCE ON DIGITAL GOVERNMENT RESEARCH, DGO 2024, 2024, : 481 - 493