Using Large Language Models for the Interpretation of Building Regulations

Cited by: 0

Authors
Fuchs, Stefan [1]
Witbrock, Michael [1]
Dimyadi, Johannes [1,2]
Amor, Robert [1]
Affiliations
[1] School of Computer Science, The University of Auckland, 38 Princes Street, Auckland
[2] CAS (Codify Asset Solutions Limited), Auckland
Keywords
automated compliance checking; building codes; GPT; large language models; semantic parsing
DOI
10.32738/JEPPM-2024-0035
Abstract
Compliance checking is an essential part of a construction project. The recent rapid uptake of building information models (BIM) in the construction industry has created more opportunities for automated compliance checking (ACC). BIM enables the sharing of digital building design data that can be checked for compliance with legal requirements, which are conventionally conveyed in natural language and not intended for machine processing. Creating a computable representation of legal requirements suitable for ACC is complex, costly, and time-consuming. Large language models (LLMs) such as the generative pre-trained transformers (GPT) GPT-3.5 and GPT-4, which power OpenAI's ChatGPT, can generate logically coherent text and source code in response to user prompts. This capability could be used to automate the conversion of building regulations into a semantic and computable representation. This paper evaluates the performance of LLMs in translating building regulations into LegalRuleML in a few-shot learning setup. When provided with only a few example translations, GPT-3.5 can learn the basic structure of the format. Using a system prompt, we further specify the LegalRuleML representation and explore the extent of expert domain knowledge in the model. Such domain knowledge may be ingrained in GPT-3.5 through its broad pre-training but needs to be brought forth by careful contextualisation. Finally, we investigate whether strategies such as chain-of-thought reasoning and self-consistency apply to this use case. As LLMs become more sophisticated, their increased common sense, logical coherence, and means of domain adaptation can significantly support ACC, leading to more efficient and effective checking processes. Copyright © Journal of Engineering, Project, and Production Management (EPPM-Journal).
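The few-shot setup the abstract describes — a system prompt fixing the LegalRuleML target format, a handful of example translations, and repeated sampling with a majority vote for self-consistency — can be sketched as follows. This is a minimal illustration, not the authors' actual prompts or pipeline: the prompt text, the example rule, and the clause are invented for demonstration, and the assembled message list would be sent to a chat-completion API (e.g. OpenAI's) which is deliberately omitted here.

```python
from collections import Counter

# Hypothetical system prompt specifying the target representation.
SYSTEM_PROMPT = (
    "You are an expert in legal knowledge representation. "
    "Translate building-regulation clauses into LegalRuleML."
)

# One illustrative (clause, LegalRuleML) pair; a real setup would supply
# several curated example translations.
FEW_SHOT_EXAMPLES = [
    (
        "Every habitable room must have a window.",
        "<lrml:PrescriptiveStatement key='ps1'>"
        "<ruleml:Rule><ruleml:if>habitableRoom(X)</ruleml:if>"
        "<ruleml:then>hasWindow(X)</ruleml:then></ruleml:Rule>"
        "</lrml:PrescriptiveStatement>",
    ),
]

def build_messages(clause: str) -> list[dict]:
    """Assemble chat messages: system prompt, few-shot pairs, then the query."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for source, target in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": source})
        messages.append({"role": "assistant", "content": target})
    messages.append({"role": "user", "content": clause})
    return messages

def self_consistent(outputs: list[str]) -> str:
    """Self-consistency: sample the model several times, keep the most
    frequent output."""
    return Counter(outputs).most_common(1)[0][0]

msgs = build_messages("Stairways must have a handrail on at least one side.")
# msgs would now be passed to a chat-completion endpoint; with
# self-consistency, the same request is sampled N times and
# self_consistent() picks the majority translation.
```

For chain-of-thought reasoning, the system prompt would additionally ask the model to explain the logical structure of the clause before emitting the LegalRuleML, which is again a prompt-level change rather than a code change.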