Using Large Language Models for the Interpretation of Building Regulations

Cited: 0

Authors
Fuchs, Stefan [1 ]
Witbrock, Michael [1 ]
Dimyadi, Johannes [1 ,2 ]
Amor, Robert [1 ]
Affiliations
[1] School of Computer Science, The University of Auckland, 38 Princes Street, Auckland,1010, New Zealand
[2] CAS (Codify Asset Solutions Limited), Auckland, New Zealand
Keywords
Semantics
DOI
10.32738/JEPPM-2024-0035
Abstract
Compliance checking is an essential part of a construction project. The recent rapid uptake of building information models (BIM) in the construction industry has created more opportunities for automated compliance checking (ACC). BIM enables the sharing of digital building design data that can be used to check compliance with legal requirements, which are conventionally conveyed in natural language and not intended for machine processing. Creating a computable representation of legal requirements suitable for ACC is complex, costly, and time-consuming. Large language models (LLMs) such as the generative pre-trained transformers (GPT) GPT-3.5 and GPT-4, which power OpenAI’s ChatGPT, can generate logically coherent text and source code in response to user prompts. This capability could be used to automate the conversion of building regulations into a semantic and computable representation. This paper evaluates the performance of LLMs in translating building regulations into LegalRuleML in a few-shot learning setup. When provided with only a few example translations, GPT-3.5 can learn the basic structure of the format. Using a system prompt, we further specify the LegalRuleML representation and explore the extent of expert domain knowledge in the model. Such domain knowledge might be ingrained in GPT-3.5 through its broad pre-training but needs to be brought forth by careful contextualisation. Finally, we investigate whether strategies such as chain-of-thought reasoning and self-consistency can be applied to this use case. As LLMs become more sophisticated, their increased common sense, logical coherence, and means of domain adaptation can significantly support ACC, leading to more efficient and effective checking processes. Copyright © Journal of Engineering, Project, and Production Management (EPPM-Journal).
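The few-shot setup the abstract describes could be sketched as follows. This is an illustrative sketch only, not the authors' actual prompts or code: the system prompt, the example clause/rule pair, and the `build_messages` and `majority_vote` helpers are all hypothetical, and the LegalRuleML fragment is a simplified placeholder rather than a spec-complete rule.

```python
from collections import Counter

# Hypothetical system prompt fixing the target representation (assumption,
# not the paper's wording).
SYSTEM_PROMPT = (
    "You are an expert in legal knowledge representation. "
    "Translate each building-code clause into LegalRuleML, using a "
    "deontic Obligation for mandatory requirements."
)

# One illustrative (clause, rule) pair; the paper uses its own examples.
FEW_SHOT_EXAMPLES = [
    (
        "Every habitable room must have a window.",
        "<lrml:PrescriptiveStatement><lrml:Obligation>"
        "window(room)</lrml:Obligation></lrml:PrescriptiveStatement>",
    ),
]

def build_messages(clause: str) -> list[dict]:
    """Assemble a chat prompt: system instruction, worked examples, then
    the new clause to translate (standard few-shot chat format)."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for source, target in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": source})
        messages.append({"role": "assistant", "content": target})
    messages.append({"role": "user", "content": clause})
    return messages

def majority_vote(outputs: list[str]) -> str:
    """Self-consistency in its simplest form: sample several completions
    at temperature > 0 and keep the most frequent answer."""
    return Counter(outputs).most_common(1)[0][0]

# With an OpenAI-style client, the messages would be sent as, e.g.:
#   client.chat.completions.create(model="gpt-3.5-turbo",
#       messages=build_messages(clause), n=5, temperature=0.7)
# and majority_vote applied to the n sampled outputs.
```

Sampling several completions and voting trades API cost for robustness; for structured output such as LegalRuleML, votes could alternatively be taken over normalised (whitespace-stripped, canonicalised) rules rather than raw strings.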