DesignQA: A Multimodal Benchmark for Evaluating Large Language Models' Understanding of Engineering Documentation

Cited by: 0
Authors
Doris, Anna C. [1 ]
Grandi, Daniele [2 ]
Tomich, Ryan [3 ]
Alam, Md Ferdous [1 ]
Ataei, Mohammadmehdi [4 ]
Cheong, Hyunmin [4 ]
Ahmed, Faez [1 ]
Affiliations
[1] MIT, Dept Mech Engn, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[2] Autodesk Res, Landmark Market 1, Ste 400, San Francisco, CA 94105 USA
[3] MIT Motorsports, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[4] Autodesk Res, 661 Univ Ave, Toronto, ON M5G 1M1, Canada
Funding
US National Science Foundation;
Keywords
artificial intelligence; data-driven engineering; machine learning for engineering applications;
DOI
10.1115/1.4067333
CLC Number
TP39 [Applications of Computers];
Subject Classification Number
081203; 0835;
Abstract
This research introduces DesignQA, a novel benchmark aimed at evaluating the proficiency of multimodal large language models (MLLMs) in comprehending and applying engineering requirements in technical documentation. Developed with a focus on real-world engineering challenges, DesignQA uniquely combines multimodal data (textual design requirements, CAD images, and engineering drawings) derived from the Formula SAE student competition. Unlike many existing MLLM benchmarks, DesignQA contains document-grounded visual questions in which the input image and the input document come from different sources. The benchmark features automatic evaluation metrics and is divided into three segments (Rule Comprehension, Rule Compliance, and Rule Extraction) based on tasks that engineers perform when designing according to requirements. We evaluate state-of-the-art models (at the time of writing), including GPT-4o, GPT-4, Claude-Opus, Gemini-1.0, and LLaVA-1.5, against the benchmark, and our study uncovers existing gaps in MLLMs' abilities to interpret complex engineering documentation. The MLLMs tested, while promising, struggle to reliably retrieve relevant rules from the Formula SAE documentation, face challenges in recognizing technical components in CAD images, and encounter difficulty in analyzing engineering drawings. These findings underscore the need for multimodal models that can better handle the multifaceted questions characteristic of design according to technical documentation. This benchmark sets a foundation for future advancements in AI-supported engineering design processes. DesignQA is publicly available online.
Pages: 17