Fine-Tuning Large Language Models for Ontology Engineering: A Comparative Analysis of GPT-4 and Mistral

Cited by: 0
Authors
Doumanas, Dimitrios [1 ]
Soularidis, Andreas [1 ]
Spiliotopoulos, Dimitris [2 ]
Vassilakis, Costas [3 ]
Kotis, Konstantinos [1 ]
Affiliations
[1] Univ Aegean, Dept Cultural Technol & Commun, Intelligent Syst Lab, Mitilini 81100, Greece
[2] Univ Peloponnese, Dept Management Sci & Technol, Tripolis 22100, Greece
[3] Univ Peloponnese, Dept Informat & Telecommun, Tripolis 22100, Greece
Source
APPLIED SCIENCES-BASEL | 2025, Vol. 15, No. 4
Keywords
large language models (LLMs); fine-tuning; ontology engineering (OE); domain-specific knowledge; search and rescue (SAR)
DOI
10.3390/app15042146
Abstract
Ontology engineering (OE) plays a critical role in modeling and managing structured knowledge across various domains. This study examines the performance of fine-tuned large language models (LLMs), specifically GPT-4 and Mistral 7B, in efficiently automating OE tasks. Foundational OE textbooks are used as the basis for dataset creation and as input to the LLMs. The methodology involved segmenting texts into manageable chapters, generating question-answer pairs, and translating visual elements into description logic to curate fine-tuning datasets in JSONL format. This research aims to enhance the models' abilities to generate domain-specific ontologies, with the hypotheses that fine-tuned LLMs would outperform base models and that domain-specific datasets would significantly improve their performance. Comparative experiments revealed that GPT-4 demonstrated superior accuracy and adherence to ontology syntax, albeit with higher computational costs. Conversely, Mistral 7B excelled in speed and cost efficiency but struggled with domain-specific tasks, often generating outputs that lacked syntactic precision and relevance. The results highlight the necessity of integrating domain-specific datasets to improve contextual understanding and practical utility in specialized applications, such as Search and Rescue (SAR) missions in wildfire incidents. Both models, despite their limitations, exhibited potential in understanding OE principles. However, their performance underscored the importance of aligning training data with domain-specific knowledge to emulate human expertise effectively. This study, based on and extending our previous work on the topic, concludes that fine-tuning LLMs on targeted datasets enhances their utility in OE, offering insights into improving future models for domain-specific applications. The findings advocate further exploration of hybrid solutions to balance accuracy and efficiency.
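A minimal sketch of the dataset-curation step described in the abstract, assuming a chat-style JSONL schema of the kind accepted by the OpenAI and Mistral fine-tuning APIs; the QA pairs, system prompt, and file name below are hypothetical illustrations, not the paper's actual data:

import json

# Hypothetical QA pairs distilled from an OE textbook chapter, including a
# visual element (a class diagram) translated into description logic (DL).
qa_pairs = [
    {
        "question": "Express in description logic: every SAR mission is "
                    "coordinated by at least one incident commander.",
        "answer": "SARMission ⊑ ≥1 coordinatedBy.IncidentCommander",
    },
    {
        "question": "What does the axiom Wildfire ⊑ Hazard state?",
        "answer": "That Wildfire is a subclass of Hazard: every wildfire "
                  "is a hazard.",
    },
]

# Assumed system prompt; the paper's actual prompt is not reproduced here.
SYSTEM_PROMPT = "You are an ontology engineering assistant for the SAR domain."

# Write one chat-formatted record per line, the messages-based JSONL layout
# used by both providers' fine-tuning endpoints.
with open("oe_finetune.jsonl", "w", encoding="utf-8") as f:
    for pair in qa_pairs:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": pair["question"]},
                {"role": "assistant", "content": pair["answer"]},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

Each line of the resulting file is one self-contained training example, which is what makes JSONL convenient for the segment-then-generate pipeline the authors describe.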
Pages: 34