Semantic role extraction in law texts: a comparative analysis of language models for legal information extraction

Cited by: 0
Authors
Bakker, Roos M. [1 ,2 ]
Schoevers, Akke J. [1 ,3 ]
van Drie, Romy A. N. [1 ]
Schraagen, Marijn P. [3 ]
de Boer, Maaike H. T. [1 ]
Affiliations
[1] TNO, Dept Data Sci, The Hague, Netherlands
[2] Leiden Univ, Ctr Linguist, Leiden, Netherlands
[3] Univ Utrecht, Nat Language Proc, Utrecht, Netherlands
Keywords
Semantic role labelling; Large language models; Legislation; Legal information extraction; Legal semantic roles;
DOI
10.1007/s10506-025-09437-x
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Norms are essential in our society: they dictate how individuals should behave and interact within a community. They can be written down in laws or other written sources. Interpretations often differ; this is where formalisations offer a solution. They express an interpretation of a source of norms in a transparent manner. However, creating these interpretations is labour intensive. Natural language processing techniques can support this process. Previous work showed the potential of transformer-based models for Dutch law texts. In this paper, we (1) introduce a dataset of 2335 English sentences annotated with legal semantic roles conforming to the Flint framework; (2) fine-tune a collection of language models on this dataset; and (3) query two non-fine-tuned generative large language models (LLMs). This allows us to compare the performance of fine-tuned domain-specific, task-specific, and general language models with that of non-fine-tuned generative LLMs. The results show that models fine-tuned on our dataset perform best (accuracy around 0.88). Furthermore, domain-specific models outperform general models, indicating that domain knowledge adds value for this task. Finally, the various methods of querying LLMs perform unsatisfactorily, with maximum accuracy scores around 0.6. This indicates that for specific tasks, such as this adaptation of semantic role labelling, annotating data and fine-tuning a smaller language model is preferable to querying a generative LLM, especially when domain-specific models are available.
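The accuracy figures in the abstract (around 0.88 for fine-tuned models versus around 0.6 for queried LLMs) refer to how often a model assigns the correct semantic role label. A minimal sketch of such an evaluation is shown below; the role inventory and function name are illustrative assumptions, not the paper's actual Flint role set or evaluation code.

```python
# Illustrative sketch: token-level accuracy for semantic role labelling.
# The role labels below are hypothetical, not the actual Flint roles.

def srl_accuracy(gold, predicted):
    """Fraction of tokens whose predicted role matches the gold role."""
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must align token-for-token")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

gold = ["ACTOR", "ACTION", "O", "OBJECT", "RECIPIENT"]
pred = ["ACTOR", "ACTION", "O", "OBJECT", "O"]
print(srl_accuracy(gold, pred))  # → 0.8
```

Under this metric, a fine-tuned model scoring 0.88 labels roughly nine in ten tokens correctly, while an LLM at 0.6 mislabels two in five.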
Pages: 35