Bringing order into the realm of Transformer-based language models for artificial intelligence and law

Cited by: 9
Authors
Greco, Candida M. [1 ]
Tagarelli, Andrea [1 ]
Affiliations
[1] University of Calabria, Department of Computer Engineering, Modeling, Electronics, and Systems Engineering (DIMES), I-87036 Arcavacata di Rende (CS), Italy
Keywords
Language models; BERT; GPT; Legal search; Legal document review; Legal outcome prediction; Retrieval; Entailment; Inference; Caselaw data; Statutory law data; Benchmarks; AI for law
DOI
10.1007/s10506-023-09374-7
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Transformer-based language models (TLMs) have been widely recognized as a cutting-edge technology for developing successful deep-learning-based solutions to problems and applications that require natural language processing and understanding. As in other textual domains, TLMs have pushed the state of the art of AI approaches for many tasks of interest in the legal domain. Although the first Transformer model was proposed only about six years ago, this technology has progressed at an unprecedented rate, with BERT and related models serving as a major reference, including in the legal domain. This article provides the first systematic overview of TLM-based methods for AI-driven problems and tasks in the legal sphere. A major goal is to highlight research advances in this field so as to understand, on the one hand, how Transformers have contributed to the success of AI in supporting legal processes, and on the other hand, what the current limitations and opportunities for further research development are.
Pages: 863-1010
Page count: 148