Bringing order into the realm of Transformer-based language models for artificial intelligence and law

Cited by: 9
Authors
Greco, Candida M. [1 ]
Tagarelli, Andrea [1 ]
Affiliations
[1] University of Calabria, Department of Computer Engineering, Modeling, Electronics and Systems Engineering (DIMES), I-87036 Arcavacata di Rende (CS), Italy
Keywords
Language models; BERT; GPT; Legal search; Legal document review; Legal outcome prediction; Retrieval; Entailment; Inference; Caselaw data; Statutory law data; Benchmarks; AI for law
DOI
10.1007/s10506-023-09374-7
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Transformer-based language models (TLMs) are widely recognized as a cutting-edge technology for developing deep-learning-based solutions to problems and applications that require natural language processing and understanding. As in other textual domains, TLMs have pushed the state of the art of AI approaches for many tasks of interest in the legal domain. Although the first Transformer model was proposed only about six years ago, this technology has progressed at an unprecedented rate, with BERT and related models serving as a major reference, also in the legal domain. This article provides the first systematic overview of TLM-based methods for AI-driven problems and tasks in the legal sphere. A major goal is to highlight research advances in this field so as to understand, on the one hand, how Transformers have contributed to the success of AI in supporting legal processes and, on the other hand, what the current limitations and opportunities for further research development are.
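To make the surveyed setting concrete, the following is a minimal, hypothetical Python sketch (not taken from the article) of applying a legal-domain Transformer encoder to a passage. It assumes the Hugging Face transformers library and the public LEGAL-BERT checkpoint nlpaueb/legal-bert-base-uncased; the two labels and the input sentence are illustrative assumptions, and the classification head remains randomly initialized until fine-tuned on a task such as legal outcome prediction.

    # Hypothetical sketch: scoring a legal passage with a legal-domain BERT encoder.
    # Assumes transformers and torch are installed; the checkpoint name and the
    # binary labels are illustrative, not prescribed by the surveyed article.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL_NAME = "nlpaueb/legal-bert-base-uncased"  # public legal-domain BERT

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # num_labels=2 attaches a fresh (untrained) binary classification head.
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
    model.eval()

    passage = "The appellant contends that the arbitration clause is unenforceable."
    inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 2)

    probs = torch.softmax(logits, dim=-1)
    print(probs)  # near-uniform until the head is fine-tuned

In practice, such a model would first be fine-tuned on labeled caselaw or statutory data for the target task (e.g., outcome prediction or entailment) before its scores become meaningful.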
Pages: 863-1010
Number of pages: 148