GPT-4 passes the bar exam

Cited by: 83
Authors
Katz, Daniel Martin [1 ,2 ,3 ,4 ]
Bommarito, Michael James [1 ,2 ,3 ,4 ]
Gao, Shang [5 ]
Arredondo, Pablo [2 ,5 ]
Affiliations
[1] Chicago Kent Coll Law, Illinois Tech, Chicago, IL 60661 USA
[2] Stanford Ctr Legal Informat, CodeX, Stanford, CA USA
[3] Bucerius Law Sch, Hamburg, Germany
[4] 273 Ventures LLC, Woburn, MA USA
[5] Casetext Inc, Herndon, VA USA
Source
PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES | 2024, Vol. 382, No. 2270
Keywords
large language models; Bar Exam; GPT-4; legal services; legal complexity; legal language; LAW;
DOI
10.1098/rsta.2023.0254
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject classification codes
07 ; 0710 ; 09 ;
Abstract
In this paper, we experimentally evaluate the zero-shot performance of GPT-4 against prior generations of GPT on the entire uniform bar examination (UBE), including not only the multiple-choice multistate bar examination (MBE), but also the open-ended multistate essay exam (MEE) and multistate performance test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0, compared with much lower scores for ChatGPT. Graded across the UBE components in the manner in which a human test-taker would be, GPT-4 scores approximately 297 points, significantly in excess of the passing threshold for all UBE jurisdictions. These findings document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society. This article is part of the theme issue 'A complexity science approach to law and governance'.
Pages: 17