Context-Aware Classification of Legal Document Pages

Cited by: 1
Authors
Fragkogiannis, Pavlos [1 ]
Forster, Martina [2 ]
Lee, Grace E. [1 ]
Zhang, Dell [1 ]
Affiliations
[1] Thomson Reuters Labs, London, England
[2] Thomson Reuters Labs, Zug, Switzerland
Source
PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023 | 2023
Keywords
document page classification; neural networks; transformers
DOI
10.1145/3539618.3591839
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
For many business applications that require processing, indexing, and retrieving professional documents such as legal briefs (e.g., in PDF format), it is often essential to first classify the pages of a given document into their corresponding types. Most existing studies in document image classification either focus on single-page documents or treat the pages of a multi-page document independently. Although a few techniques have recently been proposed to exploit context information from neighboring pages to enhance document page classification, they typically cannot be combined with large pre-trained language models because of the constraint on input length. In this paper, we present a simple but effective approach that overcomes this limitation. Specifically, we enhance the input with extra tokens carrying sequential information about previous pages - introducing recurrence - which enables the use of pre-trained Transformer models such as BERT for context-aware page classification. Our experiments on two legal datasets, in English and Portuguese respectively, show that the proposed approach significantly improves document page classification performance compared to the non-recurrent setup as well as the other context-aware baselines.
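The core idea in the abstract - reserving part of a fixed-length Transformer input for tokens that summarize previous pages - can be illustrated with a toy sketch. This is not the authors' implementation: `build_input`, the token-level "memory", and the constants below are all illustrative assumptions (the paper works with learned representations and a BERT-sized input, not raw tokens copied forward).

```python
# Toy sketch (assumptions, not the paper's code): prepend "memory" tokens
# summarizing earlier pages so each page's fixed-length input still carries
# cross-page context, processed sequentially (recurrence).

MAX_LEN = 16   # tiny for illustration; BERT would typically allow 512
N_MEM = 4      # slots reserved to carry previous-page context forward

def build_input(page_tokens, prev_memory=None):
    """Return (input_tokens, new_memory) for one page.

    prev_memory: tokens summarizing earlier pages (None for the first page).
    The memory is inserted right after [CLS], and the page text is truncated
    so the total length never exceeds MAX_LEN.
    """
    memory = prev_memory or []
    budget = MAX_LEN - 2 - len(memory)   # room left after [CLS]/[SEP]
    body = page_tokens[:budget]
    input_tokens = ["[CLS]"] + memory + body + ["[SEP]"]
    # Recurrence: carry the first N_MEM body tokens forward as a crude
    # stand-in for a learned summary of this page.
    new_memory = body[:N_MEM]
    return input_tokens, new_memory

# Process a two-page document sequentially.
page1 = ["court", "order", "granting", "motion"]
page2 = ["signature", "page", "of", "the", "order"]
inp1, mem = build_input(page1)        # first page: no prior context
inp2, _ = build_input(page2, mem)     # second page sees page-1 memory
```

In the paper's setup the carried-forward information is a representation produced by the model itself rather than literal tokens, but the length bookkeeping is the same: context consumes part of the input budget, so the page body must be truncated accordingly.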
Pages: 3285-3289
Page count: 5