Supervised Contrastive Learning for Interpretable Long-Form Document Matching

Cited by: 3
Authors
Jha, Akshita [1 ]
Rakesh, Vineeth [2 ]
Chandrashekar, Jaideep [2 ]
Samavedhi, Adithya [1 ]
Reddy, Chandan K. [1 ]
Affiliations
[1] Virginia Tech, 900 N Glebe Rd, Arlington, VA 22203 USA
[2] InterDigital, 4410 El Camino Real Suite 120, Los Altos, CA 94022 USA
Funding
National Science Foundation (US);
Keywords
Semantic text matching; long documents; contrastive learning; attention; embeddings; interpretability; transformer; BERT;
DOI
10.1145/3542822
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
Recent advancements in deep learning techniques have transformed the area of semantic text matching (STM). However, most state-of-the-art models are designed to operate with short documents such as tweets, user reviews, comments, and so on. These models have fundamental limitations when applied to long-form documents such as scientific papers, legal documents, and patents. When handling such long documents, there are three primary challenges: (i) the presence of different contexts for the same word throughout the document, (ii) small sections of contextually similar text between two documents, but dissimilar text in the remaining parts (this defies the basic understanding of "similarity"), and (iii) the coarse nature of a single global similarity measure which fails to capture the heterogeneity of the document content. In this article, we describe CoLDE: Contrastive Long Document Encoder, a transformer-based framework that addresses these challenges and allows for interpretable comparisons of long documents. CoLDE uses unique positional embeddings and a multi-headed chunkwise attention layer in conjunction with a supervised contrastive learning framework to capture similarity at three different levels: (i) high-level similarity scores between a pair of documents, (ii) similarity scores between different sections within and across documents, and (iii) similarity scores between different chunks in the same document and across other documents. These fine-grained similarity scores aid in better interpretability. We evaluate CoLDE on three long document datasets, namely, ACL Anthology publications, Wikipedia articles, and USPTO patents. Besides outperforming the state-of-the-art methods on the document matching task, CoLDE is also robust to changes in document length and text perturbations and provides interpretable results. The code for the proposed model is publicly available at https://github.com/InterDigitalInc/CoLDE.
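The supervised contrastive objective the abstract refers to can be sketched as follows. This is a generic SupCon-style loss over (e.g., chunk or document) embeddings, not the authors' exact implementation; the function name, the `temperature` value, and the batch layout are illustrative assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    """SupCon-style loss: for each anchor, embeddings sharing its label
    (positives) are pulled together relative to all other embeddings in
    the batch. Lower loss means same-label vectors are more similar."""
    # L2-normalize so that dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # pairwise scaled similarities
    n = len(labels)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        others = [a for a in range(n) if a != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        # Average negative log-likelihood over this anchor's positives.
        total += -np.mean([sim[i, p] - log_denom for p in positives])
        anchors += 1
    return total / anchors
```

With two well-separated label clusters the loss is near zero; mixing the clusters drives it up, which is the pressure CoLDE exploits to make similar chunks cluster in embedding space.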
Pages: 17
  • [10] Devlin J, 2019, 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, P4171