MeaningBERT: assessing meaning preservation between sentences

Cited by: 2
Authors
Beauchemin, David [1 ]
Saggion, Horacio [2 ]
Khoury, Richard [1 ]
Affiliations
[1] Univ Laval, Dept Comp Sci & Software Engn, Grp Res Artificial Intelligence, Quebec City, PQ, Canada
[2] Univ Pompeu Fabra, Dept Informat & Commun Technol, Large Scale Text Understanding Syst Lab, Nat Language Proc Grp, Barcelona, Spain
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE | 2023 / Vol. 6
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
evaluation of text simplification systems; meaning preservation; automatic text simplification; lexical simplification; syntactic simplification; few-shot evaluation of text simplification systems;
DOI
10.3389/frai.2023.1223924
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the field of automatic text simplification, assessing whether the meaning of the original text has been preserved during simplification is of paramount importance. Metrics relying on n-gram overlap may struggle with simplifications that replace complex phrases with simpler paraphrases. Evaluation metrics for meaning preservation based on large language models (LLMs) have been proposed, such as BertScore in machine translation and QuestEval in summarization. However, none correlates strongly with human judgment of meaning preservation. Moreover, such metrics have not been assessed in the context of text simplification research. In this study, we present a meta-evaluation of several metrics we apply to measure content similarity in text simplification. We also show that these metrics are unable to pass two trivial, inexpensive content preservation tests. Another contribution of this study is MeaningBERT (https://github.com/GRAAL-Research/MeaningBERT), a new trainable metric designed to assess meaning preservation between two sentences in text simplification, and we show how it correlates with human judgment. To demonstrate its quality and versatility, we also present a compilation of datasets used to assess meaning preservation and benchmark our study against a large selection of popular metrics.
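The abstract mentions two trivial, inexpensive content preservation tests that existing metrics fail to pass. A minimal sketch of what such sanity checks could look like, assuming a 0–100 scoring scale and using a simple token-overlap ratio as a hypothetical stand-in metric (not MeaningBERT itself; the function names and test pairs below are illustrative assumptions):

```python
def token_overlap_score(src: str, tgt: str) -> float:
    """Toy stand-in metric (NOT MeaningBERT): Jaccard token overlap on a 0-100 scale."""
    a, b = set(src.lower().split()), set(tgt.lower().split())
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)

def identical_sentence_test(metric, sentences, tol=1.0) -> bool:
    # Sanity check 1: an identical pair fully preserves meaning,
    # so the score should be (near) the maximum of 100.
    return all(abs(metric(s, s) - 100.0) <= tol for s in sentences)

def unrelated_sentence_test(metric, pairs, tol=5.0) -> bool:
    # Sanity check 2: an unrelated pair preserves no meaning,
    # so the score should be (near) the minimum of 0.
    return all(metric(a, b) <= tol for a, b in pairs)

sentences = ["The cat sat on the mat."]
unrelated = [("The cat sat on the mat.", "Quarterly revenue grew by eight percent.")]

print(identical_sentence_test(token_overlap_score, sentences))  # True
print(unrelated_sentence_test(token_overlap_score, unrelated))  # True
```

The point of checks like these is that they require no human annotation: any metric claiming to measure meaning preservation should trivially satisfy both, yet the paper reports that the evaluated metrics do not.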
Pages: 10