Cross-level semantic similarity: an evaluation framework for universal measures of similarity

Cited by: 9
Authors
Jurgens, David [1 ]
Pilehvar, Mohammad Taher [2 ]
Navigli, Roberto [2 ]
Affiliations
[1] McGill Univ, Montreal, PQ, Canada
[2] Univ Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Rome, Italy
Funding
European Research Council;
Keywords
Similarity; Evaluation; Semantic textual similarity;
DOI
10.1007/s10579-015-9318-3
CLC number
TP39 [Computer Applications];
Subject classification codes
081203; 0835;
Abstract
Semantic similarity has typically been measured across items of approximately similar sizes. As a result, similarity measures have largely ignored the fact that different types of linguistic items can potentially have similar or even identical meanings, and are therefore designed to compare only one type of linguistic item. Furthermore, nearly all current similarity benchmarks within NLP contain pairs of approximately the same size, such as word or sentence pairs, preventing the evaluation of methods that are capable of comparing items of different sizes. To address this, we introduce a new semantic evaluation called cross-level semantic similarity (CLSS), which measures the degree to which the meaning of a larger linguistic item, such as a paragraph, is captured by a smaller item, such as a sentence. Our pilot CLSS task was presented as part of SemEval-2014, where it attracted 19 teams who submitted 38 systems. The CLSS data contains a rich mixture of pairs, spanning from paragraphs to word senses, to fully evaluate similarity measures that are capable of comparing items of any type. Furthermore, data sources were drawn from diverse corpora beyond just newswire, including domain-specific texts and social media. We describe the annotation process and its challenges, including a comparison with crowdsourcing, and identify the factors that make the dataset a rigorous assessment of a method's quality. Furthermore, we examine in detail the systems participating in the SemEval task to identify the common factors associated with high performance and the aspects that proved difficult for all systems. Our findings demonstrate that CLSS poses a significant challenge for similarity methods and provides clear directions for future work on universal similarity methods that can compare any pair of items.
Pages: 5-33
Page count: 29