Cross-level semantic similarity: an evaluation framework for universal measures of similarity

Cited by: 9
Authors
Jurgens, David [1 ]
Pilehvar, Mohammad Taher [2 ]
Navigli, Roberto [2 ]
Affiliations
[1] McGill Univ, Montreal, PQ, Canada
[2] Univ Roma La Sapienza, Piazzale Aldo Moro 5, I-00185 Rome, Italy
Funding
European Research Council;
Keywords
Similarity; Evaluation; Semantic textual similarity;
DOI
10.1007/s10579-015-9318-3
CLC number
TP39 [Computer Applications];
Subject classification codes
081203; 0835;
Abstract
Semantic similarity has typically been measured across items of approximately similar sizes. As a result, similarity measures have largely ignored the fact that different types of linguistic item can potentially have similar or even identical meanings, and are therefore designed to compare only one type of linguistic item. Furthermore, nearly all current similarity benchmarks within NLP contain pairs of approximately the same size, such as word or sentence pairs, preventing the evaluation of methods that are capable of comparing items of different sizes. To address this, we introduce a new semantic evaluation called cross-level semantic similarity (CLSS), which measures the degree to which the meaning of a larger linguistic item, such as a paragraph, is captured by a smaller item, such as a sentence. Our pilot CLSS task was presented as part of SemEval-2014, where it attracted 19 teams who submitted 38 systems. The CLSS data contains a rich mixture of pairs, spanning from paragraphs to word senses, in order to fully evaluate similarity measures that are capable of comparing items of any type. Furthermore, data sources were drawn from diverse corpora beyond just newswire, including domain-specific texts and social media. We describe the annotation process and its challenges, including a comparison with crowdsourcing, and identify the factors that make the dataset a rigorous assessment of a method's quality. We also examine in detail the systems participating in the SemEval task to identify the common factors associated with high performance and the aspects that proved difficult for all systems. Our findings demonstrate that CLSS poses a significant challenge for similarity methods and provides clear directions for future work on universal similarity methods that can compare any pair of items.
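To make the task setup concrete: a CLSS system receives a larger item (e.g., a paragraph) and a smaller item (e.g., a sentence) and outputs a graded similarity score. The sketch below is not a system from the paper; it is a minimal, purely lexical baseline, assuming a bag-of-words cosine as the scoring function, with `clss_score` and the example texts invented for illustration. Real participating systems used far richer semantic representations.

```python
from collections import Counter
from math import sqrt


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def clss_score(larger: str, smaller: str) -> float:
    """Naive cross-level score: how much of the larger item's wording
    the smaller item shares (a lexical proxy, not true semantics)."""
    return cosine_similarity(
        Counter(larger.lower().split()),
        Counter(smaller.lower().split()),
    )


# Hypothetical paragraph-to-sentence pair in the spirit of the task.
paragraph = (
    "the committee approved the new budget after a long debate, "
    "allocating most funds to public transportation projects."
)
sentence = "the committee approved a budget focused on public transportation."
print(round(clss_score(paragraph, sentence), 2))
```

A baseline like this fails exactly where CLSS is designed to be hard: paraphrases with no word overlap score zero, which is why the task rewards methods with genuinely semantic, size-independent representations.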
Pages: 5-33
Number of pages: 29