Efficacy of Deep Neural Embeddings-Based Semantic Similarity in Automatic Essay Evaluation

Cited by: 1
Authors
Hendre, Manik [1 ]
Mukherjee, Prasenjit [1 ]
Preet, Raman [1 ]
Godse, Manish [2 ]
Affiliations
[1] Ramanbyte Pvt Ltd, Pune, Maharashtra, India
[2] Pune Inst Business Management, Pune, Maharashtra, India
Keywords
ELMo; Embedding; Essay Grading; Global Vectors; Semantic Similarity; Sentence Encoder
DOI
10.4018/IJCINI.323190
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Semantic similarity is used extensively for understanding the context and meaning of text data. In this paper, the use of semantic similarity in an automatic essay evaluation system is proposed. Different text embedding methods are used to compute semantic similarity: recent neural embedding methods, including the Google sentence encoder (GSE), Embeddings from Language Models (ELMo), and Global Vectors (GloVe), as well as traditional text representations such as TF-IDF and the Jaccard index. Experimental analysis of intra-class and inter-class semantic similarity score distributions shows that GSE outperforms the other methods by accurately distinguishing essays drawn from the same set/topic from those drawn from different ones. Semantic similarity computed with GSE is further correlated with human-rated essay scores and shows high correlation across various essay traits.
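As a minimal sketch of the two baseline measures named in the abstract (TF-IDF cosine similarity and the Jaccard index), the Python snippet below compares a pair of essays. The scikit-learn tooling, the function names tfidf_cosine and jaccard_index, and the sample texts are illustrative assumptions, not taken from the paper; the neural methods (GSE, ELMo, GloVe) would replace the sparse TF-IDF vectors with dense sentence embeddings before the cosine step.

# Baseline semantic-similarity sketch: TF-IDF cosine and Jaccard index.
# Assumes scikit-learn; all names here are illustrative, not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def tfidf_cosine(essay_a: str, essay_b: str) -> float:
    """Cosine similarity between the TF-IDF vectors of two essays."""
    vectors = TfidfVectorizer().fit_transform([essay_a, essay_b])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])


def jaccard_index(essay_a: str, essay_b: str) -> float:
    """Jaccard index over the two essays' lowercase token sets."""
    tokens_a = set(essay_a.lower().split())
    tokens_b = set(essay_b.lower().split())
    if not tokens_a and not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


if __name__ == "__main__":
    a = "The essay discusses the impact of computers on library services."
    b = "Computers have changed how libraries serve their patrons."
    print(f"TF-IDF cosine: {tfidf_cosine(a, b):.3f}")
    print(f"Jaccard index: {jaccard_index(a, b):.3f}")

In the paper's setting, such pairwise scores are aggregated within a topic (intra-class) and across topics (inter-class) to compare how well each representation separates the two distributions.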
Pages: 14