Combining Contrastive Learning and Sequence Learning for Automated Essay Scoring

Cited: 0
Authors
Wang, XiaoYi [1 ]
Liu, Jie [1 ,2 ]
Zhou, Jianshe [1 ]
Wang, Jiong [1]
Affiliations
[1] Capital Normal Univ, Beijing, Peoples R China
[2] North China Univ Technol, Beijing, Peoples R China
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2024, PT IX | 2024 / Vol. 15024
Funding
National Natural Science Foundation of China;
Keywords
Automated essay scoring; Data augmentation; Contrastive learning;
DOI
10.1007/978-3-031-72356-8_1
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The objective of automated essay scoring (AES) is to use artificial intelligence techniques to automate the scoring process and minimize the impact of subjective factors on grading. Previous works tend to treat AES solely as a regression or a classification task, without integrating the two. In addition, neural networks trained on limited samples often perform poorly at capturing the deep semantics of texts. To improve AES performance, we propose a novel approach that combines contrastive learning with sequence learning, effectively integrating a regression loss and a classification loss. This paper employs a variety of data augmentation techniques to construct negative samples suitable for contrastive learning, aiming to alleviate the inherent sample imbalance in essay datasets. We further use sequence learning for essay scoring, incorporating an empirical distribution derived from the overall distribution of the essay dataset to address the unbalanced predictions caused by sample imbalance. Experimental results demonstrate that the proposed multi-task learning framework outperforms the single-task learning framework in improving the effectiveness of automated essay scoring.
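As a rough illustration of the multi-task objective described in the abstract, the sketch below combines a regression loss, a classification loss over discrete score bins, and a SimCSE-style contrastive loss over augmented essay embeddings. The abstract does not specify the encoder, loss weights, or augmentation scheme, so every name, dimension, and weight here (MultiTaskAESHead, hidden_dim=768, num_bins=11, w_reg, w_cls, w_con) is an assumption for demonstration, not the authors' implementation.

# Minimal sketch, assuming a transformer-style essay encoder whose pooled
# output is already available as a fixed-size embedding per essay.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskAESHead(nn.Module):
    """Toy scoring head: a scalar score (regression) and a score-bin
    distribution (classification) from one essay embedding."""
    def __init__(self, hidden_dim=768, num_bins=11):
        super().__init__()
        self.regressor = nn.Linear(hidden_dim, 1)        # continuous score
        self.classifier = nn.Linear(hidden_dim, num_bins)  # discrete score bins

    def forward(self, essay_emb):
        return self.regressor(essay_emb).squeeze(-1), self.classifier(essay_emb)

def info_nce(anchor, positive, temperature=0.1):
    """SimCSE-style contrastive loss: each essay should match its own
    augmented view; the other essays in the batch act as negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                     # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def combined_loss(head, emb, emb_aug, score, score_bin,
                  w_reg=1.0, w_cls=1.0, w_con=0.5):
    """Weighted sum of regression (MSE), classification (CE), and
    contrastive (InfoNCE) terms; the weights are illustrative only."""
    pred_score, pred_logits = head(emb)
    loss_reg = F.mse_loss(pred_score, score)
    loss_cls = F.cross_entropy(pred_logits, score_bin)
    loss_con = info_nce(emb, emb_aug)
    return w_reg * loss_reg + w_cls * loss_cls + w_con * loss_con

if __name__ == "__main__":
    # Random stand-ins for encoder outputs of original and augmented essays.
    head = MultiTaskAESHead()
    emb = torch.randn(8, 768)                        # 8 essay embeddings
    emb_aug = emb + 0.01 * torch.randn_like(emb)     # augmented views
    score = torch.rand(8) * 10                       # gold holistic scores
    score_bin = score.long().clamp(max=10)           # gold score bins
    print(combined_loss(head, emb, emb_aug, score, score_bin).item())

In this sketch the augmented view of each essay serves as its positive and the rest of the batch as negatives, which is one common way to build contrastive pairs from data augmentation; the paper's actual pairing strategy may differ.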
Pages: 3-18
Number of pages: 16