Enhancing Chinese Essay Discourse Logic Evaluation Through Optimized Fine-Tuning of Large Language Models

Cited: 0
Authors
Song, Jinwang [1 ]
Song, Yanxin [1 ]
Zhou, Guangyu [1 ]
Fu, Wenhui [1 ]
Zhang, Kunli [1 ]
Zan, Hongying [1 ]
Affiliations
[1] Zhengzhou Univ, Zhengzhou, Peoples R China
Source
NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT V, NLPCC 2024 | 2025, Vol. 15363
Keywords
Essay Evaluation; Large Language Models; Natural Language Processing;
DOI
10.1007/978-981-97-9443-0_30
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to the high complexity and diversity of writing, automated essay evaluation systems face significant challenges. Large language models (LLMs), which represent the state of the art in semantic understanding within NLP, hold immense potential for advancing essay evaluation systems. In NLPCC 2024 Shared Task 4, Chinese Essay Discourse Logic Evaluation and Integration, we investigated improving LLMs' capabilities in evaluating essay logic, coherence, and quality. Considering the characteristics of the different subtasks, we adopted MRC-style instructions to optimize output formats and applied undersampling to address data imbalance. To improve efficiency and model performance, we explored LLM fine-tuning methods that decouple tasks and used similarity comparison to refine model outputs. Additionally, we employed noisy embedding fine-tuning to mitigate overfitting. Our approach achieved the top ranking in NLPCC 2024 Shared Task 4.
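The noisy embedding fine-tuning mentioned in the abstract matches the NEFTune scheme, in which scaled uniform noise is added to the token embeddings during training. The sketch below is an illustrative assumption only (the function name, the `alpha` value, and the tensor shapes are ours, not the authors'); it shows the core noise step, not the paper's actual training code.

```python
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Add NEFTune-style uniform noise to token embeddings during training.

    embeddings: (batch, seq_len, dim) output of the model's embedding layer.
    The noise magnitude is alpha / sqrt(seq_len * dim), so longer sequences
    and wider embeddings receive proportionally smaller per-element noise.
    """
    _, seq_len, dim = embeddings.shape
    scale = alpha / (seq_len * dim) ** 0.5
    noise = torch.zeros_like(embeddings).uniform_(-scale, scale)
    return embeddings + noise

# Example: perturb a dummy embedding batch (shapes are arbitrary).
emb = torch.randn(2, 16, 64)
noisy = neftune_noise(emb, alpha=5.0)
```

At inference time the noise is simply omitted, so the technique changes only the training loop, not the deployed model.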
Pages: 342-352
Page count: 11
Related Papers
50 records total
  • [41] Unveiling the Power of Large Language Models: A Comparative Study of Retrieval-Augmented Generation, Fine-Tuning, and Their Synergistic Fusion for Enhanced Performance
    Budakoglu, Gulsum
    Emekci, Hakan
    IEEE ACCESS, 2025, 13 : 30936 - 30951
  • [42] Enhancing healthcare resource allocation through large language models
    Wan, Fang
    Wang, Kezhi
    Wang, Tao
    Qin, Hu
    Fondrevelle, Julien
    Duclos, Antoine
    SWARM AND EVOLUTIONARY COMPUTATION, 2025, 94
  • [43] Enhancing Large Language Models Through External Domain Knowledge
    Welz, Laslo
    Lanquillon, Carsten
    ARTIFICIAL INTELLIGENCE IN HCI, PT III, AI-HCI 2024, 2024, 14736 : 135 - 146
  • [44] adaptMLLM: Fine-Tuning Multilingual Language Models on Low-Resource Languages with Integrated LLM Playgrounds
    Lankford, Seamus
    Afli, Haithem
    Way, Andy
    INFORMATION, 2023, 14 (12)
  • [45] CALLM: Enhancing Clinical Interview Analysis Through Data Augmentation With Large Language Models
    Wu, Yuqi
    Mao, Kaining
    Zhang, Yanbo
    Chen, Jie
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (12) : 7531 - 7542
  • [46] Enhancing Complex Linguistic Tasks Resolution Through Fine-Tuning LLMs, RAG and Knowledge Graphs (Short Paper)
    Bianchini, Filippo
    Calamo, Marco
    De Luzi, Francesca
    Macri, Mattia
    Mecella, Massimo
    ADVANCED INFORMATION SYSTEMS ENGINEERING WORKSHOPS, CAISE 2024, 2024, 521 : 147 - 155
  • [47] Matching tasks to objectives: Fine-tuning and prompt-tuning strategies for encoder-decoder pre-trained language models
    Pouramini, Ahmad
    Faili, Hesham
    APPLIED INTELLIGENCE, 2024, 54 (20) : 9783 - 9810
  • [48] OpenMedLM: prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models
    Maharjan, Jenish
    Garikipati, Anurag
    Singh, Navan Preet
    Cyrus, Leo
    Sharma, Mayank
    Ciobanu, Madalina
    Barnes, Gina
    Thapa, Rahul
    Mao, Qingqing
    Das, Ritankar
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [50] Domain-specific large language models for fault diagnosis of heating, ventilation, and air conditioning systems by labeled-data-supervised fine-tuning
    Zhang, Jian
    Zhang, Chaobo
    Lu, Jie
    Zhao, Yang
    APPLIED ENERGY, 2025, 377