Enhancing Chinese Essay Discourse Logic Evaluation Through Optimized Fine-Tuning of Large Language Models

Cited by: 0
Authors
Song, Jinwang [1 ]
Song, Yanxin [1 ]
Zhou, Guangyu [1 ]
Fu, Wenhui [1 ]
Zhang, Kunli [1 ]
Zan, Hongying [1 ]
Affiliations
[1] Zhengzhou University, Zhengzhou, People's Republic of China
Keywords
Essay Evaluation; Large Language Models; Natural Language Processing
DOI
10.1007/978-981-97-9443-0_30
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Due to the high complexity and diversity of writing, automated essay evaluation systems face significant challenges. Large language models (LLMs), which represent the state of the art in semantic understanding within NLP, hold immense potential for advancing essay evaluation systems. In NLPCC 2024 Shared Task 4, Chinese Essay Discourse Logic Evaluation and Integration, we investigated how to improve LLMs' ability to evaluate essay logic, coherence, and quality. To suit the characteristics of the different subtasks, we adopted machine reading comprehension (MRC)-style instructions to optimize output formats and applied undersampling to address data imbalance. To improve efficiency and model performance, we explored fine-tuning methods that decouple the subtasks and used similarity comparison to refine model outputs. We also employed noisy embedding fine-tuning to mitigate overfitting. Our approach achieved the top ranking in NLPCC 2024 Shared Task 4.
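
Of the techniques the abstract names, noisy embedding fine-tuning (NEFTune) is the most self-contained, so a minimal PyTorch sketch of the general idea follows. It assumes a Hugging Face-style model exposing get_input_embeddings(); the hook name, the noise scale noise_alpha=5.0, and the attachment pattern are illustrative assumptions, not the authors' reported configuration.

    import torch
    from torch import nn

    def neftune_hook(module: nn.Module, inputs, output: torch.Tensor,
                     noise_alpha: float = 5.0) -> torch.Tensor:
        # NEFTune: during training, perturb token embeddings with noise
        # drawn from Uniform(-1, 1), scaled by alpha / sqrt(L * d),
        # where L is the sequence length and d the embedding dimension.
        if module.training:
            seq_len, dim = output.size(1), output.size(2)
            mag = noise_alpha / (seq_len * dim) ** 0.5
            output = output + torch.empty_like(output).uniform_(-mag, mag)
        return output

    # `model` is a placeholder for the LLM being fine-tuned; on Hugging Face
    # models, get_input_embeddings() returns the token embedding module.
    # handle = model.get_input_embeddings().register_forward_hook(neftune_hook)
    # ... run the usual supervised fine-tuning loop ...
    # handle.remove()  # detach the hook before evaluation or inference

Because the hook checks module.training, the perturbation is applied only during fine-tuning; evaluation and inference pass through clean embeddings.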
Pages: 342-352 (11 pages)