Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

Cited by: 0
Authors
Chen, Zixiang [1 ]
Deng, Yihe [1 ]
Yuan, Huizhuo [1 ]
Ji, Kaixuan [1 ]
Gu, Quanquan [1 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90095 USA
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING | 2024 / Vol. 235
Keywords
GAME;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Harnessing the power of human-annotated data through Supervised Fine-Tuning (SFT) is pivotal for advancing Large Language Models (LLMs). In this paper, we delve into the prospect of growing a strong LLM out of a weak one without the need for acquiring additional human-annotated data. We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN), which starts from a supervised fine-tuned model. At the heart of SPIN lies a self-play mechanism, where the LLM refines its capability by playing against instances of itself. More specifically, the LLM generates its own training data from its previous iterations, refining its policy by discerning these self-generated responses from those obtained from human-annotated data. Our method progressively elevates the LLM from a nascent model to a formidable one, unlocking the full potential of human-annotated demonstration data for SFT. Theoretically, we prove that the global optimum of the training objective of our method is achieved only when the LLM policy aligns with the target data distribution. Empirically, we evaluate our method on several benchmark datasets including the HuggingFace Open LLM Leaderboard, MT-Bench, and datasets from Big-Bench. Our results show that SPIN can significantly improve the LLM's performance across a variety of benchmarks and even outperform models trained through direct preference optimization (DPO) supplemented with extra GPT-4 preference data. This sheds light on the promise of self-play, enabling the achievement of human-level performance in LLMs without the need for expert opponents. Code is available at https://github.com/uclaml/SPIN.
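To make the self-play objective described above concrete, the following is a minimal sketch of one SPIN iteration's loss, assuming the logistic loss ℓ(t) = log(1 + e^(−t)) and per-response log-probabilities already summed over tokens. The function name spin_loss and the parameter lambda_reg are illustrative assumptions, not identifiers from the official repository.

```python
import torch
import torch.nn.functional as F

def spin_loss(
    logp_real: torch.Tensor,      # log p_theta(y_real | x), model being trained, human-annotated responses
    logp_real_ref: torch.Tensor,  # log p_{theta_t}(y_real | x), frozen previous iterate
    logp_syn: torch.Tensor,       # log p_theta(y_syn | x), responses generated by the previous iterate
    logp_syn_ref: torch.Tensor,   # log p_{theta_t}(y_syn | x)
    lambda_reg: float = 0.1,      # hypothetical regularization weight on the log-ratios
) -> torch.Tensor:
    """Sketch of one SPIN iteration's loss: the model acts as the discriminator,
    pushing up the likelihood of human-annotated responses relative to the frozen
    previous iterate while pushing down that of its own generations."""
    real_gap = logp_real - logp_real_ref  # log-ratio on ground-truth responses
    syn_gap = logp_syn - logp_syn_ref     # log-ratio on self-generated responses
    margin = lambda_reg * (real_gap - syn_gap)
    # logistic loss l(t) = log(1 + exp(-t)) == softplus(-t), averaged over the batch
    return F.softplus(-margin).mean()
```

In the outer self-play loop, the newly trained model replaces the frozen iterate, fresh y_syn responses are sampled from it for the same prompts, and the loss is minimized again; per the abstract's theoretical claim, the objective reaches its global optimum only when the model's distribution matches the human data distribution, at which point the two log-ratio gaps coincide.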
Pages: 22