Reparameterization-Based Parameter-Efficient Fine-Tuning Methods for Large Language Models: A Systematic Survey

Cited by: 0
Authors
Chen, Zezhou [1 ,2 ]
Liu, Zhaoxiang [1 ,2 ]
Wang, Kai [1 ,2 ]
Lian, Shiguo [1 ,2 ]
Affiliations
[1] China Unicom, AI Innovation Center, Beijing 100013, People's Republic of China
[2] China Unicom, Unicom Digital Technology, Beijing 100013, People's Republic of China
Keywords
Large Language Models; Parameter-Efficient Fine-Tuning; Reparameterization;
DOI
10.1007/978-981-97-9437-9_9
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The rapid advancement of Large Language Models (LLMs), built on Transformer architectures and large-scale pre-training objectives, has revolutionized both academia and industry. To fully exploit their potential, LLMs must be fine-tuned on specific downstream tasks. However, traditional full fine-tuning poses significant computational challenges, prompting the emergence of Parameter-Efficient Fine-Tuning (PEFT) methods, in particular reparameterization-based PEFT. In this survey, we examine reparameterization-based PEFT methods, which aim to adapt LLMs at reduced computational cost while preserving their pre-trained knowledge. We systematically analyze their design principles and divide these methods into six categories, comparing each method's trainable-parameter complexity, GPU memory consumption, training time cost, accuracy, and limitations. Finally, we summarize open challenges within reparameterization-based PEFT and propose future directions.
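The core idea surveyed here can be illustrated with LoRA, the best-known reparameterization-based PEFT method: instead of updating the full pre-trained weight matrix W0, one trains a low-rank decomposition ΔW = BA with rank r much smaller than the weight dimensions, so only r·(d+k) parameters receive gradients instead of d·k. A minimal NumPy sketch (dimensions, initialization scale, and function names are illustrative, not from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 768, 768, 8               # weight dims and low rank, r << min(d, k)
W0 = rng.standard_normal((d, k))    # frozen pre-trained weight (not updated)

# LoRA reparameterization: Delta W = B @ A, with A small-random and B zero,
# so training starts exactly from the pre-trained behaviour (B @ A = 0).
A = rng.standard_normal((r, k)) * 0.01
B = np.zeros((d, r))

def lora_forward(x):
    # Only A and B would receive gradients during fine-tuning; W0 stays frozen.
    return x @ W0.T + x @ (B @ A).T

# Trainable-parameter saving: r*(d+k) for LoRA vs d*k for full fine-tuning.
full_params = d * k          # 589824
lora_params = r * (d + k)    # 12288, about 2% of full fine-tuning
print(full_params, lora_params)
```

After training, B @ A can be merged back into W0, so the fine-tuned model incurs no extra inference latency, which is a key advantage of this method family over adapter-based PEFT.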
Pages: 107-118
Page count: 12
Related Papers
50 records in total
  • [1] Parameter-efficient fine-tuning in large language models: a survey of methodologies
    Wang, Luping
    Chen, Sheng
    Jiang, Linnan
    Pan, Shu
    Cai, Runze
    Yang, Sen
    Yang, Fei
    ARTIFICIAL INTELLIGENCE REVIEW, 58 (8)
  • [2] Parameter-efficient fine-tuning of large language models using semantic knowledge tuning
    Prottasha, Nusrat Jahan
    Mahmud, Asif
    Sobuj, Md. Shohanur Islam
    Bhat, Prakash
    Kowsher, Md
    Yousefi, Niloofar
    Garibay, Ozlem Ozmen
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [3] Characterizing Communication in Distributed Parameter-Efficient Fine-Tuning for Large Language Models
    Alnaasan, Nawras
    Huang, Horng-Ruey
    Shafi, Aamir
    Subramoni, Hari
    Panda, Dhabaleswar K.
    2024 IEEE SYMPOSIUM ON HIGH-PERFORMANCE INTERCONNECTS, HOTI 2024, 2024, : 11 - 19
  • [4] Democratizing protein language models with parameter-efficient fine-tuning
    Sledzieski, Samuel
    Kshirsagar, Meghana
    Baek, Minkyung
    Dodhia, Rahul
    Ferres, Juan Lavista
    Berger, Bonnie
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2024, 121 (26)
  • [5] Parameter-efficient fine-tuning of large-scale pre-trained language models
    Ding, Ning
    Qin, Yujia
    Yang, Guang
    Wei, Fuchao
    Yang, Zonghan
    Su, Yusheng
    Hu, Shengding
    Chen, Yulin
    Chan, Chi-Min
    Chen, Weize
    Yi, Jing
    Zhao, Weilin
    Wang, Xiaozhi
    Liu, Zhiyuan
    Zheng, Hai-Tao
    Chen, Jianfei
    Liu, Yang
    Tang, Jie
    Li, Juanzi
    Sun, Maosong
    NATURE MACHINE INTELLIGENCE, 2023, 5 : 220 - 235
  • [6] LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
    Hu, Zhiqiang
    Wang, Lei
    Lan, Yihuai
    Xu, Wanyu
    Lim, Ee-Peng
    Bing, Lidong
    Xu, Xing
    Poria, Soujanya
    Lee, Roy Ka-Wei
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 5254 - 5276
  • [7] Parameter-Efficient Fine-Tuning Large Speech Model Based on LoRA
    Ou, Ling
    Feng, Gen
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 36 - 41
  • [8] On the Effectiveness of Parameter-Efficient Fine-Tuning
    Fu, Zihao
    Yang, Haoran
    So, Anthony Man-Cho
    Lam, Wai
    Bing, Lidong
    Collier, Nigel
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 12799 - 12807
  • [9] Parameter-Efficient Fine-Tuning of Large Pretrained Models for Instance Segmentation Tasks
    Baker, Nermeen Abou
    Rohrschneider, David
    Handmann, Uwe
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2024, 6 (04): : 2783 - 2807