LLM Fine-Tuning: Concepts, Opportunities, and Challenges

Cited by: 2
Authors
Wu, Xiao-Kun [1 ]
Chen, Min [2 ]
Li, Wanyi [3 ]
Wang, Rui [4 ]
Lu, Limeng [3 ]
Liu, Jia [4 ]
Hwang, Kai [5 ]
Hao, Yixue [4 ]
Pan, Yanru [3 ]
Meng, Qingguo [6 ]
Huang, Kaibin [7 ]
Hu, Long [4 ]
Guizani, Mohsen [8 ]
Chao, Naipeng [9 ]
Fortino, Giancarlo [10 ]
Lin, Fei [11 ]
Tian, Yonglin [12 ]
Niyato, Dusit [13 ]
Wang, Fei-Yue [12 ]
Affiliations
[1] Renmin Univ China, Sch Journalism & Commun, Beijing 100872, Peoples R China
[2] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[3] South China Univ Technol, Sch Journalism & Commun, Guangzhou 510006, Peoples R China
[4] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[5] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[6] Tsinghua Univ, Sch Publ Policy & Management, Beijing 100084, Peoples R China
[7] Univ Hong Kong, Dept Elect & Elect Engn, Hong Kong 999077, Peoples R China
[8] Mohamed bin Zayed Univ Artificial Intelligence, Machine Learning Dept, POB 131818, Abu Dhabi, U Arab Emirates
[9] Shenzhen Univ, Sch Media & Commun, Shenzhen 518060, Peoples R China
[10] Univ Calabria, Dept Informat Modeling Elect & Syst, I-87036 Arcavacata Di Rende, Italy
[11] Macao Univ Sci & Technol, Fac Innovat Engn, Dept Engn Sci, Macau 999078, Peoples R China
[12] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[13] Nanyang Technol Univ, Coll Comp & Data Sci, Singapore 639798, Singapore
Keywords
large language models (LLM); fine-tuning; hermeneutics; comprehension; tutorial fine-tuning; human-AI co-evolution;
DOI
10.3390/bdcc9040087
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
As a foundation of large language models (LLMs), fine-tuning drives rapid progress, broad applicability, and profound impacts on human-AI collaboration, surpassing earlier technological advances. This paper provides a comprehensive overview of LLM fine-tuning by integrating hermeneutic theories of human comprehension, with a focus on the essential cognitive conditions that underpin the process. Drawing on Gadamer's concepts of Vorverständnis, Distanciation, and the Hermeneutic Circle, the paper explores how LLM fine-tuning evolves from initial learning to deeper comprehension and ultimately advances toward self-awareness. It examines the core principles, development, and applications of fine-tuning techniques, emphasizing their growing significance across diverse fields and industries. The paper introduces a new term, "Tutorial Fine-Tuning (TFT)", denoting a process of intensive tuition given by a "tutor" to a small number of "students", to characterize the latest round of LLM fine-tuning advancements. By addressing key challenges associated with fine-tuning, including adaptability, precision, credibility, and reliability, the paper explores potential future directions for the co-evolution of humans and AI. Bridging theoretical perspectives with practical implications, this work offers insights into the ongoing development of LLMs and their potential to achieve higher levels of cognitive and operational intelligence.
Pages: 20
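
The abstract describes fine-tuning in conceptual, hermeneutic terms. For a concrete point of reference, the sketch below shows what a minimal supervised fine-tuning step on a pretrained causal language model can look like in practice. It is an illustrative assumption, not the paper's Tutorial Fine-Tuning method; the model checkpoint ("gpt2"), the toy prompt/response examples, and the hyperparameters are placeholders.

```python
# Minimal sketch of supervised fine-tuning for a causal LM (illustrative only;
# this is NOT the paper's Tutorial Fine-Tuning method). Model name, data, and
# hyperparameters are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy "tutor" examples: instruction/response pairs joined into single sequences.
examples = [
    "Q: What is fine-tuning? A: Adapting a pretrained model to a narrower task.",
    "Q: What is parameter-efficient fine-tuning? A: Updating only a small subset of weights.",
]
batch = tokenizer(examples, return_tensors="pt", padding=True, truncation=True)
batch["labels"] = batch["input_ids"].clone()
batch["labels"][batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a few gradient steps, just to show the loop's shape
    outputs = model(**batch)  # causal-LM loss over next-token prediction
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss = {outputs.loss.item():.4f}")
```

Parameter-efficient variants (e.g., LoRA) and preference-based methods (e.g., RLHF) follow the same outer loop but differ in which parameters are updated and how the training signal is constructed.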