LLM Fine-Tuning: Concepts, Opportunities, and Challenges

Cited by: 2
Authors
Wu, Xiao-Kun [1 ]
Chen, Min [2 ]
Li, Wanyi [3 ]
Wang, Rui [4 ]
Lu, Limeng [3 ]
Liu, Jia [4 ]
Hwang, Kai [5 ]
Hao, Yixue [4 ]
Pan, Yanru [3 ]
Meng, Qingguo [6 ]
Huang, Kaibin [7 ]
Hu, Long [4 ]
Guizani, Mohsen [8 ]
Chao, Naipeng [9 ]
Fortino, Giancarlo [10 ]
Lin, Fei [11 ]
Tian, Yonglin [12 ]
Niyato, Dusit [13 ]
Wang, Fei-Yue [12 ]
Affiliations
[1] Renmin Univ China, Sch Journalism & Commun, Beijing 100872, Peoples R China
[2] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[3] South China Univ Technol, Sch Journalism & Commun, Guangzhou 510006, Peoples R China
[4] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[5] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[6] Tsinghua Univ, Sch Publ Policy & Management, Beijing 100084, Peoples R China
[7] Univ Hong Kong, Dept Elect & Elect Engn, Hong Kong 999077, Peoples R China
[8] Mohamed bin Zayed Univ Artificial Intelligence, Machine Learning Dept, POB 131818, Abu Dhabi, U Arab Emirates
[9] Shenzhen Univ, Sch Media & Commun, Shenzhen 518060, Peoples R China
[10] Univ Calabria, Dept Informat Modeling Elect & Syst, I-87036 Arcavacata Di Rende, Italy
[11] Macao Univ Sci & Technol, Fac Innovat Engn, Dept Engn Sci, Macau 999078, Peoples R China
[12] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[13] Nanyang Technol Univ, Coll Comp & Data Sci, Singapore 639798, Singapore
Keywords
large language models (LLM); fine-tuning; hermeneutics; comprehension; tutorial fine-tuning; human-AI co-evolution;
DOI
10.3390/bdcc9040087
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a foundational technique for large language models, fine-tuning drives rapid progress, broad applicability, and profound impacts on human-AI collaboration, surpassing earlier technological advances. This paper provides a comprehensive overview of large language model (LLM) fine-tuning by integrating hermeneutic theories of human comprehension, with a focus on the essential cognitive conditions that underpin this process. Drawing on Gadamer's concepts of Vorverständnis, Distanciation, and the Hermeneutic Circle, the paper explores how LLM fine-tuning evolves from initial learning to deeper comprehension, ultimately advancing toward self-awareness. It examines the core principles, development, and applications of fine-tuning techniques, emphasizing their growing significance across diverse fields and industries. The paper introduces a new term, "Tutorial Fine-Tuning (TFT)", denoting a process of intensive tuition given by a "tutor" to a small number of "students", to describe the latest round of LLM fine-tuning advances. By addressing key challenges associated with fine-tuning, including ensuring adaptability, precision, credibility, and reliability, this paper explores potential future directions for the co-evolution of humans and AI. By bridging theoretical perspectives with practical implications, this work provides valuable insights into the ongoing development of LLMs, emphasizing their potential to achieve higher levels of cognitive and operational intelligence.
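To ground the abstract's central notion of fine-tuning, the sketch below shows plain supervised fine-tuning of a causal language model. It is illustrative only and is not the paper's Tutorial Fine-Tuning (TFT) procedure; the base model ("gpt2"), the toy corpus, the step count, and the learning rate are placeholder assumptions, and the Hugging Face transformers library is assumed to be available.

# Minimal sketch of supervised fine-tuning for a causal language model.
# Illustrative only: NOT the paper's Tutorial Fine-Tuning (TFT) method;
# the model name, corpus, and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder fine-tuning corpus: instruction/response pairs as flat text.
texts = ["Instruction: summarize the abstract. Response: ..."]
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a few steps, purely for illustration
    # For causal-LM fine-tuning the labels are the input ids themselves;
    # the model shifts them internally to score next-token predictions.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss = {outputs.loss.item():.4f}")

Parameter-efficient variants such as LoRA keep the same training loop but update only small adapter matrices injected into a frozen base model, which is what makes fine-tuning practical at LLM scale.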
Pages: 20