Fine-Tuning of Distil-BERT for Continual Learning in Text Classification: An Experimental Analysis

Citations: 0
Authors
Shah, Sahar [1 ]
Manzoni, Sara Lucia [1 ]
Zaman, Farooq [2 ]
Es Sabery, Fatima [3 ]
Epifania, Francesco [4 ]
Zoppis, Italo Francesco [1 ]
Affiliations
[1] Univ Milano Bicocca, Dept Informat Syst & Commun, Milan, Italy
[2] Informat Technol Univ, Dept Comp Sci, Lahore, Pakistan
[3] Hassan II Univ, Lab Econ & Logist Performance, Fac Law Econ & Social Sci Mohammedia, Casablanca, Morocco
[4] Social Things srl, Milan, Italy
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Continual learning; natural language processing; text classification; fine-tuning; Distil-BERT;
DOI
10.1109/ACCESS.2024.3435537
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Continual learning (CL) with the bidirectional encoder representations from transformers (BERT) model and its distilled variant, Distil-BERT, has shown remarkable performance in various natural language processing (NLP) tasks, such as text classification (TC). However, degrading factors such as catastrophic forgetting (CF), reduced accuracy, and task-dependent architectures limit its use in complex, intelligent tasks. This research article proposes an approach to address the challenges of CL in TC tasks. The objectives are to enable the model to learn continuously without forgetting previously acquired knowledge and thereby to avoid CF. To achieve this, a task-independent model architecture is introduced that allows multiple tasks to be trained on the same model, improving overall performance in CL scenarios. The framework incorporates two auxiliary tasks, namely next sentence prediction and task identifier prediction, to capture both task-generic and task-specific contextual information. The Distil-BERT model, extended with two linear layers, projects the output representation into a task-generic space and a task-specific space. The proposed methodology is evaluated on a diverse set of TC tasks: Yahoo, Yelp, Amazon, DB-Pedia, and AG-News. The experimental results demonstrate strong performance across tasks in terms of F1 score, accuracy, evaluation loss, learning rate, and training loss. For the Yahoo task, the proposed model achieved an F1 score of 96.84%, an accuracy of 95.85%, an evaluation loss of 0.06, and a learning rate of 0.00003144. On the Yelp task, the model achieved an F1 score of 96.66%, an accuracy of 97.66%, and an evaluation loss of 0.06, and similarly minimized the training loss with a learning rate of 0.00003189. For the Amazon task, the F1 score was 95.82%, the accuracy 97.83%, and the evaluation loss 0.06, with the training loss effectively minimized at a learning rate of 0.00003144. On the DB-Pedia task, the model achieved an F1 score of 96.20%, an accuracy of 95.21%, and an evaluation loss of 0.08 with a learning rate of 0.0001972; the training loss decreased rapidly owing to the limited number of epochs and instances. On the AG-News task, the model obtained an F1 score of 94.78%, an accuracy of 92.76%, and an evaluation loss of 0.06 with the learning rate fixed at 0.0001511. These results highlight the strong performance of the model across varied TC tasks, with a gradual reduction in training loss over time, indicating effective learning and retention of knowledge.
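The architecture summarized in the abstract (a shared Distil-BERT encoder, two linear projection layers splitting the representation into task-generic and task-specific spaces, and auxiliary next-sentence-prediction and task-identifier heads) can be illustrated with a short sketch. The code below is not the authors' released implementation; the class name ContinualDistilBert, the projection dimension, the use of the [CLS]-position vector, and the per-task class counts are all illustrative assumptions layered on the standard Hugging Face DistilBertModel API.

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizerFast

class ContinualDistilBert(nn.Module):
    """Illustrative sketch: Distil-BERT with task-generic and task-specific heads."""

    def __init__(self, num_tasks, num_labels_per_task, proj_dim=256,
                 model_name="distilbert-base-uncased"):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained(model_name)
        hidden = self.encoder.config.dim  # 768 for distilbert-base

        # Two linear layers split the sentence representation into two spaces.
        self.generic_proj = nn.Linear(hidden, proj_dim)   # task-generic space
        self.specific_proj = nn.Linear(hidden, proj_dim)  # task-specific space

        # Auxiliary heads: next-sentence prediction (binary) and task-ID prediction.
        self.nsp_head = nn.Linear(proj_dim, 2)
        self.task_id_head = nn.Linear(proj_dim, num_tasks)

        # One classification head per task, all sharing the same backbone.
        self.classifiers = nn.ModuleList(
            [nn.Linear(proj_dim, n) for n in num_labels_per_task]
        )

    def forward(self, input_ids, attention_mask, task_id):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # representation at the [CLS] position

        generic = torch.tanh(self.generic_proj(cls))
        specific = torch.tanh(self.specific_proj(cls))

        return {
            "nsp_logits": self.nsp_head(generic),           # auxiliary task 1
            "task_id_logits": self.task_id_head(specific),  # auxiliary task 2
            "class_logits": self.classifiers[task_id](specific),
        }


if __name__ == "__main__":
    # Assumed class counts for Yahoo, Yelp, Amazon, DB-Pedia, AG-News respectively.
    tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
    model = ContinualDistilBert(num_tasks=5, num_labels_per_task=[10, 5, 5, 14, 4])
    batch = tokenizer(["a short example sentence"], return_tensors="pt",
                      padding=True, truncation=True)
    outputs = model(batch["input_ids"], batch["attention_mask"], task_id=0)
    print({k: v.shape for k, v in outputs.items()})
```

In a setup like the one described, the losses from the next-sentence-prediction head, the task-identifier head, and the active task's classifier would be combined during fine-tuning, so the shared encoder retains task-generic knowledge while each new task is learned on the same model.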
Pages: 104964-104982
Page count: 19