Fine-Tuning of Distil-BERT for Continual Learning in Text Classification: An Experimental Analysis

Cited by: 0
Authors
Shah, Sahar [1 ]
Manzoni, Sara Lucia [1 ]
Zaman, Farooq [2 ]
Es Sabery, Fatima [3 ]
Epifania, Francesco [4 ]
Zoppis, Italo Francesco [1 ]
Affiliations
[1] Univ Milano Bicocca, Dept Informat Syst & Commun, Milan, Italy
[2] Informat Technol Univ, Dept Comp Sci, Lahore, Pakistan
[3] Hassan II Univ, Lab Econ & Logist Performance, Fac Law Econ & Social Sci Mohammedia, Casablanca, Morocco
[4] Social Things srl, Milan, Italy
Source
IEEE ACCESS | 2024 / Volume 12
Keywords
Continual learning; natural language processing; text classification; fine-tuning; Distil-BERT;
DOI
10.1109/ACCESS.2024.3435537
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Continual learning (CL) with the bidirectional encoder representations from transformers (BERT) model and its variant Distil-BERT has shown remarkable performance in various natural language processing (NLP) tasks, such as text classification (TC). However, degrading factors such as catastrophic forgetting (CF), reduced accuracy, and task-dependent architectures have limited its applicability to complex and intelligent tasks. This article proposes an innovative approach to address the challenges of CL in TC tasks. The objective is to enable the model to learn continuously without forgetting previously acquired knowledge, thereby avoiding CF. To achieve this, a task-independent model architecture is introduced that allows multiple tasks to be trained on the same model, improving overall performance in CL scenarios. The framework incorporates two auxiliary tasks, namely next sentence prediction and task identifier prediction, to capture both task-generic and task-specific contextual information. The Distil-BERT model, enhanced with two linear layers, separates the output representation into a task-generic space and a task-specific space. The proposed methodology is evaluated on diverse TC tasks, including Yahoo, Yelp, Amazon, DB-Pedia, and AG-News. The experimental results demonstrate strong performance across multiple tasks in terms of F1 score, model accuracy, evaluation loss, learning rate, and training loss. For the Yahoo task, the proposed model achieved an F1 score of 96.84%, an accuracy of 95.85%, an evaluation loss of 0.06, and a learning rate of 0.00003144. In the Yelp task, the model achieved an F1 score of 96.66%, an accuracy of 97.66%, and an evaluation loss of 0.06, and similarly minimized training losses at a learning rate of 0.00003189. For the Amazon task, the F1 score was 95.82%, the accuracy 97.83%, and the evaluation loss 0.06, with training losses effectively minimized at a learning rate of 0.00003144. In the DB-Pedia task, the model achieved an F1 score of 96.20%, an accuracy of 95.21%, and an evaluation loss of 0.08 at a learning rate of 0.0001972, with training losses rapidly minimized owing to the limited number of epochs and instances. In the AG-News task, the model obtained an F1 score of 94.78%, an accuracy of 92.76%, and an evaluation loss of 0.06, with the learning rate fixed at 0.0001511. These results highlight the strong performance of the model across TC tasks, with a gradual reduction in training losses over time, indicating effective learning and retention of knowledge.
Pages: 104964-104982
Number of pages: 19
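
The abstract describes a task-independent Distil-BERT architecture with two linear projection heads (a task-generic space and a task-specific space) and two auxiliary objectives (next sentence prediction and task identifier prediction). Below is a minimal PyTorch sketch of that kind of design, assuming the Hugging Face transformers library; the class name ContinualDistilBert, the projection dimensions, the use of the [CLS]-position token, and the loss-head layout are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' implementation) of a Distil-BERT encoder
# with two linear projection heads, one for a task-generic space and one for
# a task-specific space, plus auxiliary heads for next-sentence prediction
# and task-identifier prediction. All names and sizes are illustrative.
import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizerFast


class ContinualDistilBert(nn.Module):
    def __init__(self, num_tasks: int, num_classes: int,
                 generic_dim: int = 128, specific_dim: int = 128):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")
        hidden = self.encoder.config.dim  # 768 for distilbert-base
        # Two linear layers split the [CLS]-position representation into
        # a task-generic sub-space and a task-specific sub-space.
        self.generic_proj = nn.Linear(hidden, generic_dim)
        self.specific_proj = nn.Linear(hidden, specific_dim)
        # Auxiliary heads named in the abstract.
        self.nsp_head = nn.Linear(generic_dim, 2)                # next sentence prediction
        self.task_id_head = nn.Linear(specific_dim, num_tasks)   # task identifier prediction
        # Main text-classification head over both sub-spaces.
        self.classifier = nn.Linear(generic_dim + specific_dim, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]        # first-token representation
        g = torch.tanh(self.generic_proj(cls))   # task-generic space
        s = torch.tanh(self.specific_proj(cls))  # task-specific space
        return {
            "logits": self.classifier(torch.cat([g, s], dim=-1)),
            "nsp_logits": self.nsp_head(g),
            "task_id_logits": self.task_id_head(s),
        }


if __name__ == "__main__":
    tok = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
    model = ContinualDistilBert(num_tasks=5, num_classes=10)
    batch = tok(["A sample news headline about sports."],
                return_tensors="pt", padding=True, truncation=True)
    outputs = model(batch["input_ids"], batch["attention_mask"])
    print({k: tuple(v.shape) for k, v in outputs.items()})

In a continual-learning run of the kind the abstract reports, such a model would presumably be fine-tuned sequentially on the Yahoo, Yelp, Amazon, DB-Pedia, and AG-News tasks, combining the classification loss with the two auxiliary losses; the loss weighting and training schedule are not specified here.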