Prediction of Author's Profile Basing on Fine-Tuning BERT Model

Cited: 0
Authors
Bsir B. [1 ,2 ]
Khoufi N. [3 ]
Zrigui M. [1 ,2 ]
Affiliations
[1] ISITCom, University of Sousse, Hammam Sousse
[2] Laboratory in Algebra, Numbers Theory and Intelligent Systems, University of Monastir, Monastir
[3] ANLP Research Group, FSEGS, Sfax
Source
Informatica (Slovenia) | 2024 / Vol. 48 / Issue 01
Keywords
Author profiling (AP); BERT; deep learning; fine-tuning; NLP; PAN 2018 corpus dataset; self-attention transformers; transformer model
DOI
10.31449/inf.v48i1.4839
Abstract
The task of author profiling consists in inferring the demographic features of social network users by studying their published content or their interactions. Many research works in the literature have sought to improve the accuracy of this process. Existing methods fall into two types: simple linear models and complex deep neural network models. Among the latter, transformer-based models have shown the highest efficiency in NLP analysis across several languages (English, German, French, Turkish, Arabic, etc.). Despite their good performance, these approaches do not cover author profiling analysis and should therefore be further enhanced. In this paper, we propose a new deep learning strategy that trains a customized transformer model to learn the optimal features of our dataset. We fine-tune the model using a transfer learning approach, which improves on the results obtained with random initialization. By modifying the model and retraining it on the PAN 2018 authorship dataset, we achieved about 79% accuracy. © 2024 Slovene Society Informatika. All rights reserved.
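The fine-tuning recipe the abstract summarizes (a pretrained transformer body with a new task-specific classification head, retrained end to end at a small learning rate rather than from random initialization) can be sketched as follows. This is a minimal illustration, not the authors' actual setup: a toy `nn.TransformerEncoder` stands in for real pretrained BERT weights so the sketch runs offline, and all layer sizes and learning rates are illustrative assumptions.

```python
# Hedged sketch of transformer fine-tuning for author profiling
# (e.g., binary gender prediction). With Hugging Face transformers,
# the stand-in encoder below would be a loaded pretrained BERT model.
import torch
import torch.nn as nn

class ProfileClassifier(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # "pretrained" body
        self.head = nn.Linear(d_model, n_classes)                  # new task head

    def forward(self, ids):
        h = self.encoder(self.embed(ids))
        return self.head(h[:, 0])  # classify from first-token state, BERT-style

model = ProfileClassifier()
# Fine-tuning convention: small learning rate for the pretrained body,
# larger one for the freshly initialized classification head.
opt = torch.optim.AdamW([
    {"params": model.embed.parameters(), "lr": 2e-5},
    {"params": model.encoder.parameters(), "lr": 2e-5},
    {"params": model.head.parameters(), "lr": 1e-3},
])
loss_fn = nn.CrossEntropyLoss()

ids = torch.randint(0, 1000, (8, 16))   # toy batch: 8 texts of 16 token ids
labels = torch.randint(0, 2, (8,))      # toy binary profile labels
logits = model(ids)
loss = loss_fn(logits, labels)
loss.backward()
opt.step()
print(logits.shape)  # torch.Size([8, 2])
```

The per-group learning rates capture the transfer-learning idea: the pretrained weights are nudged gently while the new head, which starts from random initialization, learns quickly.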
Pages: 69-78
Page count: 9
Related Papers
50 records in total
  • [21] Fine-Tuning BERT on Coarse-Grained Labels: Exploring Hidden States for Fine-Grained Classification
    Anjum, Aftab
    Krestel, Ralf
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS, PT I, NLDB 2024, 2024, 14762 : 1 - 15
  • [22] Emotion detection in psychological texts by fine-tuning BERT using emotion–cause pair extraction
    Kumar A.
    Jain A.K.
    International Journal of Speech Technology, 2022, 25 (03) : 727 - 743
  • [23] Knowledge-based BERT word embedding fine-tuning for emotion recognition
    Zhu, Zixiao
    Mao, Kezhi
    NEUROCOMPUTING, 2023, 552
  • [24] Short Answer Questions Generation by Fine-Tuning BERT and GPT-2
    Tsai, Danny C. L.
    Chang, Willy J. W.
    Yang, Stephen J. H.
    29TH INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION (ICCE 2021), VOL II, 2021, : 508 - 514
  • [25] ALBERT-based fine-tuning model for cyberbullying analysis
    Tripathy, Jatin Karthik
    Chakkaravarthy, S. Sibi
    Satapathy, Suresh Chandra
    Sahoo, Madhulika
    Vaidehi, V.
    MULTIMEDIA SYSTEMS, 2022, 28 (06) : 1941 - 1949
  • [27] Fine-Tuning of Distil-BERT for Continual Learning in Text Classification: An Experimental Analysis
    Shah, Sahar
    Manzoni, Sara Lucia
    Zaman, Farooq
    Es Sabery, Fatima
    Epifania, Francesco
    Zoppis, Italo Francesco
    IEEE ACCESS, 2024, 12 : 104964 - 104982
  • [28] Compressing BERT for Binary Text Classification via Adaptive Truncation before Fine-Tuning
    Zhang, Xin
    Fan, Jing
    Hei, Mengzhe
    APPLIED SCIENCES-BASEL, 2022, 12 (23):
  • [29] On Friederich's New Fine-Tuning Argument
    Metcalf, Thomas
    FOUNDATIONS OF PHYSICS, 2021, 51 (02)