Cyberbullying Text Identification: A Deep Learning and Transformer-based Language Modeling Approach

Cited by: 0
Authors
Saifullah K. [1 ]
Khan M.I. [1 ]
Jamal S. [2 ]
Sarker I.H. [3 ]
Affiliations
[1] Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chittagong
[2] Dept. of Information Technology, Georgia Southern University, Statesboro, GA
[3] Centre for Securing Digital Futures, School of Science, Edith Cowan University, Perth, 6027, WA
Keywords
Cyberbullying; deep learning; fine-tuning; harmful messages; large language modeling; natural language processing (NLP); OOV; transformer models
DOI
10.4108/EETINIS.V11I1.4703
Abstract
In the contemporary digital age, social media platforms such as Facebook, Twitter, and YouTube serve as vital channels for individuals to express ideas and connect with others. Despite fostering increased connectivity, these platforms have inadvertently given rise to negative behaviors, particularly cyberbullying. While extensive research has been conducted on high-resource languages such as English, resources remain scarce for low-resource languages such as Bengali, Arabic, and Tamil, particularly in terms of language modeling. This study addresses this gap by developing a cyberbullying text identification system, BullyFilterNeT, tailored for social media texts, with Bengali as a test case. BullyFilterNeT overcomes the Out-of-Vocabulary (OOV) challenges associated with non-contextual embeddings and addresses the limitations of feature representations that lack contextual awareness. To facilitate a comprehensive comparison, three non-contextual embedding models (GloVe, FastText, and Word2Vec) are developed for feature extraction in Bengali and used in the classification models, which comprise three statistical models (SVM, SGD, LibSVM) and four deep learning models (CNN, VDCNN, LSTM, GRU). Additionally, the study employs six transformer-based language models, mBERT, bELECTRA, IndicBERT, XLM-RoBERTa, DistilBERT, and BanglaBERT, to overcome the limitations of the earlier models. Remarkably, the BanglaBERT-based BullyFilterNeT achieves the highest accuracy, 88.04%, on our test set, underscoring its effectiveness for cyberbullying text identification in Bengali. Copyright © 2024 K. Saifullah et al., licensed to EAI. This is an open access article distributed under the terms of CC BY-NC-SA 4.0, which permits copying, redistributing, remixing, transformation, and building upon the material in any medium so long as the original work is properly cited.
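The abstract notes that non-contextual embeddings suffer from OOV words, a problem FastText (one of the three embedding models used) mitigates by representing a word as the sum or average of its character n-gram vectors. The following is a minimal illustrative sketch of that mechanism, not the authors' code; the n-gram size, vector dimension, and averaging scheme are simplifying assumptions.

```python
# Sketch: how subword (character n-gram) embeddings, as in FastText,
# can produce a vector for an out-of-vocabulary (OOV) word.

def char_ngrams(word, n=3):
    """Character n-grams of a word, with '<' and '>' boundary markers
    as used by FastText."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def oov_vector(word, ngram_vectors, dim=4):
    """Average the vectors of the word's known n-grams.

    ngram_vectors maps n-gram strings to lists of floats; unknown
    n-grams are skipped, and a zero vector is returned if none match.
    """
    grams = [g for g in char_ngrams(word) if g in ngram_vectors]
    if not grams:
        return [0.0] * dim
    sums = [0.0] * dim
    for g in grams:
        for i, value in enumerate(ngram_vectors[g]):
            sums[i] += value
    return [s / len(grams) for s in sums]

# Toy n-gram table: even if "cat" itself was never seen during training,
# its n-grams "<ca" and "at>" may have been, yielding a usable vector.
toy_table = {"<ca": [1.0, 0.0, 0.0, 0.0], "at>": [0.0, 1.0, 0.0, 0.0]}
print(oov_vector("cat", toy_table))  # averages the two known n-grams
```

Purely frequency-based models such as Word2Vec and GloVe have no such fallback, which is why the paper also evaluates contextual transformer models.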
Pages: 1-12 (11 pages)
Related papers (50 total)
  • [41] Transformer-based deep learning architecture for time series forecasting
    Nayak, G. H. Harish
    Alam, Md Wasi
    Avinash, G.
    Kumar, Rajeev Ranjan
    Ray, Mrinmoy
    Barman, Samir
    Singh, K. N.
    Naik, B. Samuel
    Alam, Nurnabi Meherul
    Pal, Prasenjit
    Rathod, Santosha
    Bisen, Jaiprakash
    SOFTWARE IMPACTS, 2024, 22
  • [42] Influence of Language Proficiency on the Readability of Review Text and Transformer-based Models for Determining Language Proficiency
    Sazzed, Salim
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 881 - 886
  • [43] Deep Learning-Based Cyberbullying Detection in Kurdish Language
    Badawi, Soran
    COMPUTER JOURNAL, 2024, 67 (07): : 2548 - 2558
  • [44] Enriching Transformer-Based Embeddings for Emotion Identification in an Agglutinative Language: Turkish
    Uymaz, Hande Aka
    Metin, Senem Kumova
    IT PROFESSIONAL, 2023, 25 (04) : 67 - 73
  • [45] Text Detection of Transformer Based on Deep Learning Algorithm
    Cheng, Yu
    Wan, Yiru
    Sima, Yingjie
    Zhang, Yinmei
    Hu, Sanying
    Wu, Shu
    TEHNICKI VJESNIK-TECHNICAL GAZETTE, 2022, 29 (03): : 861 - 866
  • [46] Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures
    Wongso, Wilson
    Setiawan, David Samuel
    Suhartono, Derwin
    13TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTER SCIENCE AND INFORMATION SYSTEMS (ICACSIS 2021), 2021, : 29 - 35
  • [47] Transcribing paralinguistic acoustic cues to target language text in transformer-based speech-to-text translation
    Tokuyama, Hirotaka
    Sakti, Sakriani
    Sudoh, Katsuhito
    Nakamura, Satoshi
    Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2021, 5 : 3976 - 3980
  • [49] TIRec: Transformer-based Invoice Text Recognition
    Chen, Yanlan
    2023 2ND ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING, CACML 2023, 2023, : 175 - 180
  • [50] Practical Transformer-based Multilingual Text Classification
    Wang, Cindy
    Banko, Michele
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, NAACL-HLT 2021, 2021, : 121 - 129