Fine-Tuning Pre-Trained Model for Consumer Fraud Detection from Consumer Reviews

Times Cited: 0
Authors
Tang, Xingli [1 ]
Li, Keqi [1 ]
Huang, Liting [1 ]
Zhou, Hui [1 ]
Ye, Chunyang [1 ]
Affiliations
[1] Hainan Univ, Haikou, Hainan, Peoples R China
Keywords
Consumer fraud detection; Consumer reviews; Regulation
DOI
10.1007/978-3-031-39821-6_38
CLC Number
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Consumer fraud is a significant problem that requires accurate and prompt detection. However, existing approaches such as periodic government inspections and consumer reports are inefficient and cumbersome. This paper proposes a novel approach, CFD-BERT, to detect consumer fraud automatically based on the group intelligence embedded in consumer reviews. CFD-BERT exploits the correlation between consumer reviews and official regulations to accurately mine consumer fraud patterns, and fine-tunes the pre-trained model BERT to better capture their semantics and thereby detect fraudulent behavior. Experimental evaluations on real-world datasets confirm the effectiveness of CFD-BERT in fraud detection. To explore its potential application and usefulness in real-world scenarios, an empirical study was conducted with CFD-BERT on 143,587 reviews from the preceding three months. The results confirmed that CFD-BERT can serve as an auxiliary tool that provides early warnings to regulators and consumers.
Pages: 451-456
Number of Pages: 6
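
The paper itself does not include code, but the core step the abstract describes, fine-tuning BERT as a text classifier over consumer reviews, follows a standard recipe. The sketch below shows one minimal way to do it with PyTorch and Hugging Face Transformers; the checkpoint (bert-base-uncased), the binary fraud/benign labelling scheme, the toy reviews, and all hyperparameters are illustrative assumptions, not the authors' published configuration.

import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertForSequenceClassification, BertTokenizerFast

class ReviewDataset(Dataset):
    # Wraps (review text, label) pairs as tensors for fine-tuning.
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True,
                             padding="max_length", max_length=max_len,
                             return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        return {"input_ids": self.enc["input_ids"][i],
                "attention_mask": self.enc["attention_mask"][i],
                "labels": self.labels[i]}

# Hypothetical toy data standing in for labelled consumer reviews.
texts = ["The advertised discount was never applied at checkout.",
         "Fast delivery, and the product matched its description."]
labels = [1, 0]  # 1 = potential consumer fraud, 0 = benign (assumed scheme)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # adds a fresh classification head

loader = DataLoader(ReviewDataset(texts, labels, tokenizer),
                    batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):  # a few epochs is typical when fine-tuning BERT
    for batch in loader:
        optimizer.zero_grad()
        out = model(**batch)   # passing labels makes the model return a loss
        out.loss.backward()    # cross-entropy over the fraud/benign classes
        optimizer.step()

In practice the labelled examples would come from reviews matched against official regulations, as the abstract describes, and the fine-tuned classifier would then be run over incoming reviews to flag potential fraud for regulators and consumers.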