Fine-Tuning Pre-Trained Model for Consumer Fraud Detection from Consumer Reviews

Cited: 0
Authors
Tang, Xingli [1 ]
Li, Keqi [1 ]
Huang, Liting [1 ]
Zhou, Hui [1 ]
Ye, Chunyang [1 ]
Affiliations
[1] Hainan Univ, Haikou, Hainan, Peoples R China
Keywords
Consumer fraud detection; Consumer reviews; Regulation
DOI
10.1007/978-3-031-39821-6_38
Chinese Library Classification (CLC)
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
Consumer fraud is a significant problem that requires accurate and prompt detection. However, existing approaches such as periodic government inspections and consumer reports are inefficient and cumbersome. This paper proposes a novel approach, CFD-BERT, to detect consumer fraud automatically based on the group intelligence embedded in consumer reviews. It exploits the correlation between consumer reviews and official regulations to accurately mine consumer fraud patterns, and fine-tunes the pre-trained model BERT to better capture their semantics, so that fraudulent behaviors can be detected. Experimental evaluations on real-world datasets confirm the effectiveness of CFD-BERT in fraud detection. To explore its potential application and usefulness in real-world scenarios, an empirical study was conducted with CFD-BERT on 143,587 reviews from the most recent three months. The results confirmed that CFD-BERT can serve as an auxiliary tool to provide early warnings to relevant regulators and consumers.
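The abstract does not spell out the fine-tuning setup, so the following is a minimal sketch of how a BERT checkpoint can be fine-tuned as a binary classifier over consumer reviews with the Hugging Face transformers library. The checkpoint name (bert-base-chinese), the toy review texts and fraud/non-fraud labels, and the hyperparameters are illustrative assumptions, not details taken from the paper.

    # Minimal sketch: fine-tune a BERT checkpoint as a binary classifier over
    # consumer reviews (1 = fraud-related complaint, 0 = ordinary review).
    # Checkpoint, data, and hyperparameters are assumptions for illustration.
    import torch
    from torch.utils.data import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    class ReviewDataset(Dataset):
        """Tokenizes (review text, label) pairs into model inputs."""
        def __init__(self, texts, labels, tokenizer, max_len=128):
            self.enc = tokenizer(texts, truncation=True,
                                 padding="max_length", max_length=max_len)
            self.labels = labels

        def __len__(self):
            return len(self.labels)

        def __getitem__(self, idx):
            item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[idx])
            return item

    # Hypothetical toy data standing in for a labelled review corpus.
    train_texts = ["The seller advertised pure wool but shipped polyester.",
                   "Fast delivery, the product matches the description."]
    train_labels = [1, 0]

    tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-chinese", num_labels=2)

    training_args = TrainingArguments(output_dir="cfd-bert-sketch",
                                      num_train_epochs=3,
                                      per_device_train_batch_size=16,
                                      learning_rate=2e-5)

    Trainer(model=model, args=training_args,
            train_dataset=ReviewDataset(train_texts, train_labels,
                                        tokenizer)).train()

At inference time, such a fine-tuned classifier can score incoming reviews so that likely fraud cases are surfaced for review, matching the early-warning use described in the abstract.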
Pages: 451-456
Page count: 6
Related papers
50 items in total
  • [1] Fine-Tuning Pre-Trained Model to Extract Undesired Behaviors from App Reviews
    Zhang, Wenyu
    Wang, Xiaojuan
    Lai, Shanyan
    Ye, Chunyang
    Zhou, Hui
    2022 IEEE 22ND INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY, QRS, 2022, : 1125 - 1134
  • [2] Pruning Pre-trained Language Models Without Fine-Tuning
    Jiang, Ting
    Wang, Deqing
    Zhuang, Fuzhen
    Xie, Ruobing
    Xia, Feng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 594 - 605
  • [3] Span Fine-tuning for Pre-trained Language Models
    Bao, Rongzhou
    Zhang, Zhuosheng
    Zhao, Hai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
  • [4] Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
    Liao, Baohao
    Tan, Shaomu
    Monz, Christof
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [5] Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs
    Zhang, Zeren
    Li, Xingjian
    Hong, Tao
    Wang, Tianyang
    Ma, Jinwen
    Xiong, Haoyi
    Xu, Cheng-Zhong
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT V, 2023, 14173 : 293 - 308
  • [6] Waste Classification by Fine-Tuning Pre-trained CNN and GAN
    Alsabei, Amani
    Alsayed, Ashwaq
    Alzahrani, Manar
    Al-Shareef, Sarah
    INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2021, 21 (08): : 65 - 70
  • [7] Fine-Tuning Pre-Trained Language Models with Gaze Supervision
    Deng, Shuwen
    Prasse, Paul
    Reich, David R.
    Scheffer, Tobias
    Jager, Lena A.
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2: SHORT PAPERS, 2024, : 217 - 224
  • [8] Disfluencies and Fine-Tuning Pre-trained Language Models for Detection of Alzheimer's Disease
    Yuan, Jiahong
    Bian, Yuchen
    Cai, Xingyu
    Huang, Jiaji
    Ye, Zheng
    Church, Kenneth
    INTERSPEECH 2020, 2020, : 2162 - 2166
  • [9] Sentiment Analysis Using Pre-Trained Language Model With No Fine-Tuning and Less Resource
    Kit, Yuheng
    Mokji, Musa Mohd
    IEEE ACCESS, 2022, 10 : 107056 - 107065
  • [10] HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation
    Yuan, Hongyi
    Yuan, Zheng
    Tan, Chuanqi
    Huang, Fei
    Huang, Songfang
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 3246 - 3264