Fine-Tuning Pre-Trained Model for Consumer Fraud Detection from Consumer Reviews

Times Cited: 0
|
Authors
Tang, Xingli [1 ]
Li, Keqi [1 ]
Huang, Liting [1 ]
Zhou, Hui [1 ]
Ye, Chunyang [1 ]
Affiliations
[1] Hainan Univ, Haikou, Hainan, Peoples R China
Keywords
Consumer fraud detection; Consumer reviews; Regulation;
DOI
10.1007/978-3-031-39821-6_38
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Codes
081202; 0835;
Abstract
Consumer fraud is a significant problem that requires accurate and prompt detection. However, existing approaches such as periodic government inspections and consumer reports are inefficient and cumbersome. This paper proposes a novel approach, CFD-BERT, that detects consumer fraud automatically based on the collective intelligence in consumer reviews. CFD-BERT exploits the correlation between consumer reviews and official regulations to accurately mine consumer fraud patterns, and fine-tunes the pretrained model BERT to better capture their semantics and thereby detect fraudulent behaviors. Experimental evaluations on real-world datasets confirm the effectiveness of CFD-BERT in fraud detection. To explore its potential application and usefulness in real-world scenarios, an empirical study was conducted with CFD-BERT on 143,587 reviews from the preceding three months. The results confirmed that CFD-BERT can serve as an auxiliary tool to provide early warnings to relevant regulators and consumers.
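To make the described pipeline concrete, below is a minimal sketch of the fine-tuning step the abstract outlines: adapting a pretrained BERT model for binary fraud/non-fraud classification of review text with the Hugging Face Transformers library. The checkpoint name (bert-base-uncased), toy reviews, labels, and hyperparameters are illustrative assumptions, not the authors' reported CFD-BERT configuration, and the regulation-driven pattern mining that would supply real labels is omitted.

    # Sketch: fine-tune a pretrained BERT for fraud/non-fraud review classification.
    # Checkpoint, data, and hyperparameters are illustrative assumptions.
    import torch
    from transformers import BertTokenizerFast, BertForSequenceClassification

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

    # Toy labeled reviews (1 = suspected fraud, 0 = benign); in CFD-BERT the
    # labels would come from correlating reviews with official regulations.
    texts = ["Seller shipped counterfeit goods and refused the promised refund.",
             "Fast delivery, product exactly as described."]
    labels = torch.tensor([1, 0])
    enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for _ in range(3):  # a few passes over the toy batch
        loss = model(**enc, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Inference: probability that each review signals consumer fraud.
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(**enc).logits, dim=-1)
    print(probs[:, 1])

In a deployed setting, reviews scoring above a tuned threshold on the fraud class could be surfaced to regulators as early warnings, matching the auxiliary-tool role the abstract envisions.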
Pages: 451 - 456
Number of Pages: 6
Related Papers
50 records in total
  • [21] Improving automatic cyberbullying detection in social network environments by fine-tuning a pre-trained sentence transformer language model
    Gutierrez-Batista, Karel
    Gomez-Sanchez, Jesica
    Fernandez-Basso, Carlos
    SOCIAL NETWORK ANALYSIS AND MINING, 2024, 14 (01)
  • [22] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning
    Chen, Hanjie
    Zheng, Guoqing
    Awadallah, Ahmed Hassan
    Ji, Yangfeng
    PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 144 - 153
  • [23] Fraud detection in online consumer reviews
    Hu, Nan
    Liu, Ling
    Sambamurthy, Vallabh
    DECISION SUPPORT SYSTEMS, 2011, 50 (03) : 614 - 626
  • [24] Revisiting k-NN for Fine-Tuning Pre-trained Language Models
    Li, Lei
    Chen, Jing
    Tian, Bozhong
    Zhang, Ningyu
    CHINESE COMPUTATIONAL LINGUISTICS, CCL 2023, 2023, 14232 : 327 - 338
  • [25] Fine-tuning the hyperparameters of pre-trained models for solving multiclass classification problems
    Kaibassova, D.
    Nurtay, M.
    Tau, A.
    Kissina, M.
    COMPUTER OPTICS, 2022, 46 (06) : 971 - 979
  • [26] Improving Pre-Trained Weights through Meta-Heuristics Fine-Tuning
    de Rosa, Gustavo H.
    Roder, Mateus
    Papa, Joao Paulo
    dos Santos, Claudio F. G.
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021
  • [27] Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization
    Xie, Sang Michael
    Ma, Tengyu
    Liang, Percy
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [28] Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively
    Zhang, Haojie
    Li, Ge
    Li, Jia
    Zhang, Zhongjin
    Zhu, Yuqi
    Jin, Zhi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022
  • [29] An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models
    Liu, Xueqing
    Wang, Chi
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 2286 - 2300
  • [30] Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection
    Uppaal, Rheeya
    Hu, Junjie
    Li, Yixuan
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 12813 - 12832