Fine-Tuning Pre-Trained Model to Extract Undesired Behaviors from App Reviews

Cited: 0
Authors
Zhang, Wenyu [1 ]
Wang, Xiaojuan [1 ]
Lai, Shanyan [1 ]
Ye, Chunyang [1 ]
Zhou, Hui [1 ]
Affiliations
[1] Hainan Univ, Haikou, Hainan, Peoples R China
Source
2022 IEEE 22ND INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY, QRS | 2022
Keywords
User Comment; Undesired Behavior; App Market
DOI
10.1109/QRS57517.2022.00115
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
Mobile application markets usually enact policies that describe in detail the minimum requirements an application must comply with. User comments on mobile applications contain a large amount of information that can be used to identify an app's violations of market policies in a cost-effective way. Existing state-of-the-art methods match user comments to violations of market policies using well-designed syntax rules; however, such rules cannot fully capture the semantics of user comments and do not generalize to scenarios they do not cover. To address this issue, we propose an innovative method, UBC-BERT, to detect undesired behaviors from user comments based on their semantics. By incorporating sentence embeddings with attention, we train a classification model for 21 groups of undesired behaviors by fine-tuning the pre-trained BERT-BASE model. The experimental results show that our solution outperforms the baseline solutions in terms of precision (up to 60.5% higher).
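The abstract describes fine-tuning the pre-trained BERT-BASE model into a 21-way classifier over user comments. As a rough illustration only, here is a minimal Hugging Face Transformers sketch of such a fine-tuning setup; it does not reproduce the paper's sentence-embeddings-with-attention component, and the model checkpoint name, hyperparameters, and sample reviews are all assumptions.

```python
# Hypothetical sketch: fine-tune a BERT-BASE checkpoint as a 21-way
# classifier of undesired behaviors in app reviews. The standard
# sequence-classification head stands in for the paper's architecture.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

NUM_CLASSES = 21  # 21 groups of undesired behaviors (per the abstract)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_CLASSES
)

class ReviewDataset(Dataset):
    """(user comment, undesired-behavior label) pairs; data is illustrative."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

# Toy examples standing in for labeled app reviews.
train_ds = ReviewDataset(
    ["This app keeps showing ads even after I paid.",
     "It secretly uploads my contacts without asking."],
    [3, 7],  # hypothetical class ids
)
loader = DataLoader(train_ds, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        out = model(**batch)  # computes cross-entropy loss over 21 classes
        out.loss.backward()
        optimizer.step()
```

At inference time, `model(**tokenizer(comment, return_tensors="pt")).logits.argmax(-1)` would map a new comment to one of the 21 behavior groups.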
Pages: 1125-1134
Page count: 10
Related Papers
50 records in total
  • [1] Fine-Tuning Pre-Trained Model for Consumer Fraud Detection from Consumer Reviews
    Tang, Xingli
    Li, Keqi
    Huang, Liting
    Zhou, Hui
    Ye, Chunyang
    DATABASE AND EXPERT SYSTEMS APPLICATIONS, DEXA 2023, PT II, 2023, 14147 : 451 - 456
  • [2] Pruning Pre-trained Language Models Without Fine-Tuning
    Jiang, Ting
    Wang, Deqing
    Zhuang, Fuzhen
    Xie, Ruobing
    Xia, Feng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 594 - 605
  • [3] Span Fine-tuning for Pre-trained Language Models
    Bao, Rongzhou
    Zhang, Zhuosheng
    Zhao, Hai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
  • [4] Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
    Liao, Baohao
    Tan, Shaomu
    Monz, Christof
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [5] Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs
    Zhang, Zeren
    Li, Xingjian
    Hong, Tao
    Wang, Tianyang
    Ma, Jinwen
    Xiong, Haoyi
    Xu, Cheng-Zhong
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT V, 2023, 14173 : 293 - 308
  • [6] Waste Classification by Fine-Tuning Pre-trained CNN and GAN
    Alsabei, Amani
    Alsayed, Ashwaq
    Alzahrani, Manar
    Al-Shareef, Sarah
    INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2021, 21 (08): 65 - 70
  • [7] Fine-Tuning Pre-Trained Language Models with Gaze Supervision
    Deng, Shuwen
    Prasse, Paul
    Reich, David R.
    Scheffer, Tobias
    Jager, Lena A.
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2: SHORT PAPERS, 2024, : 217 - 224
  • [8] Sentiment Analysis Using Pre-Trained Language Model With No Fine-Tuning and Less Resource
    Kit, Yuheng
    Mokji, Musa Mohd
    IEEE ACCESS, 2022, 10 : 107056 - 107065
  • [9] HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation
    Yuan, Hongyi
    Yuan, Zheng
    Tan, Chuanqi
    Huang, Fei
    Huang, Songfang
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 3246 - 3264
  • [10] Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples
    Zhou, Ziqi
    Li, Minghui
    Liu, Wei
    Hu, Shengshan
    Zhang, Yechao
    Wang, Wei
    Xue, Lulu
    Zhang, Leo Yu
    Yao, Dezhong
    Jin, Hai
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 3015 - 3033