Balancing Privacy and Attack Utility: Calibrating Sample Difficulty for Membership Inference Attacks in Transfer Learning

Cited by: 2
Authors
Liu, Shuwen [1 ]
Qian, Yongfeng [1 ]
Hao, Yixue [2 ]
Affiliations
[1] China Univ Geosci, Beijing, Peoples R China
[2] Huazhong Univ Sci & Technol, Wuhan, Hubei, Peoples R China
Source
2024 54TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS-SUPPLEMENTAL VOLUME, DSN-S 2024 | 2024
Keywords
membership inference attack; data poisoning attack; difficulty calibration
DOI
10.1109/DSN-S60304.2024.00046
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
The growing prominence of transfer learning in domains such as healthcare and finance highlights its efficacy in enhancing machine learning models. However, conventional membership inference attacks (MIA) often perform poorly against transfer learning models trained under a normal fit, i.e., without overfitting. To address this challenge, we propose a novel approach called PC-MIA. It generates multiple poisoned reference models from poisoned samples, then uses these models to calibrate sample difficulty and reveal each sample's true hardness, thereby improving the accuracy of MIA. Empirical evaluations on real-world datasets and diverse model architectures demonstrate that our approach significantly improves membership inference accuracy.
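For intuition, below is a minimal sketch of the difficulty-calibration step the abstract describes, assuming PyTorch classifiers. The function names (calibrated_scores, infer_membership), the zero decision threshold, and the use of plain cross-entropy loss are illustrative assumptions, not the paper's implementation; in particular, the step that poisons the reference models' training data is not shown.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def calibrated_scores(target_model, reference_models, x, y):
    """Loss-based difficulty calibration:
    score(x) = L_target(x) - mean_k L_ref_k(x).
    In PC-MIA the reference models are assumed to be trained on
    poisoned data so that member and non-member losses separate
    more cleanly; the poisoning itself happens elsewhere."""
    target_loss = F.cross_entropy(target_model(x), y, reduction="none")
    ref_loss = torch.stack(
        [F.cross_entropy(m(x), y, reduction="none") for m in reference_models]
    ).mean(dim=0)
    # A lower calibrated loss suggests the sample is "easier" for the
    # target model than its intrinsic difficulty predicts, i.e., a member.
    return target_loss - ref_loss

def infer_membership(scores, threshold=0.0):
    # Predict "member" when the calibrated loss falls below a threshold;
    # the threshold here is a placeholder one would tune on held-out data.
    return scores < threshold
```

The calibration term (the mean reference-model loss) is what corrects for per-sample difficulty: without it, intrinsically easy non-members and genuinely memorized members produce similar raw losses.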
Pages: 159-160
Number of pages: 2