A Symmetric Projection Space and Adversarial Training Framework for Privacy-Preserving Machine Learning with Improved Computational Efficiency

Cited by: 0
Authors
Li, Qianqian [1 ]
Zhou, Shutian [1 ]
Zeng, Xiangrong [1 ]
Shi, Jiaqi [1 ,2 ]
Lin, Qianye [1 ,3 ]
Huang, Chenjia [1 ,4 ]
Yue, Yuchen [1 ,5 ]
Jiang, Yuyao [1 ]
Lv, Chunli [1 ]
Affiliations
[1] China Agr Univ, Beijing 100083, Peoples R China
[2] Beijing Foreign Studies Univ, Beijing 100089, Peoples R China
[3] Univ Int Business & Econ, Beijing 100029, Peoples R China
[4] China Univ Polit Sci & Law, Beijing 102249, Peoples R China
[5] Peking Univ, Beijing 100871, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2025, Vol. 15, Issue 6
Funding
National Natural Science Foundation of China
Keywords
privacy protection; data security; adversarial training; computational efficiency; high-dimensional data compression; SECURITY;
DOI
10.3390/app15063275
Chinese Library Classification (CLC)
O6 [Chemistry]
Discipline classification code
0703
Abstract
This paper proposes a data security training framework based on a symmetric projection space and adversarial training, aimed at addressing the privacy leakage and computational efficiency issues that current privacy-protection technologies face when processing sensitive data. By designing a new projection loss function and combining autoencoders with adversarial training, the proposed method effectively balances privacy protection and model utility. Experimental results show that, on financial time-series tasks, the model trained with the projection loss achieves a precision of 0.95, a recall of 0.91, and an accuracy of 0.93, significantly outperforming the traditional cross-entropy loss. On image tasks, the projection loss yields a precision of 0.93, a recall of 0.90, an accuracy of 0.91, and mAP@50 and mAP@75 of 0.91 and 0.90, respectively, demonstrating a clear advantage on complex tasks. Furthermore, experiments on different hardware platforms (Raspberry Pi, Jetson, and an NVIDIA 3080 GPU) show that the method runs well on low-compute devices and offers significant gains in computational efficiency on high-performance GPUs, indicating good scalability. Overall, the results validate the proposed method's advantages in both data privacy protection and computational efficiency.
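The abstract describes the method only at a high level (an autoencoder whose latent representation is constrained by a "symmetric projection space" loss and trained against an adversary) and does not give the exact loss or architecture. The sketch below is therefore an illustrative, assumed reading in PyTorch: `projection_loss`, the small encoder/decoder/adversary networks, the projection matrix `P`, and the weights `lam`/`gamma` are hypothetical stand-ins, not the authors' implementation. It shows one common way such a framework is wired: the adversary tries to predict a sensitive attribute from the latent code, while the encoder is trained to reconstruct the input, fool the adversary, and keep its code near a symmetric, idempotent projection of itself.

```python
# Illustrative sketch only: the paper's exact projection loss and networks are
# not specified in the abstract; every component below is an assumed stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    def __init__(self, in_dim=64, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent_dim))

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    def __init__(self, latent_dim=16, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, out_dim))

    def forward(self, z):
        return self.net(z)


class Adversary(nn.Module):
    """Tries to recover a binary sensitive attribute from the latent code."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 1))

    def forward(self, z):
        return self.net(z)


def projection_loss(z, P):
    # Hypothetical "symmetric projection" penalty: distance of the latent code
    # from its image under a symmetric, idempotent projection matrix P.
    return F.mse_loss(z, z @ P)


def train_step(x, s, enc, dec, adv, P, opt_main, opt_adv, lam=1.0, gamma=0.1):
    # (1) Adversary step: learn to predict the sensitive attribute s
    #     from the (detached) latent code.
    z = enc(x).detach()
    loss_adv = F.binary_cross_entropy_with_logits(adv(z).squeeze(-1), s)
    opt_adv.zero_grad()
    loss_adv.backward()
    opt_adv.step()

    # (2) Main step: reconstruct the input, fool the adversary (the privacy
    #     term enters with a negative sign), and stay near the projection space.
    z = enc(x)
    loss_main = (F.mse_loss(dec(z), x)
                 - lam * F.binary_cross_entropy_with_logits(adv(z).squeeze(-1), s)
                 + gamma * projection_loss(z, P))
    opt_main.zero_grad()
    loss_main.backward()
    opt_main.step()
    return loss_main.item(), loss_adv.item()


if __name__ == "__main__":
    latent_dim, k = 16, 8
    Q, _ = torch.linalg.qr(torch.randn(latent_dim, k))  # orthonormal columns
    P = Q @ Q.T                                         # symmetric, idempotent
    enc, dec, adv = Encoder(), Decoder(), Adversary()
    opt_main = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
    x = torch.randn(128, 64)                 # toy batch of feature vectors
    s = torch.randint(0, 2, (128,)).float()  # toy binary sensitive attribute
    print(train_step(x, s, enc, dec, adv, P, opt_main, opt_adv))
```

In this reading, P = QQ^T with orthonormal Q is symmetric (P = P^T) and idempotent (P^2 = P), so the penalty measures only the energy of the latent code outside the chosen subspace; the trade-off between utility, privacy, and the projection constraint is governed by `lam` and `gamma`. How the actual framework defines the projection space and weights these terms is detailed in the paper itself.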
Pages: 26