A privacy preservation framework for feedforward-designed convolutional neural networks

Cited: 11
Authors
Li, De [1 ,2 ]
Wang, Jinyan [1 ,2 ]
Li, Qiyu [2 ]
Hu, Yuhang [2 ]
Li, Xianxian [1 ,2 ]
Affiliations
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin, Peoples R China
[2] Guangxi Normal Univ, Sch Comp Sci & Engn, Guilin, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Differential privacy; Convolutional neural networks; Feedforward-designed; Feature selection; Over-fitting; MODEL;
DOI
10.1016/j.neunet.2022.08.005
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A feedforward-designed convolutional neural network (FF-CNN) is an interpretable neural network with low training complexity. Unlike a neural network trained with backpropagation (BP) and an optimizer (e.g., stochastic gradient descent (SGD) or Adam), an FF-CNN obtains its model parameters in a single feedforward computation based on two data-statistics methods: subspace approximation with adjusted bias (Saab) and least-squares regression. Models based on FF-CNN training methods have achieved outstanding performance in image classification and point cloud data processing. In this study, we analyze and verify that the training process of an FF-CNN risks leaking user privacy, and that existing privacy-preserving methods that act on model gradients or loss functions do not apply to FF-CNN models. We therefore propose a securely forward-designed convolutional neural network algorithm (SFF-CNN) to protect the privacy and security of data providers under the FF-CNN model. First, we propose the DPSaab algorithm, which adds appropriately calibrated noise to the one-stage Saab transform in the FF-CNN design for improved protection performance. Second, because noise addition brings a risk of model over-fitting and thus further increases the possibility of privacy leakage, we propose the SJS algorithm to filter the input features of the model's fully connected layer. Finally, we theoretically prove that the proposed algorithm satisfies differential privacy, and we experimentally demonstrate that it provides strong privacy protection. The proposed algorithm outperforms the compared deep learning privacy-preserving algorithms in terms of utility and robustness. (C) 2022 Published by Elsevier Ltd.
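As a rough illustration of the general idea of perturbing a learned transform's coefficients to satisfy differential privacy (this is the standard Laplace mechanism, not the paper's DPSaab algorithm; the sensitivity bound, epsilon value, and toy coefficients below are all assumptions for illustration):

```python
import numpy as np

def laplace_mechanism(values, sensitivity, epsilon, rng=None):
    """Perturb `values` with Laplace noise of scale sensitivity/epsilon.

    This is the textbook epsilon-DP Laplace mechanism. For a real
    transform such as Saab, the L1 sensitivity of the coefficients
    would have to be derived; here it is simply assumed to be bounded.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=np.shape(values))

# Toy stand-in for one stage of transform coefficients.
coeffs = np.array([0.5, -1.2, 3.3])
noisy_coeffs = laplace_mechanism(coeffs, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means larger noise scale and stronger privacy at the cost of utility, which is the trade-off the paper's SJS feature-filtering step is described as mitigating.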
Pages: 14-27
Page count: 14