A privacy preservation framework for feedforward-designed convolutional neural networks

Cited by: 11
Authors
Li, De [1 ,2 ]
Wang, Jinyan [1 ,2 ]
Li, Qiyu [2 ]
Hu, Yuhang [2 ]
Li, Xianxian [1 ,2 ]
Institutions
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin, Peoples R China
[2] Guangxi Normal Univ, Sch Comp Sci & Engn, Guilin, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Differential privacy; Convolutional neural networks; Feedforward-designed; Feature selection; Over-fitting
DOI
10.1016/j.neunet.2022.08.005
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A feedforward-designed convolutional neural network (FF-CNN) is an interpretable neural network with low training complexity. Unlike a neural network trained with backpropagation (BP) and an optimizer (e.g., stochastic gradient descent (SGD) or Adam), an FF-CNN obtains its model parameters in a single feed-forward pass using two data-statistics methods: subspace approximation with adjusted bias (Saab) and least-squares regression. Models based on FF-CNN training have achieved outstanding performance in image classification and point cloud processing. In this study, we analyze and verify that the FF-CNN training process carries a risk of user privacy leakage and that existing privacy-preserving methods for model gradients or loss functions do not apply to FF-CNN models. We therefore propose a securely forward-designed convolutional neural network algorithm (SFF-CNN) to protect the privacy and security of data providers for the FF-CNN model. First, we propose the DPSaab algorithm, which adds calibrated noise to the one-stage Saab transform in the FF-CNN design for improved protection performance. Second, because the added noise brings a risk of model over-fitting and thereby further increases the possibility of privacy leakage, we propose the SJS algorithm to filter the input features of the fully connected layers. Finally, we theoretically prove that the proposed algorithm satisfies differential privacy and experimentally demonstrate that it provides strong privacy protection. The proposed algorithm outperforms the compared deep-learning privacy-preserving algorithms in terms of utility and robustness.
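To make the DPSaab idea described above concrete, the following is a minimal Python sketch of a one-stage Saab transform whose responses are perturbed with the Gaussian mechanism. The PCA-based kernel construction, the sensitivity bound, and all function and parameter names (saab_transform, dp_saab, eps, delta, num_kernels) are illustrative assumptions, not the authors' exact method; the paper's precise noise placement and calibration may differ.

    import numpy as np

    def saab_transform(patches, num_kernels):
        """One-stage Saab transform: one DC kernel plus PCA-derived AC kernels."""
        n, d = patches.shape
        dc_kernel = np.ones(d) / np.sqrt(d)           # DC (mean) component
        dc = patches @ dc_kernel                      # DC responses
        residual = patches - np.outer(dc, dc_kernel)  # remove DC before PCA
        # AC kernels = top principal components of the residual; parameters come
        # from data statistics in one feed-forward pass, with no backpropagation.
        _, _, vt = np.linalg.svd(residual - residual.mean(0), full_matrices=False)
        ac_kernels = vt[:num_kernels - 1]
        features = np.column_stack([dc, residual @ ac_kernels.T])
        # Bias adjustment: shift each response channel to be non-negative.
        bias = np.maximum(0.0, -features.min(axis=0))
        return features + bias

    def dp_saab(patches, num_kernels, eps=1.0, delta=1e-5, sensitivity=1.0, rng=None):
        """Hypothetical DP variant: Gaussian noise added to the Saab responses."""
        rng = np.random.default_rng() if rng is None else rng
        feats = saab_transform(patches, num_kernels)
        # Standard Gaussian-mechanism calibration (assumed, not from the paper).
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        return feats + rng.normal(0.0, sigma, size=feats.shape)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 25))   # e.g., flattened 5x5 image patches
    noisy_features = dp_saab(X, num_kernels=6, eps=2.0, rng=rng)

In this sketch, calibrating sigma to the query sensitivity is what lets the perturbed Saab responses satisfy (eps, delta)-differential privacy, mirroring the abstract's claim that noise is added at the one-stage Saab transform rather than to gradients or the loss function.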
Pages: 14-27
Number of pages: 14