Differential Privacy High-Dimensional Data Publishing Based on Feature Selection and Clustering

Cited: 2
Authors
Chu, Zhiguang [1 ,2 ]
He, Jingsha [1 ]
Zhang, Xiaolei [2 ]
Zhang, Xing [2 ]
Zhu, Nafei [1 ]
Affiliations
[1] Beijing Univ Technol, Sch Software Engn, Beijing 100124, Peoples R China
[2] Key Lab Secur Network & Data Ind Internet Liaoning, Jinzhou 121000, Peoples R China
Keywords
high-dimensional data; feature selection; random forest; clustering; differential privacy; PREDICTION; ALGORITHM;
DOI
10.3390/electronics12091959
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
As a product of social information, high-dimensional data raise both privacy and usability as core issues in the field of privacy protection. Feature selection is a commonly used dimensionality-reduction technique for high-dimensional data. However, some feature selection methods process only the features selected by the algorithm and ignore the information associated with those features, so the usability of the final results is low. This paper proposes a hybrid method based on feature selection and cluster analysis to address the data utility and privacy problems that high-dimensional data face in the actual publishing process. The proposed method has three stages: (1) feature screening; (2) clustering analysis of the features; and (3) adaptive noise addition. The experiments use the Wisconsin Diagnostic Breast Cancer (WDBC) data set from the UCI Machine Learning Repository and evaluate the proposed method by classification accuracy. The results show that the algorithm protects sensitive information in the original data while retaining the data's contribution to the diagnostic results.
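The three stages described above can be sketched as a small pipeline. This is a minimal illustration, not the authors' exact algorithm: the function name, the importance threshold (top-k features), the use of k-means for the feature-clustering stage, the per-feature range as a sensitivity proxy, and the even split of the privacy budget across clusters are all assumptions made for the example. scikit-learn's `load_breast_cancer` provides the WDBC data set used in the paper.

```python
# Hedged sketch of the three-stage method from the abstract:
# (1) screen features by random-forest importance,
# (2) cluster the retained features,
# (3) add Laplace noise whose scale adapts to the per-cluster budget.
import numpy as np
from sklearn.datasets import load_breast_cancer  # the WDBC data set
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

def private_release(X, y, k_features=10, n_clusters=3, epsilon=1.0, seed=0):
    rng = np.random.default_rng(seed)

    # Stage 1: keep the k most important features (random-forest screening).
    rf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[-k_features:]
    Xs = X[:, keep]

    # Stage 2: cluster the retained features (columns) by their value profiles.
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(Xs.T)

    # Stage 3: adaptive Laplace noise. The total budget epsilon is split
    # evenly across clusters (an illustrative choice), and the sensitivity
    # of each feature is approximated by its observed range.
    X_noisy = Xs.astype(float).copy()
    eps_c = epsilon / n_clusters
    for c in range(n_clusters):
        for j in np.where(labels == c)[0]:
            sens = Xs[:, j].max() - Xs[:, j].min()
            X_noisy[:, j] += rng.laplace(0.0, sens / eps_c, size=Xs.shape[0])
    return X_noisy, keep

X, y = load_breast_cancer(return_X_y=True)
X_pub, kept = private_release(X, y)
```

One could then train a classifier on `X_pub` and compare its accuracy against the non-private baseline, which is how the paper evaluates utility.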
Pages: 16
References
47 in total
[1]  
Amaricai A, 2017, STUD INFORM CONTROL, V26, P43
[2]  
[Anonymous], 2009, ACM S THEORY COMPUT
[3]   Brain-Computer Interface for wheelchair control operations: An approach based on Fast Fourier Transform and On-Line Sequential Extreme Learning Machine [J].
Ansari, Md Fahim ;
Edla, Damodar Reddy ;
Dodia, Shubham ;
Kuppili, Venkatanareshbabu .
CLINICAL EPIDEMIOLOGY AND GLOBAL HEALTH, 2019, 7 (03) :274-278
[4]  
Billard L, 2017, WIRES COMPUT STAT, V9, DOI 10.1002/wics.1405
[5]   Random forests [J].
Breiman, L .
MACHINE LEARNING, 2001, 45 (01) :5-32
[6]  
Bugliesi M., 2006, Encyclopedia of Cryptography and Security, P1
[7]   Feature Selection Based on Density Peak Clustering Using Information Distance Measure [J].
Cai, Jie ;
Chao, Shilong ;
Yang, Sheng ;
Wang, Shulin ;
Luo, Jiawei .
INTELLIGENT COMPUTING THEORIES AND APPLICATION, ICIC 2017, PT II, 2017, 10362 :125-131
[8]   A clustering-based feature selection framework for handwritten Indic script classification [J].
Chatterjee, Iman ;
Ghosh, Manosij ;
Singh, Pawan Kumar ;
Sarkar, Ram ;
Nasipuri, Mita .
EXPERT SYSTEMS, 2019, 36 (06)
[9]   Combining clustering of variables and feature selection using random forests [J].
Chavent, Marie ;
Genuer, Robin ;
Saracco, Jerome .
COMMUNICATIONS IN STATISTICS-SIMULATION AND COMPUTATION, 2021, 50 (02) :426-445
[10]  
Chiang YH, 2019, IEEE SYS MAN CYBERN, P1678, DOI [10.1109/SMC.2019.8914538, 10.1109/smc.2019.8914538]