ACTIVE PRIVACY-UTILITY TRADE-OFF AGAINST A HYPOTHESIS TESTING ADVERSARY

Cited by: 6
Authors
Erdemir, Ecenaz [1 ]
Dragotti, Pier Luigi [1 ]
Gunduz, Deniz [1 ]
Affiliations
[1] Imperial Coll London, Dept Elect & Elect Engn, London, England
Source
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021) | 2021
Funding
European Research Council;
Keywords
Privacy; hypothesis testing; active learning; actor-critic deep reinforcement learning;
DOI
10.1109/ICASSP39728.2021.9414608
Chinese Library Classification
O42 [Acoustics];
Discipline Classification Codes
070206; 082403;
Abstract
We consider a user releasing her data, which contains some personal information, in return for a service. We model the user's personal information as two correlated random variables: one, called the secret variable, is to be kept private, while the other, called the useful variable, is to be disclosed for utility. We consider active sequential data release, where at each time step the user chooses from among a finite set of release mechanisms, each revealing some information about the user's personal information, i.e., the true hypotheses, albeit with different statistics. The user manages the data release in an online fashion so that the maximum amount of information is revealed about the latent useful variable, while the adversary's confidence about the sensitive variable is kept below a predefined level. For the utility, we consider both the probability of correct detection of the useful variable and the mutual information (MI) between the useful variable and the released data. We formulate both problems as Markov decision processes (MDPs), and solve them numerically by advantage actor-critic (A2C) deep reinforcement learning (RL).
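The sequential release problem described in the abstract can be illustrated with a compact sketch. In the following Python example, all priors, likelihoods, and the confidence threshold are hypothetical, and a myopic one-step-lookahead policy stands in for the paper's learned A2C policy; the sketch only shows the MDP ingredients: the state is the user's Bayesian belief over the joint hypothesis (secret S, useful U), the actions are the release mechanisms, and the privacy constraint caps the adversary's expected confidence about S.

```python
import numpy as np

def bayes_update(belief, p_y1, y):
    """Posterior over the joint hypothesis (S, U) after observing y."""
    lik = p_y1 if y == 1 else 1.0 - p_y1
    post = belief * lik
    return post / post.sum()

def confidences(belief):
    """Adversary's max-posterior confidence on S and on U."""
    return belief.sum(axis=1).max(), belief.sum(axis=0).max()

def choose_mechanism(belief, mechanisms, secret_cap):
    """One-step lookahead: maximise the expected confidence on U while
    keeping the expected confidence on S below secret_cap."""
    best, best_util = None, -1.0
    for a, p_y1 in enumerate(mechanisms):
        exp_sec = exp_util = 0.0
        for y in (0, 1):
            lik = p_y1 if y == 1 else 1.0 - p_y1
            p_y = (belief * lik).sum()  # P(Y = y) under the current belief
            if p_y == 0.0:
                continue
            sec, util = confidences(bayes_update(belief, p_y1, y))
            exp_sec += p_y * sec
            exp_util += p_y * util
        if exp_sec <= secret_cap and exp_util > best_util:
            best, best_util = a, exp_util
    return best  # None if every mechanism violates the privacy cap

# Correlated binary secret S and useful U: joint prior P(S = s, U = u).
belief = np.array([[0.4, 0.1],
                   [0.1, 0.4]])

# Release mechanisms, each specified by P(Y = 1 | S = s, U = u), indexed [s, u]:
mech_U = np.array([[0.9, 0.2],   # depends only on U (reveals utility)
                   [0.9, 0.2]])
mech_S = np.array([[0.9, 0.9],   # depends only on S (reveals the secret)
                   [0.2, 0.2]])

a = choose_mechanism(belief, [mech_U, mech_S], secret_cap=0.75)
print(a)  # selects mech_U (index 0); mech_S breaches the cap
```

Note that because S and U are correlated in this toy prior, even the utility-oriented mechanism raises the expected confidence on S above its prior value of 0.5 (to roughly 0.71 here), which is exactly the coupling the paper's constrained formulation has to manage; the actual method trains an A2C agent on this MDP rather than using the greedy rule above.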
Pages: 2660-2664
Page count: 5