InfoCensor: An Information-Theoretic Framework against Sensitive Attribute Inference and Demographic Disparity

Cited by: 5
Authors
Zheng, Tianhang [1]
Li, Baochun [1]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
Source
ASIA CCS'22: PROCEEDINGS OF THE 2022 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY | 2022
Keywords
Information Theory; Attribute Inference; Demographic Disparity
DOI
10.1145/3488932.3517402
CLC number
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Deep learning sits at the forefront of many ongoing advances in a variety of learning tasks. Despite its superior accuracy in benign environments, deep learning suffers from adversarial vulnerability and privacy leakage (e.g., sensitive attribute inference) in adversarial environments. Moreover, many deep learning systems exhibit discriminatory behavior against certain groups of subjects (e.g., demographic disparity). In this paper, we propose a unified information-theoretic framework to defend against sensitive attribute inference and mitigate demographic disparity in deep learning for the model partitioning scenario, by minimizing two mutual information terms. We prove that as one mutual information term decreases, an upper bound on the chance for any adversary to infer the sensitive attribute from model representations also decreases. Likewise, the extent of demographic disparity is bounded by the other mutual information term. Since direct optimization of the mutual information is intractable, we also propose a tractable Gaussian mixture based method and a Gumbel-Softmax trick based method for estimating the two mutual information terms. Extensive evaluations in a variety of application domains, including computer vision and natural language processing, demonstrate that our framework outperforms the existing baselines overall.
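The Gumbel-Softmax trick mentioned in the abstract replaces non-differentiable categorical sampling with a smooth, temperature-controlled relaxation, which is what makes gradient-based optimization of discrete terms tractable. The following is a minimal illustrative sketch of that relaxation only, not the authors' mutual-information estimator; the function name `gumbel_softmax` and its parameters are assumptions for illustration.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a differentiable 'soft' one-hot sample from a categorical
    distribution parameterized by unnormalized logits.

    tau: temperature; lower values push the output closer to one-hot.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via the inverse-CDF transform of uniform samples
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    # Perturb logits with Gumbel noise, then apply a temperature-scaled softmax
    y = (logits + g) / tau
    y = np.exp(y - y.max())  # subtract max for numerical stability
    return y / y.sum()

# One relaxed sample from a 3-way categorical distribution
probs = gumbel_softmax(np.array([2.0, 1.0, 0.1]), tau=0.5)
print(probs)
```

As tau approaches 0 the output approaches a hard one-hot sample (recovering the Gumbel-Max trick), while larger tau yields smoother, lower-variance gradients, a standard trade-off when this relaxation is used inside a training objective.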
Pages: 437-451
Number of pages: 15