Model-Protected Multi-Task Learning

Times Cited: 12
Authors
Liang, Jian [1 ,2 ]
Liu, Ziqi [3 ]
Zhou, Jiayu [4 ]
Jiang, Xiaoqian [5 ]
Zhang, Changshui [1 ,2 ]
Wang, Fei [6 ]
Affiliations
[1] Tsinghua Univ THUAI, Inst Artificial Intelligence, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Automat, Beijing Natl Res Ctr Informat Sci & Technol BNRis, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[3] Ant Financial Serv Grp, AI Dept, Hangzhou 310013, Zhejiang, Peoples R China
[4] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
[5] Univ Texas Hlth Sci Ctr Houston, Sch Biomed Informat, Houston, TX 77030 USA
[6] Weill Cornell Med Coll, Dept Populat Hlth Sci, New York, NY 10065 USA
Funding
US National Institutes of Health; US National Science Foundation
Keywords
Task analysis; Covariance matrices; Privacy; Security; Data models; Resource management; Multi-task learning; model protection; differential privacy; covariance matrix; low-rank subspace learning; REGRESSION;
DOI
10.1109/TPAMI.2020.3015859
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together, whereas in single-task learning (STL) each task is learned independently. MTL often yields better-trained models because they can leverage the commonalities among related tasks. However, because MTL algorithms can "leak" information across the models of different tasks, MTL poses a potential security risk: an adversary may participate in the MTL process through one task and thereby acquire the model information of another task. Previously proposed privacy-preserving MTL methods protect data instances rather than models, and some of them may underperform STL methods. In this paper, we propose a privacy-preserving MTL framework that prevents information in each model from leaking to the other models, based on a perturbation of the covariance matrix of the model matrix. We study two popular MTL approaches for instantiation, namely, learning the low-rank and the group-sparse patterns of the model matrix. Our algorithms are guaranteed not to underperform STL methods. We build our methods upon tools from differential privacy, provide privacy guarantees and utility bounds, and consider heterogeneous privacy budgets. Experiments demonstrate that, on the proposed model-protection problem, our algorithms outperform baseline methods constructed from existing privacy-preserving MTL methods.
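To make the abstract's mechanism concrete, below is a minimal, illustrative sketch of releasing a differentially private covariance matrix of a shared model matrix via Gaussian-mechanism noise. This is not the paper's actual algorithm: the function name, the sensitivity parameter, and the choice of Gaussian noise (rather than the specific perturbation the paper derives) are assumptions made purely for illustration.

```python
# Illustrative sketch (assumed, not the paper's exact mechanism):
# perturb the covariance of a model matrix W (features x tasks)
# before sharing it, in the style of the Gaussian mechanism.
import numpy as np

def perturbed_model_covariance(W, epsilon, delta, sensitivity):
    """Return a symmetric, noisy version of W @ W.T.

    W           : (d, m) model matrix, one column per task.
    epsilon,
    delta       : differential-privacy parameters.
    sensitivity : assumed L2 sensitivity of the covariance to one task's model.
    """
    d = W.shape[0]
    cov = W @ W.T                                   # second-moment matrix shared across tasks
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = np.random.normal(0.0, sigma, size=(d, d))
    noise = (noise + noise.T) / 2.0                 # symmetrize so the released matrix stays symmetric
    return cov + noise

# Example: 5 features, 3 tasks
W = np.random.randn(5, 3)
C_noisy = perturbed_model_covariance(W, epsilon=1.0, delta=1e-5, sensitivity=1.0)
```

Downstream MTL steps (e.g., extracting a low-rank shared subspace) would then operate on the perturbed covariance rather than on the raw per-task models.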
Pages: 1002-1019
Number of pages: 18