Model-Protected Multi-Task Learning

Cited by: 10
Authors
Liang, Jian [1 ,2 ]
Liu, Ziqi [3 ]
Zhou, Jiayu [4 ]
Jiang, Xiaoqian [5 ]
Zhang, Changshui [1 ,2 ]
Wang, Fei [6 ]
Affiliations
[1] Tsinghua Univ THUAI, Inst Artificial Intelligence, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Automat, Beijing Natl Res Ctr Informat Sci & Technol BNRis, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[3] Ant Financial Serv Grp, AI Dept, Hangzhou 310013, Zhejiang, Peoples R China
[4] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
[5] Univ Texas Hlth Sci Ctr Houston, Sch Biomed Informat, Houston, TX 77030 USA
[6] Weill Cornell Med Coll, Dept Populat Hlth Sci, New York, NY 10065 USA
Funding
US National Science Foundation; US National Institutes of Health;
Keywords
Task analysis; Covariance matrices; Privacy; Security; Data models; Resource management; Multi-task learning; model protection; differential privacy; covariance matrix; low-rank subspace learning; REGRESSION;
DOI
10.1109/TPAMI.2020.3015859
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together. In contrast, in single-task learning (STL) each individual task is learned independently. MTL often leads to better-trained models because it can leverage the commonalities among related tasks. However, because MTL algorithms can "leak" information across the models of different tasks, MTL poses a potential security risk. Specifically, an adversary may participate in the MTL process through one task and thereby acquire the model information for another task. Previously proposed privacy-preserving MTL methods protect data instances rather than models, and some of them may underperform compared with STL methods. In this paper, we propose a privacy-preserving MTL framework that prevents information in each model from leaking to other models, based on a perturbation of the covariance matrix of the model matrix. We instantiate the framework with two popular MTL approaches, namely, learning the low-rank and the group-sparse patterns of the model matrix. Our algorithms are guaranteed not to underperform compared with STL methods. We build our methods upon tools from differential privacy; privacy guarantees and utility bounds are provided, and heterogeneous privacy budgets are considered. The experiments demonstrate that our algorithms outperform baseline methods constructed from existing privacy-preserving MTL methods on the proposed model-protection problem.
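The core mechanism the abstract describes, perturbing the covariance matrix of the model matrix under differential privacy, can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm; it is a generic Gaussian-mechanism sketch under stated assumptions: each task's model vector is a column of `W`, columns are clipped to bound sensitivity, and a symmetrized Gaussian noise matrix is added to `W Wᵀ`. The function name, the clipping scheme, and the sensitivity bound used for the noise scale are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def perturbed_model_covariance(W, epsilon, delta, clip_norm=1.0, seed=0):
    """Illustrative sketch: differentially private covariance of a model
    matrix W (d x m), one column per task.

    Assumptions (not from the paper): neighboring model matrices differ in
    one task's column, columns are L2-clipped to clip_norm, and the Gaussian
    mechanism's sensitivity is taken as clip_norm**2.
    """
    rng = np.random.default_rng(seed)
    d, _ = W.shape
    # Clip each task's model vector so the change in W @ W.T caused by one
    # column is bounded in Frobenius norm.
    norms = np.linalg.norm(W, axis=0)
    W = W * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    cov = W @ W.T
    # Standard Gaussian-mechanism noise scale for the assumed sensitivity.
    sigma = clip_norm**2 * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=(d, d))
    noise = (noise + noise.T) / np.sqrt(2.0)  # symmetrize, keeping variance
    return cov + noise
```

Downstream MTL steps (e.g., extracting a low-rank subspace) would then operate on the perturbed covariance rather than on the raw model matrix, which is what shields one task's model from the others.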
Pages: 1002 - 1019
Page count: 18
Related Papers
50 records in total
  • [1] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [3] Multi-Task Clustering with Model Relation Learning
    Zhang, Xiaotong
    Zhang, Xianchao
    Liu, Han
    Luo, Jiebo
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 3132 - 3140
  • [4] Multi-Task Model and Feature Joint Learning
    Li, Ya
    Tian, Xinmei
    Liu, Tongliang
    Tao, Dacheng
    PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015, : 3643 - 3649
  • [5] Hierarchical Gaussian Processes model for multi-task learning
    Li, Ping
    Chen, Songcan
    PATTERN RECOGNITION, 2018, 74 : 134 - 144
  • [6] Unsupervised learning of multi-task deep variational model
    Tan, Lu
    Li, Ling
    Liu, Wan-Quan
    An, Sen-Jian
    Munyard, Kylie
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2022, 87
  • [7] MULTI-TASK LEARNING WITH LOCALIZED GENERALIZATION ERROR MODEL
    Li, Wendi
    Zhu, Yi
    Wang, Ting
    Ng, Wing W. Y.
    PROCEEDINGS OF 2019 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS (ICMLC), 2019, : 380 - 387
  • [8] A Regression Model Tree Algorithm by Multi-task Learning
    Jo, Seeun
    Jun, Chi-Hyuck
    INDUSTRIAL ENGINEERING AND MANAGEMENT SYSTEMS, 2021, 20 (02): : 163 - 171
  • [9] Multi-Task Learning Model for Kazakh Query Understanding
    Haisa, Gulizada
    Altenbek, Gulila
    SENSORS, 2022, 22 (24)
  • [10] Task-Aware Dynamic Model Optimization for Multi-Task Learning
    Choi, Sujin
    Jin, Hyundong
    Kim, Eunwoo
    IEEE ACCESS, 2023, 11 : 137709 - 137717