Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning

Citations: 0
Authors
Liu, Chong [1 ]
Zhu, Yuqing [1 ]
Chaudhuri, Kamalika [2 ]
Wang, Yu-Xiang [1 ]
Affiliations
[1] UC Santa Barbara, Santa Barbara, CA 93106 USA
[2] Univ Calif San Diego, La Jolla, CA USA
Source
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021, Vol. 130
DOI: not available
CLC Number: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
The Private Aggregation of Teacher Ensembles (PATE) framework is one of the most promising recent approaches in differentially private learning. Existing theoretical analysis shows that PATE consistently learns any VC class in the realizable setting, but falls short of explaining its success in more general cases where the error rate of the optimal classifier is bounded away from zero. We fill in this gap by introducing the Tsybakov Noise Condition (TNC) and establishing stronger and more interpretable learning bounds. These bounds provide new insights into when PATE works and improve over existing results even in the narrower realizable setting. We also investigate the compelling idea of using active learning to save privacy budget. The novel components in the proofs include a more refined analysis of the majority voting classifier, which could be of independent interest, and an observation that the synthetic "student" learning problem is nearly realizable by construction under the Tsybakov noise condition.
Pages: 10
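
To make the aggregation step described in the abstract concrete, the following is a minimal, hypothetical Python sketch of PATE-style noisy majority voting over teacher predictions. The function name, parameter names, and the choice of Laplace noise with scale 1/gamma are illustrative assumptions in the spirit of the original PATE mechanism, not the specific construction analysed in this paper.

# Minimal sketch of PATE-style noisy majority-vote aggregation (assumed setup:
# teacher predictions are integer class labels; Laplace noise with scale
# 1/gamma is added to the per-class vote counts before taking the argmax).
import numpy as np

def noisy_majority_vote(teacher_preds, num_classes, gamma, rng=None):
    """Aggregate teacher votes with Laplace noise and return the noisy argmax.

    teacher_preds : array of shape (num_teachers,), integer class labels
    num_classes   : number of possible labels
    gamma         : noise parameter; larger gamma means less noise (and less privacy)
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# Example: 250 teachers labelling one query point in a 10-class problem.
rng = np.random.default_rng(0)
teacher_preds = rng.integers(0, 10, size=250)
label = noisy_majority_vote(teacher_preds, num_classes=10, gamma=0.1, rng=rng)
print("privately aggregated label:", label)

The privately generated labels produced by such an aggregator define the synthetic "student" learning problem referred to in the abstract; the paper's argument is that, under the Tsybakov noise condition, this student problem is nearly realizable by construction.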