Mixed-Privacy Forgetting in Deep Networks

Cited by: 56
Authors
Golatkar, Aditya [1 ,2 ]
Achille, Alessandro [1 ]
Ravichandran, Avinash [1 ]
Polito, Marzia [1 ]
Soatto, Stefano [1 ]
Affiliations
[1] Amazon Web Serv, Seattle, WA 98109 USA
[2] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2021), 2021
Keywords
DOI
10.1109/CVPR46437.2021.00085
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
We show that the influence of a subset of the training samples can be removed - or "forgotten" - from the weights of a network trained on large-scale image classification tasks, and we provide strong computable bounds on the amount of remaining information after forgetting. Inspired by real-world applications of forgetting techniques, we introduce a novel notion of forgetting in a mixed-privacy setting, where we know that a "core" subset of the training samples does not need to be forgotten. While this variation of the problem is conceptually simple, we show that working in this setting significantly improves the accuracy and guarantees of forgetting methods applied to vision classification tasks. Moreover, our method allows efficient removal of all information contained in non-core data by simply setting a subset of the weights to zero, with minimal loss in performance. We achieve these results by replacing a standard deep network with a suitable linear approximation. With opportune changes to the network architecture and training procedure, we show that such a linear approximation achieves performance comparable to the original network, and that the forgetting problem becomes quadratic and can be solved efficiently even for large models. Unlike previous forgetting methods for deep networks, ours can achieve close to state-of-the-art accuracy on large-scale vision tasks. In particular, we show that our method allows forgetting without having to trade off model accuracy.
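The mechanism summarized in the abstract can be illustrated with a minimal sketch (our own illustration, not the authors' released code): the network is replaced by its first-order Taylor expansion around pretrained "core" weights, only the user-specific perturbation is trained on forgettable data, and removing all information from that data reduces to zeroing the perturbation. The model architecture, parameter names, and shapes below are illustrative assumptions.

# Sketch of a linearized model, assuming pretrained core weights w_core that
# never need forgetting and a user-specific perturbation dw trained on
# forgettable data:  f_lin(x) = f(w_core, x) + J_w f(w_core, x) . dw
import jax
import jax.numpy as jnp

def model(w, x):
    # Placeholder two-layer MLP; any differentiable architecture works here.
    h = jnp.tanh(x @ w["W1"] + w["b1"])
    return h @ w["W2"] + w["b2"]

def linearized_model(w_core, dw, x):
    # First-order Taylor expansion around the core weights, computed with a
    # Jacobian-vector product. The output is linear in dw, so a squared loss
    # is quadratic in dw and can be minimized (and later unlearned) efficiently.
    f0, jvp_out = jax.jvp(lambda w: model(w, x), (w_core,), (dw,))
    return f0 + jvp_out

def forget_all_user_data(dw):
    # Forgetting everything in the non-core data amounts to resetting the
    # user-specific weights to zero; the core weights are untouched.
    return jax.tree_util.tree_map(jnp.zeros_like, dw)

# Usage with random illustrative parameters:
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
w_core = {"W1": jax.random.normal(k1, (8, 16)), "b1": jnp.zeros(16),
          "W2": jax.random.normal(k2, (16, 3)), "b2": jnp.zeros(3)}
dw = jax.tree_util.tree_map(jnp.zeros_like, w_core)
x = jnp.ones((4, 8))
print(linearized_model(w_core, dw, x))  # equals model(w_core, x) when dw == 0

Because the linearized model is affine in dw, forgetting a subset of the non-core samples (rather than all of them) becomes a quadratic problem in dw, which is what makes the approach tractable for large models.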
Pages: 792 - 801
Page count: 10
Related papers
50 items in total
  • [31] Enlightened remembering and the paradox of forgetting: from Dante to data privacy
    O'Callaghan, Patrick
    LAW AND HUMANITIES, 2023, 17 (02): : 210 - 227
  • [32] Privacy-Preserving Deep Reinforcement Learning in Vehicle Ad Hoc Networks
    Ahmed, Usman
    Lin, Jerry Chun-Wei
    Srivastava, Gautam
    IEEE CONSUMER ELECTRONICS MAGAZINE, 2022, 11 (06) : 41 - 48
  • [33] VeriDIP: Verifying Ownership of Deep Neural Networks Through Privacy Leakage Fingerprints
    Hu, Aoting
    Lu, Zhigang
    Xie, Renjie
    Xue, Minhui
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 2568 - 2584
  • [34] A Neuron Noise-Injection Technique for Privacy Preserving Deep Neural Networks
    Adesuyi, Tosin A.
    Kim, Byeong Man
    OPEN COMPUTER SCIENCE, 2020, 10 (01) : 137 - 152
  • [35] A layer-wise Perturbation based Privacy Preserving Deep Neural Networks
    Adesuyi, Tosin A.
    Kim, Byeong Man
    2019 1ST INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE IN INFORMATION AND COMMUNICATION (ICAIIC 2019), 2019, : 389 - 394
  • [36] Privacy-Preserving Computation Offloading for Parallel Deep Neural Networks Training
    Mao, Yunlong
    Hong, Wenbo
    Wang, Heng
    Li, Qun
    Zhong, Sheng
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, 32 (07) : 1777 - 1788
  • [37] Deep Neural Networks in Medical Imaging: Privacy Preservation, Image Generation and Applications
    Stoian, Diana Ioana
    Leonte, Horia Andrei
    Vizitiu, Anamaria
    Suciu, Constantin
    Itu, Lucian Mihai
    APPLIED SCIENCES-BASEL, 2023, 13 (21):
  • [38] DNA Privacy: Analyzing Malicious DNA Sequences Using Deep Neural Networks
    Bae, Ho
    Min, Seonwoo
    Choi, Hyun-Soo
    Yoon, Sungroh
    IEEE-ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, 2022, 19 (02) : 888 - 898
  • [39] Overcoming catastrophic forgetting in neural networks
    Kirkpatrick, James
    Pascanu, Razvan
    Rabinowitz, Neil
    Veness, Joel
    Desjardins, Guillaume
    Rusu, Andrei A.
    Milan, Kieran
    Quan, John
    Ramalho, Tiago
    Grabska-Barwinska, Agnieszka
    Hassabis, Demis
    Clopath, Claudia
    Kumaran, Dharshan
    Hadsell, Raia
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2017, 114 (13) : 3521 - 3526
  • [40] Active forgetting in the learning by RBF networks
    Nakayama, H
    Hattori, A
    KNOWLEDGE-BASED INTELLIGENT INFORMATION ENGINEERING SYSTEMS & ALLIED TECHNOLOGIES, PTS 1 AND 2, 2001, 69 : 1488 - 1492