Differential privacy for learning vector quantization

Cited by: 7
Authors
Brinkrolf, Johannes [1]
Goepfert, Christina [1]
Hammer, Barbara [1]
Affiliations
[1] Bielefeld Univ, CITEC Ctr Excellence, Bielefeld, Germany
Keywords
Differential privacy; Learning vector quantization
DOI
10.1016/j.neucom.2018.11.095
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Prototype-based machine learning methods such as learning vector quantization (LVQ) offer flexible classification tools, which represent a classification in terms of typical prototypes. This representation leads to a particularly intuitive classification scheme, since prototypes can be inspected by a human partner in the same way as data points. Yet, it bears the risk of revealing private information included in the training data, since the individual information of a single training data point can significantly influence the location of a prototype. In this contribution, we investigate the question of how to algorithmically extend LVQ such that it provably obeys privacy constraints as offered by the notion of so-called differential privacy. More precisely, we demonstrate the sensitivity of LVQ to single data points and hence the need for its extension to private variants in the case of possibly sensitive training data. We investigate three technologies which have been proposed in the context of differential privacy, and we extend these technologies to LVQ schemes. We investigate the effectiveness and efficiency of these schemes for various data sets, and we evaluate their scalability and robustness with regard to the choice of meta-parameters and the characteristics of the training sets. Interestingly, one algorithm, which has been proposed in the literature due to its beneficial mathematical properties, does not scale well with data dimensionality, while two alternative techniques, which are based on simpler principles, display good results in practical settings. (C) 2019 Published by Elsevier B.V.
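The paper develops three differential-privacy mechanisms for LVQ; the specifics are not reproduced in this record. As a rough illustration of the simplest such idea, output perturbation, the sketch below trains a plain LVQ1 model and then releases the prototypes with Laplace noise calibrated to a sensitivity bound. The function names and the `sensitivity` argument are illustrative assumptions, not the paper's algorithms; in particular, a valid sensitivity bound for LVQ must be derived, not assumed.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=10, seed=0):
    """Plain LVQ1: for each sample, move the nearest prototype toward it
    if the labels match, otherwise push that prototype away."""
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            dists = np.linalg.norm(P - X[i], axis=1)   # distance to every prototype
            j = int(np.argmin(dists))                  # winning prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            P[j] += sign * lr * (X[i] - P[j])          # attract or repel
    return P

def privatize_prototypes(P, sensitivity, epsilon, seed=1):
    """Output perturbation: add per-coordinate Laplace noise of scale
    sensitivity / epsilon before releasing the prototypes.
    `sensitivity` is a hypothetical bound supplied by the caller."""
    rng = np.random.default_rng(seed)
    return P + rng.laplace(scale=sensitivity / epsilon, size=P.shape)
```

The intuition this illustrates matches the abstract's point: a single training point moves a prototype, so the learned positions leak information unless noise proportional to that influence (and inversely proportional to the privacy budget epsilon) is added.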
Pages: 125-136 (12 pages)
Related papers (50 items)
  • [1] Effects of Quantization on Federated Learning with Local Differential Privacy
    Kim, Muah
    Gunlu, Onur
    Schaefer, Rafael F.
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022: 921 - 926
  • [2] The Effect of Quantization in Federated Learning: A Rényi Differential Privacy Perspective
    Kang, Tianqu
    Liu, Lumin
    He, Hengtao
    Zhang, Jun
    Song, S. H.
    Letaief, Khaled B.
    2024 IEEE INTERNATIONAL MEDITERRANEAN CONFERENCE ON COMMUNICATIONS AND NETWORKING, MEDITCOM 2024, 2024: 233 - 238
  • [3] Learning vector quantization
    Kohonen, T.
    Neural Networks, 1988, 1 (1 SUPPL)
  • [4] Convergence of Stochastic Vector Quantization and Learning Vector Quantization with Bregman Divergences
    Mavridis, Christos N.
    Baras, John S.
    IFAC PAPERSONLINE, 2020, 53 (02): 2214 - 2219
  • [5] Alternative learning vector quantization
    Wu, KL
    Yang, MS
    PATTERN RECOGNITION, 2006, 39 (03) : 351 - 362
  • [6] Learning Vector Quantization networks
    Matera, F
    SUBSTANCE USE & MISUSE, 1998, 33 (02) : 271 - 282
  • [7] Generalized learning vector quantization
    Sato, A
    Yamada, K
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 8: PROCEEDINGS OF THE 1995 CONFERENCE, 1996, 8 : 423 - 429
  • [8] Regression Learning Vector Quantization
    Grbovic, Mihajlo
    Vucetic, Slobodan
    2009 9TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING, 2009: 788 - 793
  • [9] Soft learning vector quantization
    Seo, S
    Obermayer, K
    NEURAL COMPUTATION, 2003, 15 (07) : 1589 - 1604
  • [10] Learning vector quantization: A review
    Karayiannis, Nicolaos B.
    International Journal of Smart Engineering System Design, 1997, 1 (01): 33 - 58