Trusted AI in Multiagent Systems: An Overview of Privacy and Security for Distributed Learning

Cited by: 16
Authors
Ma, Chuan [1 ,2 ]
Li, Jun [3 ]
Wei, Kang [3 ,4 ]
Liu, Bo [5 ]
Ding, Ming [6 ]
Yuan, Long [7 ]
Han, Zhu [8 ,9 ]
Vincent Poor, H. [10 ]
Affiliations
[1] Zhejiang Lab, Hangzhou 311121, Peoples R China
[2] Southeast Univ, Key Lab Comp Network & Informat Integrat, Minist Educ, Nanjing 211189, Peoples R China
[3] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210096, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[5] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW 2007, Australia
[6] CSIRO, Data61, Sydney, NSW 2015, Australia
[7] Nanjing Univ Sci & Technol, Sch Comp Sci, Nanjing 210096, Peoples R China
[8] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77004 USA
[9] Kyung Hee Univ, Dept Comp Sci & Engn, Seoul 446701, South Korea
[10] Princeton Univ, Dept Elect & Comp Engn, Princeton, NJ 08544 USA
Funding
U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
Distributed machine learning (ML); federated learning (FL); multiagent systems; privacy; security; trusted artificial intelligence (AI); FINITE-TIME CONSENSUS; NEURAL-NETWORKS; DE-ANONYMIZATION; ATTACKS; MODEL; CHALLENGES; FRAMEWORK; SERVICES;
DOI
10.1109/JPROC.2023.3306773
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Motivated by the advancing computational capacity of distributed end-user equipment (UE), as well as increasing concerns about sharing private data, there has been considerable recent interest in machine learning (ML) and artificial intelligence (AI) that can be processed on distributed UEs. In this paradigm, parts of an ML process are outsourced to multiple distributed UEs, and the processed information is then aggregated at a certain level by a central server, turning a centralized ML process into a distributed one and bringing significant benefits. However, this new distributed ML paradigm also raises new privacy and security risks. In this article, we survey the emerging security and privacy risks of distributed ML from the perspective of information exchange levels, which are defined according to the key steps of an ML process: 1) the level of preprocessed data; 2) the level of learning models; 3) the level of extracted knowledge; and 4) the level of intermediate results. We explore and analyze the potential threats at each information exchange level based on an overview of current state-of-the-art attack mechanisms and then discuss possible defense methods against such threats. Finally, we conclude the survey with an outlook on the challenges and possible directions for future research in this critical area.
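The model-level information exchange described in the abstract can be pictured with a minimal federated-averaging sketch: each simulated client trains a small logistic-regression model on its own private data, and only the resulting model weights are sent to a server for averaging. This is an illustrative sketch under simplifying assumptions (synthetic data, hypothetical helper names such as local_update); it is not code from the surveyed paper.

# Minimal federated-averaging sketch (model-level information exchange).
# Names such as local_update and clients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic-regression SGD on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w

# Synthetic private datasets held by three user devices (UEs).
true_w = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = (X @ true_w + 0.1 * rng.normal(size=50) > 0).astype(float)
    clients.append((X, y))

# Server-side loop: broadcast the global model, collect local models, average.
global_w = np.zeros(3)
for rnd in range(10):
    local_models = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_models, axis=0)   # model-level aggregation

print("Aggregated global model:", global_w)

The locally trained weights exchanged in this loop are the kind of model-level information whose leakage and manipulation the survey analyzes; defenses such as differentially private noise addition or secure aggregation typically operate at exactly this exchange point.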
Pages: 1097 - 1132
Page count: 36