EdDSA Shield: Fortifying Machine Learning Against Data Poisoning Threats in Continual Learning

Cited by: 0
Authors
Nageswari, Akula [1 ]
Sanjeevulu, Vasundra [2 ]
Affiliations
[1] Jawaharlal Nehru Technol Univ Ananthapur, Ananthapuramu, India
[2] JNTUA Coll Engn, Ananthapuramu, India
Source
PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON DATA SCIENCE, MACHINE LEARNING AND APPLICATIONS, VOL 1, ICDSMLA 2023 | 2025, Vol. 1273
Keywords
Continual learning; Machine learning; EdDSA; Data poisoning; Defense; Concept drift
DOI
10.1007/978-981-97-8031-0_107
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Continual learning requires machine learning models to adapt and evolve as new data and experiences arrive. However, this dynamic nature also introduces a vulnerability to data poisoning attacks, in which maliciously crafted input can lead to misleading model updates. In this research, we propose a novel approach that uses the EdDSA digital signature scheme to safeguard the integrity of data streams in continual learning scenarios. By leveraging EdDSA, we establish a robust defense against data poisoning attempts, maintaining the model's trustworthiness and performance over time. Through extensive experimentation on diverse datasets and continual learning scenarios, we demonstrate the efficacy of the proposed approach. The results indicate a significant reduction in susceptibility to data poisoning attacks, even in the presence of sophisticated adversaries.
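This record does not detail the paper's mechanism, but the abstract's core idea, signing each incoming data batch so the learner can reject unauthenticated updates before they touch the model, can be sketched as follows. This is a minimal illustration using the Ed25519 API of the Python `cryptography` package; the helper names `sign_batch` and `ingest` and the commented-out `update_model` step are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: gate continual-learning updates on Ed25519 signatures.
# Only batches signed by the trusted data source's private key are accepted.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Trusted data source: generate a keypair; the public key is shared
# with the learner out of band.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_batch(batch: bytes) -> tuple[bytes, bytes]:
    """Attach an Ed25519 signature to a serialized training batch."""
    return batch, private_key.sign(batch)

def ingest(batch: bytes, signature: bytes) -> bool:
    """Verify a batch before it may trigger a model update."""
    try:
        public_key.verify(signature, batch)
    except InvalidSignature:
        return False  # unverified (possibly poisoned) batch is discarded
    # update_model(batch)  # hypothetical continual-learning update step
    return True

batch, sig = sign_batch(b"serialized training examples")
assert ingest(batch, sig)            # authentic batch accepted
assert not ingest(b"tampered", sig)  # tampered batch rejected
```

Under this kind of scheme, an adversary who can inject data into the stream but does not hold the source's private key cannot forge a valid signature, so poisoned batches never reach the update step; it does not, on its own, defend against a compromised signing source.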
Pages: 1018-1028
Number of pages: 11