Deep Reinforcement Learning for Partially Observable Data Poisoning Attack in Crowdsensing Systems

Cited by: 104
Authors
Li, Mohan [1 ]
Sun, Yanbin [1 ]
Lu, Hui [1 ]
Maharjan, Sabita [2 ,3 ]
Tian, Zhihong [1 ]
Affiliations
[1] Guangzhou Univ, Cyberspace Inst Adv Technol, Guangzhou 510006, Peoples R China
[2] Simula Metropolitan Ctr Digital Engn, Ctr Resilient Networks & Applicat, N-0167 Oslo, Norway
[3] Univ Oslo, Dept Informat, N-0316 Oslo, Norway
Source
IEEE INTERNET OF THINGS JOURNAL | 2020, Vol. 7, Issue 7
Keywords
Crowdsensing; Internet of Things; Data models; Reinforcement learning; Sensor systems; Mobile handsets; Crowdsensing systems; data poisoning attack; deep reinforcement learning; truth discovery; MOBILE; INTERNET;
DOI
10.1109/JIOT.2019.2962914
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Crowdsensing systems collect various types of data from sensors embedded in mobile devices owned by individuals. These individuals, commonly referred to as workers, complete tasks published by crowdsensing systems. Because of the relative lack of control over worker identities, crowdsensing systems are susceptible to data poisoning attacks, which interfere with data analysis results by injecting fake data that conflicts with the ground truth. Truth discovery frameworks such as TruthFinder can resolve data conflicts by evaluating the trustworthiness of data providers. To some extent, these frameworks make crowdsensing systems more robust, since they limit the impact of dirty data by down-weighting unreliable workers. However, previous work has shown that TruthFinder can still be affected by data poisoning attacks when the malicious workers have access to global information. In this article, we focus on partially observable data poisoning attacks in crowdsensing systems. We show that even if the malicious workers only have access to local information, they can find effective data poisoning strategies to interfere with crowdsensing systems that use TruthFinder. First, we formally model the problem of partially observable data poisoning attacks against crowdsensing systems. Then, we propose a data poisoning attack method based on deep reinforcement learning, which helps malicious workers compromise TruthFinder while concealing themselves. With this method, malicious workers can learn from their attack attempts and continuously evolve their poisoning strategies. Finally, we conduct experiments on real-life data sets to verify the effectiveness of the proposed method.
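For context, the TruthFinder framework referenced in the abstract iteratively estimates worker trustworthiness and fact confidence until the two stabilize. The following is a minimal Python sketch of that TruthFinder-style iteration, assuming a simplified scoring rule; the names claims, gamma, n_iters, and init_trust are illustrative assumptions and are not taken from the paper or from the original TruthFinder implementation.

import math
from collections import defaultdict

def truth_discovery(claims, n_iters=10, gamma=0.3, init_trust=0.8):
    """Simplified TruthFinder-style iteration (illustrative only).

    claims: list of (worker_id, object_id, value) tuples reported by workers.
    Returns (estimated truth per object, final trust score per worker).
    """
    workers = {w for w, _, _ in claims}
    trust = {w: init_trust for w in workers}   # worker trustworthiness
    confidence = {}                            # (object, value) -> confidence

    for _ in range(n_iters):
        # 1) Fact confidence from the trustworthiness of supporting workers.
        support = defaultdict(list)
        for w, o, v in claims:
            support[(o, v)].append(trust[w])
        for fact, trusts in support.items():
            score = -sum(math.log(max(1e-9, 1.0 - t)) for t in trusts)
            confidence[fact] = 1.0 / (1.0 + math.exp(-gamma * score))

        # 2) Worker trustworthiness as the average confidence of their claims.
        per_worker = defaultdict(list)
        for w, o, v in claims:
            per_worker[w].append(confidence[(o, v)])
        trust = {w: sum(c) / len(c) for w, c in per_worker.items()}

    # For each object, take the highest-confidence value as the estimated truth.
    best = {}
    for (o, v), c in confidence.items():
        if o not in best or c > best[o][1]:
            best[o] = (v, c)
    return {o: v for o, (v, _) in best.items()}, trust

A partially observable attacker, as described in the abstract, would only see its own claims and the system's feedback on them, and would use a reinforcement learning policy to decide which fake values to report so that down-weighting of this kind fails to isolate the malicious workers.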
Pages: 6266 - 6278
Number of pages: 13
Related Papers
50 records in total
  • [41] High-Performance UAV Crowdsensing: A Deep Reinforcement Learning Approach
    Wei, Kaimin
    Huang, Kai
    Wu, Yongdong
    Li, Zhetao
    He, Hongliang
    Zhang, Jilian
    Chen, Jinpeng
    Guo, Song
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (19) : 18487 - 18499
  • [42] Fuzzy Reinforcement Learning Control for Decentralized Partially Observable Markov Decision Processes
    Sharma, Rajneesh
    Spaan, Matthijs T. J.
    IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ 2011), 2011, : 1422 - 1429
  • [43] A reinforcement learning scheme for a partially-observable multi-agent game
    Ishii, S
    Fujita, H
    Mitsutake, M
    Yamazaki, T
    Matsuda, J
    Matsuno, Y
    MACHINE LEARNING, 2005, 59 (1-2) : 31 - 54
  • [44] Partially observable environment estimation with uplift inference for reinforcement learning based recommendation
    Wenjie Shang
    Qingyang Li
    Zhiwei Qin
    Yang Yu
    Yiping Meng
    Jieping Ye
    Machine Learning, 2021, 110 : 2603 - 2640
  • [45] A Reinforcement Learning Scheme for a Partially-Observable Multi-Agent Game
    Shin Ishii
    Hajime Fujita
    Masaoki Mitsutake
    Tatsuya Yamazaki
    Jun Matsuda
    Yoichiro Matsuno
    Machine Learning, 2005, 59 : 31 - 54
  • [46] Partially observable environment estimation with uplift inference for reinforcement learning based recommendation
    Shang, Wenjie
    Li, Qingyang
    Qin, Zhiwei
    Yu, Yang
    Meng, Yiping
    Ye, Jieping
    MACHINE LEARNING, 2021, 110 (09) : 2603 - 2640
  • [47] Adaptive Compensation for Robotic Joint Failures Using Partially Observable Reinforcement Learning
    Pham, Tan-Hanh
    Aikins, Godwyll
    Truong, Tri
    Nguyen, Kim-Doang
    ALGORITHMS, 2024, 17 (10)
  • [48] Data Assimilation in Chaotic Systems Using Deep Reinforcement Learning
    Hammoud, Mohamad Abed El Rahman
    Raboudi, Naila
    Titi, Edriss S.
    Knio, Omar
    Hoteit, Ibrahim
    JOURNAL OF ADVANCES IN MODELING EARTH SYSTEMS, 2024, 16 (08)
  • [49] Learning key steps to attack deep reinforcement learning agents
    Yu, Chien-Min
    Chen, Ming-Hsin
    Lin, Hsuan-Tien
    MACHINE LEARNING, 2023, 112 (05) : 1499 - 1522
  • [50] Learning key steps to attack deep reinforcement learning agents
    Chien-Min Yu
    Ming-Hsin Chen
    Hsuan-Tien Lin
    Machine Learning, 2023, 112 : 1499 - 1522