SENTINEL: Securing Indoor Localization Against Adversarial Attacks With Capsule Neural Networks

Cited by: 0
Authors
Gufran, Danish [1 ]
Anandathirtha, Pooja [1 ]
Pasricha, Sudeep [1 ]
Affiliations
[1] Colorado State Univ, Dept Elect & Comp Engn, Ft Collins, CO 80523 USA
Funding
U.S. National Science Foundation
Keywords
Location awareness; Training; Fluctuations; Working environment noise; Neural networks; Fingerprint recognition; Real-time systems; Indoor environment; Wireless fidelity; Resilience; Adversarial attacks; adversarial training; capsule neural networks; device heterogeneity; evil twin attacks; man-in-the-middle attacks; rogue access points (APs); Wi-Fi received signal strength (RSS) fingerprinting; algorithm
DOI
10.1109/TCAD.2024.3446717
CLC Number
TP3 [Computing technology; computer technology]
Discipline Code
0812
Abstract
With the increasing demand for edge-device-powered location-based services in indoor environments, Wi-Fi received signal strength (RSS) fingerprinting has become popular, given the unavailability of GPS indoors. However, achieving robust and efficient indoor localization faces several challenges: RSS fluctuations caused by dynamic changes in indoor environments and the heterogeneity of edge devices both diminish localization accuracy. While advances in machine learning (ML) have shown promise in mitigating these effects, robust indoor localization remains an open problem. Additionally, emerging adversarial attacks on ML-enhanced indoor localization systems, especially those mounted through malicious or rogue access points (APs), can deceive ML models and further increase localization errors. To address these challenges, we present SENTINEL, a novel embedded ML framework that uses modified capsule neural networks to bolster the resilience of indoor localization solutions against adversarial attacks, device heterogeneity, and dynamic RSS fluctuations. We also introduce RSSRogueLoc, a novel dataset capturing the effects of rogue APs in several real-world indoor environments. Experimental evaluations demonstrate that SENTINEL achieves significant improvements, with up to 3.5× lower mean error and 3.4× lower worst-case error than state-of-the-art frameworks under simulated adversarial attacks. SENTINEL also achieves up to 2.8× lower mean error and 2.7× lower worst-case error than state-of-the-art frameworks when evaluated on the real-world RSSRogueLoc dataset.
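To make the architecture sketched in the abstract concrete: the core idea is to route a Wi-Fi RSS fingerprint (one signal-strength value per visible AP) through capsule layers, with one output capsule per indoor reference point, so that each capsule's vector length serves as a location confidence. The following is a minimal, illustrative PyTorch sketch of such a capsule classifier with dynamic routing-by-agreement; it is not the authors' SENTINEL code, and the layer sizes, capsule dimensions, routing iteration count, and input shape (100 APs, 50 reference points) are all assumptions made for the example.

# Minimal, illustrative capsule-network classifier for Wi-Fi RSS fingerprints.
# NOT the authors' SENTINEL implementation: all sizes below are assumptions.
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    # Capsule nonlinearity: keeps vector orientation, maps length into [0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class RoutingCapsules(nn.Module):
    # Fully connected capsule layer with dynamic routing-by-agreement.
    def __init__(self, in_caps, in_dim, out_caps, out_dim, iters=3):
        super().__init__()
        self.iters = iters
        # One learned transform per (input capsule, output capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, out_caps, in_caps, out_dim, in_dim))

    def forward(self, u):                        # u: (B, in_caps, in_dim)
        u = u[:, None, :, :, None]               # (B, 1, in_caps, in_dim, 1)
        u_hat = (self.W @ u).squeeze(-1)         # prediction vectors per pair
        b = torch.zeros(*u_hat.shape[:3], 1, device=u.device)
        for _ in range(self.iters):              # routing by agreement
            c = b.softmax(dim=1)                 # coupling coefficients
            v = squash((c * u_hat).sum(dim=2))   # (B, out_caps, out_dim)
            b = b + (u_hat * v[:, :, None, :]).sum(-1, keepdim=True)
        return v

class RSSCapsNet(nn.Module):
    # RSS scan -> one capsule per reference point; capsule length = confidence.
    def __init__(self, n_aps=100, n_locations=50):
        super().__init__()
        self.primary = nn.Linear(n_aps, 16 * 8)  # 16 primary capsules of dim 8
        self.routing = RoutingCapsules(in_caps=16, in_dim=8,
                                       out_caps=n_locations, out_dim=16)

    def forward(self, rss):                      # rss: (B, n_aps), scaled dBm
        u = squash(self.primary(rss).view(-1, 16, 8))
        return self.routing(u).norm(dim=-1)      # (B, n_locations)

# Usage: predict the most likely reference point for a batch of (fake) scans.
model = RSSCapsNet(n_aps=100, n_locations=50)
scores = model(torch.rand(4, 100))
print(scores.argmax(dim=1))                      # predicted location indices

A note on why capsules plausibly fit this problem: because routing-by-agreement weights each primary capsule's contribution by how consistently it predicts the output capsule, a handful of perturbed RSS values (e.g., injected by a rogue AP) tend to be down-weighted rather than dominating the prediction, which is the usual intuition for using capsule networks to gain attack resilience.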
Pages: 4021 - 4032
Page count: 12
Related Papers (50 total)
  • [1] Secure Indoor Localization Against Adversarial Attacks Using DCGAN
    Yan, Qingli
    Xiong, Wang
    Wang, Hui-Ming
    IEEE COMMUNICATIONS LETTERS, 2025, 29 (01) : 130 - 134
  • [2] Securing Networks Against Adversarial Domain Name System Tunneling Attacks Using Hybrid Neural Networks
    Ness, Stephanie
IEEE ACCESS, 2025, 13 : 46697 - 46709
  • [3] Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters
    El-Allami, Rida
    Marchisio, Alberto
    Shafique, Muhammad
    Alouani, Ihsen
PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021: 774 - 779
  • [4] SANGRIA: Stacked Autoencoder Neural Networks With Gradient Boosting for Indoor Localization
    Gufran, Danish
    Tiku, Saideep
    Pasricha, Sudeep
    IEEE EMBEDDED SYSTEMS LETTERS, 2024, 16 (02) : 142 - 145
  • [5] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [6] SeVuc: A study on the Security Vulnerabilities of Capsule Networks against adversarial attacks
    Marchisio, Alberto
    Nanfa, Giorgio
    Khalid, Faiq
    Hanif, Muhammad Abdullah
    Martina, Maurizio
    Shafique, Muhammad
    MICROPROCESSORS AND MICROSYSTEMS, 2023, 96
  • [7] Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity
    Aquino, Bernardo
    Rahnama, Arash
    Seiler, Peter
    Lin, Lizhen
    Gupta, Vijay
    IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 2341 - 2346
  • [8] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (02) : 131 - 141
  • [9] RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks
    Marchisio, Alberto
    De Marco, Antonio
    Colucci, Alessio
    Martina, Maurizio
    Shafique, Muhammad
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023