Vulnerability of Clean-Label Poisoning Attack for Object Detection in Maritime Autonomous Surface Ships

Cited by: 2
Authors
Lee, Changui [1 ]
Lee, Seojeong [2 ]
Affiliations
[1] Korea Conform Labs, Software Testing Ctr, Chang Won 51395, South Korea
[2] Korea Maritime & Ocean Univ, Div Marine Syst Engn, Pusan 49112, South Korea
Keywords
object detection; cyberthreat; risk scenario; clean-label poisoning attack; poison frog;
DOI
10.3390/jmse11061179
CLC classification
U6 [Waterway transportation]; P75 [Ocean engineering];
Subject classification codes
0814; 081505; 0824; 082401;
Abstract
Artificial intelligence (AI) will play an important role in realizing maritime autonomous surface ships (MASSs). However, as a double-edged sword, this new technology also brings forth new threats. The purpose of this study is to raise awareness among stakeholders regarding the potential security threats posed by AI in MASSs. To achieve this, we propose a hypothetical attack scenario in which a clean-label poisoning attack is executed against an object detection model, causing boats to be misclassified as ferries and thus preventing the detection of pirates approaching a boat. We used the poison frog algorithm to generate poisoning instances and trained a YOLOv5 model on a mixture of clean and poisoned data. Despite the model's high overall accuracy, it misclassified boats as ferries owing to the poisoning of the target instance. Although the experiment was conducted under limited conditions, we confirmed vulnerabilities in the object detection algorithm. Such misclassification could lead to inaccurate AI decision making and accidents. The hypothetical scenario proposed in this study emphasizes the vulnerability of object detection models to clean-label poisoning attacks and the need for mitigation strategies against security threats posed by AI in the maritime industry.
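The poison frog attack mentioned in the abstract crafts a "clean-label" poison by feature collision: starting from a base image of the attack class, it perturbs the image so that its feature representation approaches that of the target instance, while a proximal step keeps the poison visually close to the base. The following is a minimal sketch of that forward-backward optimization loop, assuming a toy linear feature extractor in place of the deep YOLOv5 backbone the paper uses; all names, shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def poison_frog(W, target, base, beta=0.1, lr=0.01, steps=500):
    """Craft a clean-label poison by feature collision (poison frog sketch).

    Approximately minimizes ||f(x) - f(target)||^2 + beta * ||x - base||^2,
    where f(x) = W @ x stands in for a deep feature extractor.
    Each iteration takes a gradient (forward) step on the feature-space loss,
    then a closed-form proximal (backward) step pulling x toward the base.
    """
    t_feat = W @ target          # feature vector of the target instance
    x = base.copy()              # start from the clean-looking base image
    for _ in range(steps):
        # forward step: gradient of ||W x - t_feat||^2
        grad = 2.0 * W.T @ (W @ x - t_feat)
        x_hat = x - lr * grad
        # backward (proximal) step: stay close to the base in input space
        x = (x_hat + lr * beta * base) / (1.0 + lr * beta)
    return x

# toy example with random data (shapes are arbitrary assumptions)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))      # stand-in feature extractor
target = rng.normal(size=8)      # instance to be misclassified at test time
base = rng.normal(size=8)        # base instance from the attack class
poison = poison_frog(W, target, base)
```

In feature space the crafted poison sits near the target, so a model fine-tuned on it (labeled as the base class) learns to pull the target's neighborhood toward the wrong class, which is how the scenario above flips "boat" to "ferry" without any mislabeled training data.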
Pages: 14