On the black-box explainability of object detection models for safe and trustworthy industrial applications

Cited by: 0
Authors:
Andres, Alain [1 ,2 ]
Martinez-Seras, Aitor [1 ]
Lana, Ibai [1 ,2 ]
Del Ser, Javier [1 ,3 ]
Affiliations:
[1] TECNALIA, Basque Res & Technol Alliance BRTA, Mikeletegi Pasealekua 2, Donostia San Sebastian 20009, Spain
[2] Univ Deusto, Donostia San Sebastian 20012, Spain
[3] Univ Basque Country, UPV EHU, Bilbao 48013, Spain
Keywords:
Explainable Artificial Intelligence; Safe Artificial Intelligence; Trustworthy Artificial Intelligence; Object detection; Single-stage object detection; Industrial robotics; ARTIFICIAL-INTELLIGENCE;
DOI: 10.1016/j.rineng.2024.103498
CLC number: T [Industrial technology]
Subject classification code: 08
Abstract:
In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains like autonomous driving and video surveillance. However, their adoption in high-risk applications, where errors may cause severe consequences, remains limited. Explainable Artificial Intelligence methods aim to address this issue, but many existing techniques are model-specific and designed for classification tasks, making them less effective for object detection and difficult for non-specialists to interpret. In this work, we focus on model-agnostic explainability methods for object detection models and propose D-MFPP, an extension of the Morphological Fragmental Perturbation Pyramid (MFPP) technique that uses segmentation-based masks to generate explanations. Additionally, we introduce D-Deletion, a novel metric combining faithfulness and localization, adapted specifically to meet the unique demands of object detectors. We evaluate these methods on real-world industrial and robotic datasets, examining the influence of parameters such as the number of masks, model size, and image resolution on the quality of the explanations. Our experiments use single-stage detection models applied to two safety-critical robotic environments: i) a shared human-robot workspace, where safety is of paramount importance, and ii) an assembly area for battery kits, where safety is critical due to the potential for damage among high-risk components. Our findings evince that D-Deletion effectively gauges the performance of explanations when multiple elements of the same class appear in a scene, while D-MFPP provides a promising alternative to D-RISE when fewer masks are used.
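The two ideas named in the abstract — mask-based perturbation explanations and a deletion-style faithfulness metric — can be sketched in a simplified, illustrative form. The function names, the random grid-mask generation, and the toy single-channel `score_fn` detector stand-in below are assumptions for illustration, not the paper's actual implementation: D-MFPP derives its masks from image segmentations rather than a random grid, D-RISE matches detections between original and masked images via box and class similarity, and D-Deletion additionally constrains deletion by localization, which this sketch omits.

```python
import numpy as np

def masked_saliency(image, score_fn, n_masks=200, cells=8, p=0.5, seed=0):
    """RISE-style saliency: average random binary masks, each weighted by
    the detector's score on the correspondingly masked image.
    `image` is a 2-D array whose height/width are divisible by `cells`."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ch, cw = h // cells, w // cells
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        grid = (rng.random((cells, cells)) < p).astype(float)
        mask = np.kron(grid, np.ones((ch, cw)))    # upsample grid to image size
        saliency += score_fn(image * mask) * mask  # weight mask by its score
    return saliency / n_masks

def deletion_curve_auc(image, saliency, score_fn, steps=8):
    """Deletion metric: zero out pixels in decreasing-saliency order and
    integrate the detector score; a faithful map yields a fast score drop
    and hence a low area under the curve (normalised to [0, 1])."""
    order = np.argsort(saliency.ravel())[::-1]
    img = image.astype(float).copy()
    flat = img.ravel()                             # writable view into img
    per_step = flat.size // steps
    scores = [score_fn(img)]
    for i in range(steps):
        flat[order[i * per_step:(i + 1) * per_step]] = 0.0
        scores.append(score_fn(img))
    # Trapezoidal area under the score-vs-fraction-deleted curve.
    return (0.5 * (scores[0] + scores[-1]) + sum(scores[1:-1])) / steps
```

As a stand-in for a real detector, `score_fn` can be any function mapping an image to a confidence score for one fixed detection (for example, mean intensity inside a ground-truth box); with a real single-stage detector, it would return the matched detection's confidence instead.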
Pages: 14