On the black-box explainability of object detection models for safe and trustworthy industrial applications

Cited by: 0
Authors
Andres, Alain [1 ,2 ]
Martinez-Seras, Aitor [1 ]
Lana, Ibai [1 ,2 ]
Del Ser, Javier [1 ,3 ]
Affiliations
[1] TECNALIA, Basque Res & Technol Alliance BRTA, Mikeletegi Pasealekua 2, Donostia San Sebastian 20009, Spain
[2] Univ Deusto, Donostia San Sebastian 20012, Spain
[3] Univ Basque Country, UPV EHU, Bilbao 48013, Spain
Keywords
Explainable Artificial Intelligence; Safe Artificial Intelligence; Trustworthy Artificial Intelligence; Object detection; Single-stage object detection; Industrial robotics; ARTIFICIAL-INTELLIGENCE;
DOI
10.1016/j.rineng.2024.103498
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject Classification Code
08;
Abstract
In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains like autonomous driving and video surveillance. However, their adoption in high-risk applications, where errors may cause severe consequences, remains limited. Explainable Artificial Intelligence methods aim to address this issue, but many existing techniques are model-specific and designed for classification tasks, making them less effective for object detection and difficult for non-specialists to interpret. In this work, we focus on model-agnostic explainability methods for object detection models and propose D-MFPP, an extension of the Morphological Fragmental Perturbation Pyramid (MFPP) technique that uses segmentation-based masks to generate explanations. Additionally, we introduce D-Deletion, a novel metric combining faithfulness and localization, adapted specifically to meet the unique demands of object detectors. We evaluate these methods on real-world industrial and robotic datasets, examining the influence of parameters such as the number of masks, model size, and image resolution on the quality of explanations. Our experiments use single-stage detection models applied to two safety-critical robotic environments: i) a shared human-robot workspace where safety is of paramount importance, and ii) an assembly area for battery kits, where safety is critical due to the potential for damage among high-risk components. Our findings evince that D-Deletion effectively gauges the performance of explanations when multiple elements of the same class appear in a scene, while D-MFPP provides a promising alternative to D-RISE when fewer masks are used.
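To make the abstract's description concrete, the sketch below illustrates the general family of techniques involved: a mask-based, model-agnostic saliency map for a detected object (in the spirit of D-RISE/D-MFPP) and a deletion-style metric restricted to the target box (in the spirit of D-Deletion). This is not the authors' implementation; the `detector` callable, the grid masking, the IoU-weighted scoring, and all parameter values are assumptions made purely for illustration.

```python
# Illustrative sketch only: mask-based saliency for a target detection and a
# deletion-style metric localized to that detection. The detector interface,
# mask generation, and weighting scheme are hypothetical, not the paper's code.
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def saliency_for_box(image, target_box, detector, n_masks=500, p_keep=0.5, cell=16):
    """Accumulate a saliency map by randomly occluding the image and scoring how
    well the detector still recovers the target box (assumed IoU*score weighting)."""
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float32)
    for _ in range(n_masks):
        # Coarse binary grid upsampled to image size acts as an occlusion mask.
        grid = (np.random.rand(h // cell + 1, w // cell + 1) < p_keep).astype(np.float32)
        mask = np.kron(grid, np.ones((cell, cell)))[:h, :w]
        masked = image * mask[..., None]
        # `detector` is assumed to return a list of (box, score) pairs.
        detections = detector(masked)
        weight = max((iou(target_box, b) * s for b, s in detections), default=0.0)
        saliency += weight * mask
    return saliency / n_masks

def d_deletion(image, target_box, saliency, detector, steps=20):
    """Deletion-style curve restricted to the target object: progressively remove
    the most salient pixels and record how the matched detection score decays."""
    order = np.argsort(saliency.ravel())[::-1]          # most salient pixels first
    scores, img = [], image.copy()
    per_step = len(order) // steps
    for i in range(steps):
        idx = order[i * per_step:(i + 1) * per_step]
        img.reshape(-1, img.shape[-1])[idx] = 0          # delete pixels
        detections = detector(img)
        # Score only detections that still localize the *same* object.
        scores.append(max((s for b, s in detections if iou(target_box, b) > 0.5), default=0.0))
    return np.trapz(scores, dx=1.0 / steps)              # lower area = better explanation
```

In this sketch, restricting the deletion score to detections that overlap the original target box is what distinguishes a localization-aware ("D-") metric from a plain classification deletion curve when several objects of the same class share the scene.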
Pages: 14