On the black-box explainability of object detection models for safe and trustworthy industrial applications

Times Cited: 0
Authors
Andres, Alain [1,2]
Martinez-Seras, Aitor [1]
Lana, Ibai [1,2]
Del Ser, Javier [1,3]
Affiliations
[1] TECNALIA, Basque Res & Technol Alliance BRTA, Mikeletegi Pasealekua 2, Donostia San Sebastian 20009, Spain
[2] Univ Deusto, Donostia San Sebastian 20012, Spain
[3] Univ Basque Country, UPV EHU, Bilbao 48013, Spain
Keywords
Explainable Artificial Intelligence; Safe Artificial Intelligence; Trustworthy Artificial Intelligence; Object detection; Single-stage object detection; Industrial robotics
DOI
10.1016/j.rineng.2024.103498
CLC Number
T [Industrial Technology]
Subject Classification Code
08
Abstract
In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains such as autonomous driving and video surveillance. However, their adoption in high-risk applications, where errors may cause severe consequences, remains limited. Explainable Artificial Intelligence (XAI) methods aim to address this issue, but many existing techniques are model-specific and designed for classification tasks, making them less effective for object detection and difficult for non-specialists to interpret. In this work we focus on model-agnostic explainability methods for object detection models and propose D-MFPP, an extension of the Morphological Fragmental Perturbation Pyramid (MFPP) technique that uses segmentation-based masks to generate explanations. Additionally, we introduce D-Deletion, a novel metric combining faithfulness and localization, adapted specifically to meet the unique demands of object detectors. We evaluate these methods on real-world industrial and robotic datasets, examining the influence of parameters such as the number of masks, model size, and image resolution on the quality of explanations. Our experiments use single-stage detection models applied to two safety-critical robotic environments: i) a shared human-robot workspace where safety is of paramount importance, and ii) an assembly area of battery kits, where safety is critical due to the potential for damage among high-risk components. Our findings evince that D-Deletion effectively gauges the performance of explanations when multiple elements of the same class appear in a scene, while D-MFPP provides a promising alternative to D-RISE when fewer masks are used.
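The D-Deletion metric described in the abstract builds on the general deletion-curve idea: progressively remove the pixels an explanation marks as most important and track how the detector's confidence in the explained detection decays. The sketch below illustrates only that generic scheme, under stated assumptions; the names (score_fn, saliency, steps) are illustrative and do not reproduce the paper's exact D-Deletion formulation, which additionally accounts for localization when several instances of the same class appear in a scene.

# Minimal sketch of a deletion-style faithfulness curve for a single detection.
# Assumption: score_fn is a user-supplied callable that runs the detector on a
# perturbed image and returns the confidence still assigned to the explained
# box/class pair. This is NOT the paper's D-Deletion definition, only the
# underlying deletion-curve idea.
import numpy as np

def deletion_auc(image, saliency, score_fn, steps=50, fill_value=0.0):
    """image    : (H, W, C) float array
       saliency : (H, W) float array, higher = more important for the detection
       score_fn : callable mapping a perturbed image to the detector's confidence
                  for the explained detection
       Returns the area under the deletion curve (lower = more faithful)."""
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]          # most salient pixels first
    per_step = int(np.ceil(order.size / steps))
    perturbed = image.copy()
    scores = [score_fn(perturbed)]                      # score before any removal
    for i in range(steps):
        idx = order[i * per_step:(i + 1) * per_step]
        ys, xs = np.unravel_index(idx, (h, w))
        perturbed[ys, xs, :] = fill_value               # delete the next batch of salient pixels
        scores.append(score_fn(perturbed))
    return np.trapz(np.asarray(scores), dx=1.0 / steps)

A lower area under this curve means that removing the highlighted pixels quickly destroys the detection, i.e., the explanation is more faithful to what the detector actually relies on.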
Pages: 14