On the black-box explainability of object detection models for safe and trustworthy industrial applications

Cited by: 0
Authors
Andres, Alain [1 ,2 ]
Martinez-Seras, Aitor [1 ]
Lana, Ibai [1 ,2 ]
Del Ser, Javier [1 ,3 ]
Affiliations
[1] TECNALIA, Basque Res & Technol Alliance BRTA, Mikeletegi Pasealekua 2, Donostia San Sebastian 20009, Spain
[2] Univ Deusto, Donostia San Sebastian 20012, Spain
[3] Univ Basque Country, UPV EHU, Bilbao 48013, Spain
Keywords
Explainable Artificial Intelligence; Safe Artificial Intelligence; Trustworthy Artificial Intelligence; Object detection; Single-stage object detection; Industrial robotics; Artificial Intelligence
DOI
10.1016/j.rineng.2024.103498
CLC number
T [Industrial technology]
Subject classification code
08
Abstract
In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains like autonomous driving and video surveillance. However, their adoption in high-risk applications, where errors may cause severe consequences, remains limited. Explainable Artificial Intelligence (XAI) methods aim to address this issue, but many existing techniques are model-specific and designed for classification tasks, making them less effective for object detection and difficult for non-specialists to interpret. In this work we focus on model-agnostic explainability methods for object detection models and propose D-MFPP, an extension of the Morphological Fragmental Perturbation Pyramid (MFPP) technique that uses segmentation-based masks to generate explanations. Additionally, we introduce D-Deletion, a novel metric combining faithfulness and localization, adapted specifically to meet the unique demands of object detectors. We evaluate these methods on real-world industrial and robotic datasets, examining the influence of parameters such as the number of masks, model size, and image resolution on the quality of explanations. Our experiments use single-stage detection models applied to two safety-critical robotic environments: i) a shared human-robot workspace, where safety is of paramount importance, and ii) an assembly area of battery kits, where safety is critical due to the potential for damage among high-risk components. Our findings evince that D-Deletion effectively gauges the performance of explanations when multiple elements of the same class appear in a scene, while D-MFPP provides a promising alternative to D-RISE when fewer masks are used.
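The mask-based perturbation idea behind these methods can be sketched in a few lines. The following is a minimal, hedged RISE-style illustration, not the paper's D-MFPP or D-Deletion implementation: the masks here are blocky binary grids upsampled by nearest neighbour (MFPP instead derives masks from image segmentation, and RISE smooths and randomly shifts its masks), `score_fn` is an assumed callable returning the detector's confidence for the target detection on a perturbed image, and the deletion curve shown is the plain faithfulness variant (the paper's D-Deletion additionally accounts for localization so that credit is not leaked to other instances of the same class).

```python
import numpy as np

def rise_saliency(image, score_fn, n_masks=500, cell=8, p=0.5, rng=None):
    """Model-agnostic, RISE-style saliency map (illustrative sketch).

    score_fn(masked_image) -> float: detector confidence for the target
    detection on the perturbed image (assumed interface).
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float64)
    up = (h // cell + 1, w // cell + 1)  # nearest-neighbour upsampling factor
    for _ in range(n_masks):
        # Low-resolution binary grid, upsampled to full image size.
        grid = (rng.random((cell, cell)) < p).astype(np.float64)
        mask = np.kron(grid, np.ones(up))[:h, :w]
        # Weight each mask by how confident the detector stays under it.
        saliency += mask * score_fn(image * mask[..., None])
    return saliency / n_masks

def deletion_auc(image, saliency, score_fn, steps=20):
    """Deletion-style faithfulness curve: zero out the most salient pixels
    first and track the detector score; a lower area under the curve means
    the explanation pinpointed the pixels the detector relied on.
    """
    h, w, c = image.shape
    order = np.argsort(saliency.ravel())[::-1]  # most salient first
    per_step = (h * w) // steps
    img = image.astype(np.float64).copy()
    scores = []
    for i in range(steps + 1):
        scores.append(score_fn(img))
        idx = order[i * per_step:(i + 1) * per_step]
        img.reshape(-1, c)[idx] = 0.0
    return float(np.trapz(scores, dx=1.0 / steps))
```

A toy run with a detector stub that only looks at one pixel shows the expected behaviour: the saliency map peaks on the cell containing that pixel, and the deletion curve collapses quickly because those pixels are removed first.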
Pages: 14