Explainable Artificial Intelligence for Interpretable Data Minimization

Cited by: 0
Authors
Becker, Maximilian [1 ]
Toprak, Emrah [1 ]
Beyerer, Juergen [2 ]
Affiliations
[1] Karlsruhe Inst Technol, Vis & Fus Lab, Karlsruhe, Germany
[2] Fraunhofer IOSB, Karlsruhe, Germany
Keywords
XAI; Data Minimization; Counterfactual Explanations
DOI
10.1109/ICDMW60847.2023.00119
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Black box models such as deep neural networks are increasingly being deployed in high-stakes fields, including justice, health, and finance. Such models require huge amounts of data, which often contain personal information. However, the principle of data minimization in the European Union's General Data Protection Regulation requires collecting only the data essential to fulfilling a particular purpose. Implementing data minimization for black box models is difficult because it involves identifying the minimal set of variables relevant to the model's prediction, which may not be apparent without access to the model's inner workings. In addition, users are often reluctant to share all of their personal information. We propose an interactive system that reduces the amount of personal data collected by determining, with explainable artificial intelligence techniques, the minimal set of features required for a correct prediction. Our method can inform the user whether the provided variables contain enough information for the model to make an accurate prediction or whether additional variables are necessary. This human-centered approach enables providers to minimize the personal data collected for analysis and may increase users' trust in and acceptance of the system.
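The minimization idea in the abstract can be sketched as a greedy reveal loop: withhold all features, fall back to population averages for the withheld ones, and reveal features one at a time (most influential first) until the model's prediction matches the prediction on the fully revealed input. This is an illustrative sketch only, not the authors' implementation (the paper works with counterfactual-explanation techniques); the toy logistic model, the `minimal_feature_set` helper, and the mean-imputation fallback are all assumptions introduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: 4 features, only the first two drive the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)
means = X.mean(axis=0)  # fallback values for features the user withholds


def minimal_feature_set(x, model, means, order):
    """Reveal features in the given order until the prediction on the
    partially revealed input matches the fully revealed prediction."""
    full_pred = model.predict(x.reshape(1, -1))[0]
    revealed = means.copy()  # start with everything imputed
    used = []
    for i in order:
        revealed[i] = x[i]
        used.append(i)
        if model.predict(revealed.reshape(1, -1))[0] == full_pred:
            break  # this subset already suffices for a correct prediction
    return used, full_pred


# Reveal the most influential features first (by |coefficient|).
order = np.argsort(-np.abs(model.coef_[0]))
x_new = np.array([2.0, 1.5, -0.3, 0.7])
features, pred = minimal_feature_set(x_new, model, means, order)
print(features, pred)
```

In an interactive deployment, the loop would instead pause after each mismatch and ask the user for the next variable, so the provider never sees features beyond the stopping point.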
Pages: 885 - 893
Number of pages: 9
Related papers
(showing items 41 - 50 of 50)
  • [41] Explainable Artificial Intelligence for Cybersecurity
    Sharma, Deepak Kumar
    Mishra, Jahanavi
    Singh, Aeshit
    Govil, Raghav
    Srivastava, Gautam
    Lin, Jerry Chun-Wei
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 103
  • [42] Explainable Artificial Intelligence: A Survey
    Dosilovic, Filip Karlo
    Brcic, Mario
    Hlupic, Nikica
    2018 41ST INTERNATIONAL CONVENTION ON INFORMATION AND COMMUNICATION TECHNOLOGY, ELECTRONICS AND MICROELECTRONICS (MIPRO), 2018, : 210 - 215
  • [43] DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence
    Wani, Niyaz Ahmad
    Kumar, Ravinder
    Bedi, Jatin
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 243
  • [44] Fast and interpretable prediction of seismic kinematics of flexible retaining walls in sand through explainable artificial intelligence
    Pistolesi, Francesco
    Baldassini, Michele
    Volpe, Evelina
    Focacci, Francesco
    Cattoni, Elisabetta
    COMPUTERS AND GEOTECHNICS, 2025, 179
  • [45] Unlocking the Potential of Explainable Artificial Intelligence in Remote Sensing Big Data
    Liu, Peng
    Wang, Lizhe
    Li, Jun
    REMOTE SENSING, 2023, 15 (23)
  • [46] Orchestrating explainable artificial intelligence for multimodal and longitudinal data in medical imaging
    de Mortanges, Aurelie Pahud
    Luo, Haozhe
    Shu, Shelley Zixin
    Kamath, Amith
    Suter, Yannick
    Shelan, Mohamed
    Pollinger, Alexander
    Reyes, Mauricio
NPJ DIGITAL MEDICINE, 2024, 7 (01)
  • [47] Applying Explainable Artificial Intelligence Techniques on Linked Open Government Data
    Kalampokis, Evangelos
    Karamanou, Areti
    Tarabanis, Konstantinos
    ELECTRONIC GOVERNMENT, EGOV 2021, 2021, 12850 : 247 - 258
  • [48] Application of Artificial Intelligence in Healthcare: The Need for More Interpretable Artificial Intelligence
    Tavares, Jorge
    ACTA MEDICA PORTUGUESA, 2024, 37 (06) : 411 - 414
  • [49] Should artificial intelligence be interpretable to humans?
    Schwartz, Matthew D.
    NATURE REVIEWS PHYSICS, 2022, 4 (12) : 741 - 742
  • [50] Memristive Explainable Artificial Intelligence Hardware
    Song, Hanchan
    Park, Woojoon
    Kim, Gwangmin
    Choi, Moon Gu
    In, Jae Hyun
    Rhee, Hakseung
    Kim, Kyung Min
    ADVANCED MATERIALS, 2024, 36 (25)