SoK: Explainable Machine Learning for Computer Security Applications

Cited by: 10
Authors
Nadeem, Azqa [1 ]
Vos, Daniel [1 ]
Cao, Clinton [1 ]
Pajola, Luca [2 ]
Dieck, Simon [1 ]
Baumgartner, Robert [1 ]
Verwer, Sicco [1 ]
Affiliations
[1] Delft Univ Technol, Delft, Netherlands
[2] Univ Padua, Padua, Italy
Keywords
XAI; Machine learning; Cyber security; AI
DOI
10.1109/EuroSP57164.2023.00022
CLC classification
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Explainable Artificial Intelligence (XAI) aims to improve the transparency of machine learning (ML) pipelines. We systematize the increasingly growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 4 distinct objectives within an ML pipeline, namely 1) XAI-enabled user assistance, 2) XAI-enabled model verification, 3) explanation verification & robustness, and 4) offensive use of explanations. Our analysis of the literature indicates that many of the XAI applications are designed with little understanding of how they might be integrated into analyst workflows; user studies for explanation evaluation are conducted in only 14% of the cases. The security literature sometimes also fails to disentangle the role of the various stakeholders, e.g., by providing explanations to model users and designers while also exposing them to adversaries. Additionally, the role of model designers is particularly minimized in the security literature. To this end, we present an illustrative tutorial for model designers, demonstrating how XAI can help with model verification. We also discuss scenarios where interpretability by design may be a better alternative. The systematization and the tutorial enable us to challenge several assumptions and present open problems that can help shape the future of XAI research within cybersecurity.
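The abstract mentions an illustrative tutorial on XAI-enabled model verification for model designers. That tutorial is not reproduced in this record; the listing below is only a minimal, hypothetical sketch of the general idea, assuming a scikit-learn setting with synthetic data and made-up feature names (api_call_entropy, section_count, source_id_artifact). It uses permutation importance as a global explanation to check whether a security classifier relies on a label-leaking dataset artifact rather than genuine signals.

# Minimal sketch (not the paper's tutorial): global explanations as a
# model-verification aid. Data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Two genuine signals plus one artifact that leaks the label
# (e.g., benign and malicious samples collected from different sources).
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
X[:, 2] = y + rng.normal(scale=0.1, size=n)  # spurious, label-leaking feature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Verification step: if the artifact dominates the importances, the model
# has learned a shortcut rather than the intended behaviour.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
names = ["api_call_entropy", "section_count", "source_id_artifact"]
for name, score in zip(names, imp.importances_mean):
    print(f"{name}: {score:.3f}")

If the artifact feature dominates, a designer would revisit data collection or remove the feature before deployment; this mirrors the kind of verification the abstract alludes to, although the concrete methods used in the paper may differ.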
Pages: 221 - 240
Page count: 20
Related papers
50 records in total
  • [1] SoK: Explainable Machine Learning in Adversarial Environments
    Noppel, Maximilian
    Wressnegger, Christian
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 2441 - 2459
  • [2] SoK: Security and Privacy in Machine Learning
    Papernot, Nicolas
    McDaniel, Patrick
    Sinha, Arunesh
    Wellman, Michael P.
    2018 3RD IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY (EUROS&P 2018), 2018, : 399 - 414
  • [3] Machine learning for computer security
    Chan, Philip K.
    Lippmann, Richard P.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2006, 7 : 2669 - 2672
  • [4] Pitfalls in Machine Learning for Computer Security
    Arp, Daniel
    Quiring, Erwin
    Pendlebury, Feargus
    Warnecke, Alexander
    Pierazzi, Fabio
    Wressnegger, Christian
    Cavallaro, Lorenzo
    Rieck, Konrad
    COMMUNICATIONS OF THE ACM, 2024, 67 (11) : 104 - 112
  • [5] Explainable Machine Learning for Malware Detection on Android Applications
    Palma, Catarina
    Ferreira, Artur
    Figueiredo, Mario
    INFORMATION, 2024, 15 (01)
  • [6] Applications of Machine Learning in Hardware Security
    Halak, Basel
    Mispan, Mohd Syafiq
    2022 2ND INTERNATIONAL CONFERENCE OF SMART SYSTEMS AND EMERGING TECHNOLOGIES (SMARTTECH 2022), 2022, : 212 - 213
  • [7] Lessons Learned on Machine Learning for Computer Security
    Arp, Daniel
    Quiring, Erwin
    Pendlebury, Feargus
    Warnecke, Alexander
    Pierazzi, Fabio
    Wressnegger, Christian
    Cavallaro, Lorenzo
    Rieck, Konrad
    IEEE SECURITY & PRIVACY, 2023, 21 (05) : 72 - 77
  • [8] Machine Learning in Computer Security Is Difficult to Fix
    Biggio, Battista
    COMMUNICATIONS OF THE ACM, 2024, 67 (11) : 103 - 103
  • [9] Explainable Machine Learning
    Garcke, Jochen
    Roscher, Ribana
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2023, 5 (01) : 169 - 170
  • [10] Predictive and Explainable Machine Learning for Industrial Internet of Things Applications
    Christou, Ioannis T.
    Kefalakis, Nikos
    Zalonis, Andreas
    Soldatos, John
    16TH ANNUAL INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING IN SENSOR SYSTEMS (DCOSS 2020), 2020, : 213 - 218