Explaining quantum circuits with Shapley values: towards explainable quantum machine learning

Cited by: 0
Authors
Heese, Raoul [1 ]
Gerlach, Thore [2 ]
Muecke, Sascha [3 ]
Mueller, Sabine [1 ]
Jakobs, Matthias [3 ]
Piatkowski, Nico [2 ]
Affiliations
[1] Fraunhofer ITWM, Fraunhofer Pl 1, D-67663 Kaiserslautern, Germany
[2] Fraunhofer IAIS, Schloss Birlinghoven 1, D-53757 St Augustin, Germany
[3] TU Dortmund, August Schmidt Str 1, D-44227 Dortmund, Germany
Keywords
Quantum machine learning; Explainable machine learning; Shapley values; Neural networks
DOI
10.1007/s42484-025-00254-8
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Methods of artificial intelligence (AI), and especially machine learning (ML), have been growing ever more complex and, at the same time, have an ever greater impact on people's lives. This has led to the emergence of explainable AI (XAI) as an important research field that helps humans better comprehend ML systems. In parallel, quantum machine learning (QML) is emerging with the ongoing improvement of quantum computing hardware and its increasing availability via cloud services. QML enables quantum-enhanced ML, in which quantum mechanics is exploited to facilitate ML tasks, typically in the form of quantum-classical hybrid algorithms that combine quantum and classical resources. Quantum gates constitute the building blocks of gate-based quantum hardware and form circuits that can be used for quantum computations. For QML applications, quantum circuits are typically parameterized, and their parameters are optimized classically such that a suitably defined objective function is minimized. Inspired by XAI, we raise the question of the explainability of such circuits by quantifying the importance of (groups of) gates for specific goals. To this end, we apply the well-established concept of Shapley values. The resulting attributions can be interpreted as explanations for why a specific circuit works well for a given task, improving the understanding of how to construct parameterized (or variational) quantum circuits and fostering their human interpretability in general. An experimental evaluation on simulators and two superconducting quantum hardware devices demonstrates the benefits of the proposed framework for classification, generative modeling, transpilation, and optimization. Furthermore, our results shed some light on the role of specific gates in popular QML approaches.
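As a rough illustration of the idea, and not code from the paper itself, the sketch below computes exact Shapley values for the gates of a tiny hard-coded two-qubit circuit using plain NumPy state-vector simulation. The gates are the "players", a coalition is the subset of gates that is actually applied (in circuit order), and the value function is assumed here to be the probability of measuring |11>; the specific circuit, value function, and rotation angle are illustrative choices only, and the paper's actual framework and approximation scheme may differ.

# Illustrative sketch only (assumed toy setup, not the authors' implementation):
# exact Shapley values for the gates of a small two-qubit circuit.
from itertools import combinations
from math import factorial
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def ry(theta):
    # Single-qubit Y-rotation gate.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # control on qubit 0, target on qubit 1

# "Players": the gates of the circuit, each written as a full 4x4 unitary.
gates = {
    "H(q0)":      np.kron(H, I2),
    "RY(q1,0.7)": np.kron(I2, ry(0.7)),   # 0.7 is an arbitrary example angle
    "CNOT(0,1)":  CNOT,
}
players = list(gates)
n = len(players)

def value(coalition):
    # Value function v(S): probability of measuring |11> when only the
    # gates in S are applied, keeping the original circuit order.
    state = np.zeros(4)
    state[0] = 1.0                      # start in |00>
    for name in players:
        if name in coalition:
            state = gates[name] @ state
    return abs(state[3]) ** 2           # P(|11>)

# Exact Shapley value: weighted average of marginal contributions
# over all coalitions that exclude the gate in question.
shapley = {}
for p in players:
    others = [q for q in players if q != p]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += w * (value(set(S) | {p}) - value(set(S)))
    shapley[p] = phi

for name, phi in shapley.items():
    print(f"{name:12s}  Shapley value = {phi:+.4f}")

Running this prints one attribution per gate; by the efficiency property of Shapley values, the attributions sum to v(full circuit) minus v(empty circuit). For realistic circuits the number of coalitions grows exponentially, so in practice the sum would be approximated rather than enumerated as it is here.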
Pages: 33