Interpretable Machine Learning for Psychological Research: Opportunities and Pitfalls

Cited by: 17
Authors
Henninger, Mirka [1 ,2 ]
Debelak, Rudolf [1 ]
Rothacher, Yannick [1 ]
Strobl, Carolin [1 ]
Affiliations
[1] Univ Zurich, Inst Psychol, Zurich, Switzerland
[2] Univ Zurich, Inst Psychol, CH-8050 Zurich, Switzerland
Keywords
interpretation techniques; machine learning; neural network; random forest; correlated predictors; interaction detection; VARIABLE IMPORTANCE; CLASSIFICATION TREES; NEURAL-NETWORKS; PREDICTION; INFERENCE; MODELS;
DOI
10.1037/met0000560
Chinese Library Classification (CLC)
B84 [Psychology]
Subject Classification Codes
04; 0402
Abstract
In recent years, machine learning methods have become increasingly popular prediction methods in psychology. At the same time, psychological researchers are typically not only interested in making predictions about the dependent variable, but also in learning which predictor variables are relevant, how they influence the dependent variable, and which predictors interact with each other. However, most machine learning methods are not directly interpretable. Interpretation techniques that help researchers describe how a machine learning model arrives at its predictions may be a means to this end. We present a variety of interpretation techniques and illustrate the opportunities they provide for interpreting the results of two widely used black box machine learning methods that serve as our examples: random forests and neural networks. At the same time, we illustrate potential pitfalls and risks of misinterpretation that may occur in certain data settings. We show how correlated predictors affect interpretations of the relevance or shape of predictor effects, and in which situations interaction effects may or may not be detected. We use simulated didactic examples throughout the article, as well as an empirical data set for illustrating an approach to objectify the interpretation of visualizations. We conclude that, when applied with critical reflection, interpretable machine learning techniques can provide useful tools for describing complex psychological relationships.
Pages: 36