Interpretable Machine Learning for Psychological Research: Opportunities and Pitfalls

Cited by: 17
Authors
Henninger, Mirka [1 ,2 ]
Debelak, Rudolf [1 ]
Rothacher, Yannick [1 ]
Strobl, Carolin [1 ]
Affiliations
[1] Univ Zurich, Inst Psychol, Zurich, Switzerland
[2] Univ Zurich, Inst Psychol, CH-8050 Zurich, Switzerland
Keywords
interpretation techniques; machine learning; neural network; random forest; correlated predictors; interaction detection; VARIABLE IMPORTANCE; CLASSIFICATION TREES; NEURAL-NETWORKS; PREDICTION; INFERENCE; MODELS;
DOI
10.1037/met0000560
Chinese Library Classification: B84 [Psychology]
Discipline codes: 04; 0402
Abstract
In recent years, machine learning methods have become increasingly popular prediction methods in psychology. At the same time, psychological researchers are typically not only interested in making predictions about the dependent variable, but also in learning which predictor variables are relevant, how they influence the dependent variable, and which predictors interact with each other. However, most machine learning methods are not directly interpretable. Interpretation techniques that support researchers in describing how the machine learning technique came to its prediction may be a means to this end. We present a variety of interpretation techniques and illustrate the opportunities they provide for interpreting the results of two widely used black box machine learning methods that serve as our examples: random forests and neural networks. At the same time, we illustrate potential pitfalls and risks of misinterpretation that may occur in certain data settings. We show in which way correlated predictors impact interpretations with regard to the relevance or shape of predictor effects and in which situations interaction effects may or may not be detected. We use simulated didactic examples throughout the article, as well as an empirical data set for illustrating an approach to objectify the interpretation of visualizations. We conclude that, when critically reflected upon, interpretable machine learning techniques may provide useful tools for describing complex psychological relationships.
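The abstract refers to interpretation techniques that describe which predictors a black-box model relies on. As a hypothetical illustration (not code from the paper), the sketch below implements the core idea of permutation importance in pure Python: simulate two correlated predictors where only `x1` drives the outcome, fit a simple linear model, and measure how much shuffling each predictor inflates the prediction error. All names and simulation settings here are assumptions made for the demo.

```python
import random

random.seed(0)
n = 500
# x1 drives y; x2 is correlated with x1 but has no direct effect on y
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * a + 0.6 * random.gauss(0, 1) for a in x1]
y = [2.0 * a + random.gauss(0, 0.5) for a in x1]

def fit_ols(x1, x2, y):
    """Solve the 2x2 normal equations for y ~ b1*x1 + b2*x2 (no intercept)."""
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    t1 = sum(a * c for a, c in zip(x1, y))
    t2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

def mse(x1, x2, y, b1, b2):
    return sum((c - b1 * a - b2 * b) ** 2
               for a, b, c in zip(x1, x2, y)) / len(y)

b1, b2 = fit_ols(x1, x2, y)
baseline = mse(x1, x2, y, b1, b2)

# Permutation importance: increase in MSE after shuffling one predictor
importances = {}
perm1 = x1[:]
random.shuffle(perm1)
importances["x1"] = mse(perm1, x2, y, b1, b2) - baseline
perm2 = x2[:]
random.shuffle(perm2)
importances["x2"] = mse(x1, perm2, y, b1, b2) - baseline

print(importances)  # x1 should dominate; x2's importance stays near zero
```

For a linear model, the coefficients already reveal this pattern; the point of the permutation approach is that the same shuffle-and-remeasure loop applies unchanged to random forests or neural networks, which is where the correlated-predictor pitfalls discussed in the paper arise.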
Pages: 36
Related Papers
50 records
  • [42] Artificial Intelligence and Machine Learning for Improving Glycemic Control in Diabetes: Best Practices, Pitfalls, and Opportunities
    Jacobs, Peter G.
    Herrero, Pau
    Facchinetti, Andrea
    Vehi, Josep
    Kovatchev, Boris
    Breton, Marc D.
    Cinar, Ali
    Nikita, Konstantina S.
    Doyle, Francis J., III
    Bondia, Jorge
    Battelino, Tadej
    Castle, Jessica R.
    Zarkogianni, Konstantia
    Narayan, Rahul
    Mosquera-Lopez, Clara
    IEEE REVIEWS IN BIOMEDICAL ENGINEERING, 2024, 17 : 19 - 41
  • [43] The compatibility of theoretical frameworks with machine learning analyses in psychological research
    Elhai, Jon D.
    Montag, Christian
    CURRENT OPINION IN PSYCHOLOGY, 2020, 36 : 83 - 88
  • [44] Research and analysis of psychological data based on machine learning methods
    Chen, G.
    Lv, W.
    Ma, J.
    Liang, Y.
    International Journal of Wireless and Mobile Computing, 2022, 22 (01) : 1 - 8
  • [45] Opportunities and pitfalls in chemical sensor research.
    Janata, J.
    ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2003, 226 : U28 - U28
  • [46] Translational research in neuroendocrine tumors: pitfalls and opportunities
    Capdevila, J.
    Casanovas, O.
    Salazar, R.
    Castellano, D.
    Segura, A.
    Fuster, P.
    Aller, J.
    García-Carbonero, R.
    Jimenez-Fonseca, P.
    Grande, E.
    Castaño, J. P.
    Oncogene, 2017, 36 : 1899 - 1907
  • [47] Opportunities and pitfalls of registry data for clinical research
    Psoter, Kevin J.
    Rosenfeld, Margaret
    PAEDIATRIC RESPIRATORY REVIEWS, 2013, 14 (03) : 141 - 145
  • [49] Three pitfalls to avoid in machine learning
    Riley, Patrick
    NATURE, 2019, 572 (7767) : 27 - 29
  • [50] Interpretable Machine Learning with Gradual Argumentation Frameworks
    Spieler, Jonathan
    Potyka, Nico
    Staab, Steffen
    COMPUTATIONAL MODELS OF ARGUMENT, COMMA 2022, 2022, 353 : 373 - 374