Beyond model interpretability: socio-structural explanations in machine learning

Cited by: 0
Authors
Smart, Andrew [1 ]
Kasirzadeh, Atoosa [2 ]
Affiliations
[1] Google Res, San Francisco, CA 94105 USA
[2] Univ Edinburgh, Edinburgh, Scotland
Keywords
Machine learning; Interpretability; Explainability; Social structures; Social structural explanations; Responsible AI; Racial bias; Health
DOI
10.1007/s00146-024-02056-1
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
What is it to interpret the outputs of an opaque machine learning model? One approach is to develop interpretable machine learning techniques. These techniques aim to show how machine learning models function by providing either model-centric local or global explanations, which can be based on mechanistic interpretations (revealing the inner working mechanisms of models) or non-mechanistic approximations (showing input feature-output data relationships). In this paper, we draw on social philosophy to argue that interpreting machine learning outputs in certain normatively salient domains could require appealing to a third type of explanation that we call "socio-structural" explanation. The relevance of this explanation type is motivated by the fact that machine learning models are not isolated entities but are embedded within and shaped by social structures. Socio-structural explanations aim to illustrate how social structures contribute to and partially explain the outputs of machine learning models. We demonstrate the importance of socio-structural explanations by examining a racially biased healthcare allocation algorithm. Our proposal highlights the need for transparency beyond model interpretability: understanding the outputs of machine learning systems could require a broader analysis that extends beyond the understanding of the machine learning model itself.
Pages: 9