Where, why, and how is bias learned in medical image analysis models? A study of bias encoding within convolutional networks using synthetic data

Cited: 0
Authors
Stanley, Emma A. M. [1 ,2 ,3 ,4 ]
Souza, Raissa [1 ,2 ,3 ,4 ]
Wilms, Matthias [2 ,3 ,4 ,5 ,6 ]
Forkert, Nils D. [2 ,3 ,4 ,7 ]
Affiliations
[1] Univ Calgary, Biomed Engn Grad Program, Calgary, AB, Canada
[2] Univ Calgary, Dept Radiol, Calgary, AB, Canada
[3] Univ Calgary, Hotchkiss Brain Inst, Calgary, AB, Canada
[4] Univ Calgary, Alberta Childrens Hosp, Res Inst, Calgary, AB, Canada
[5] Univ Calgary, Dept Pediat, Calgary, AB, Canada
[6] Univ Calgary, Dept Community Hlth Sci, Calgary, AB, Canada
[7] Univ Calgary, Dept Clin Neurosci, Calgary, AB, Canada
Source
EBIOMEDICINE | 2025, Vol. 111
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
Artificial intelligence; Algorithmic bias; Synthetic data; RECOGNITION; RACE
DOI
10.1016/j.ebiom.2024.105501
CLC Number
R5 [Internal Medicine]
Subject Classification
1002; 100201
Abstract
Background: Understanding the mechanisms of algorithmic bias is highly challenging due to the complexity and uncertainty of how various unknown sources of bias impact deep learning models trained with medical images. This study aims to bridge this knowledge gap by studying where, why, and how biases from medical images are encoded in these models.
Methods: We systematically studied layer-wise bias encoding in a convolutional neural network for classification using synthetic brain magnetic resonance imaging data with known disease and bias effects. We quantified the degree to which disease-related information, as well as morphology-based and intensity-based biases, were represented within the learned features of the model.
Findings: Although biases were encoded throughout the model, a stronger encoding did not necessarily lead to the model using these biases as a shortcut for disease classification. We also observed that intensity-based effects had a greater influence on shortcut learning than morphology-based effects when multiple biases were present.
Interpretation: We believe that these results constitute an important first step towards a deeper understanding of algorithmic bias in deep learning models trained using medical imaging data. This study also showcases the benefits of utilising controlled, synthetic bias scenarios for objectively studying the mechanisms of shortcut learning.
Funding: Alberta Innovates, Natural Sciences and Engineering Research Council of Canada, Killam Trusts, Parkinson Association of Alberta, River Fund at Calgary Foundation, Canada Research Chairs Program.
Copyright (c) 2024 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Pages: 13
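The Methods summary above describes quantifying how strongly a known bias attribute is represented in a CNN's learned features, layer by layer. Below is a minimal, hypothetical sketch of that general idea (not the authors' actual code or data): it trains a small CNN on toy synthetic images containing a disease signal and an intensity-based bias, then fits a linear probe per layer to predict the bias attribute from pooled activations. All names, the toy data generator, the architecture, and the probing setup are illustrative assumptions.

```python
# Minimal sketch of layer-wise bias probing (illustrative only).
# Higher probe accuracy at a layer suggests stronger encoding of the
# bias attribute in that layer's features.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

def make_data(n=512):
    # Toy stand-in for synthetic brain MRI slices: 1-channel 32x32 images
    # with a "disease" effect (bright patch) and an intensity-based bias
    # tied to a hypothetical attribute. Entirely assumed, not the paper's data.
    x = torch.randn(n, 1, 32, 32)
    disease = torch.randint(0, 2, (n,))
    bias_attr = torch.randint(0, 2, (n,))
    x[disease == 1, :, 8:16, 8:16] += 1.0   # disease effect
    x[bias_attr == 1] += 0.5                # intensity-based bias effect
    return x, disease, bias_attr

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        self.head = nn.Linear(32 * 4 * 4, 2)

    def forward(self, x, return_feats=False):
        feats = []
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)          # keep intermediate activations for probing
        logits = self.head(x.flatten(1))
        return (logits, feats) if return_feats else logits

x, disease, bias_attr = make_data()
model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train briefly on the disease label only; the bias attribute is never a target.
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), disease)
    loss.backward()
    opt.step()

# Probe each layer: global-average-pool the activations, then fit a linear
# probe to predict the bias attribute from them.
model.eval()
with torch.no_grad():
    _, feats = model(x, return_feats=True)
for i, f in enumerate(feats):
    z = f.mean(dim=(2, 3)).numpy()           # (n, channels) pooled features
    probe = LogisticRegression(max_iter=1000).fit(z, bias_attr.numpy())
    print(f"layer {i}: bias-attribute probe accuracy = {probe.score(z, bias_attr.numpy()):.2f}")
```

In a setup like the study's, probe accuracy across layers would indicate where a bias is encoded; the paper's finding is that such encoding strength alone does not determine whether the model actually exploits the bias as a classification shortcut, so encoding measurements would need to be compared against the model's behaviour on controlled test data.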
Related Papers
4 records
  • [1] Enhancing interpretability and bias control in deep learning models for medical image analysis using generative AI
    Minutti-Martinez, Carlos
    Escalante-Ramirez, Boris
    Olveres, Jimena
    OPTICS, PHOTONICS, AND DIGITAL TECHNOLOGIES FOR IMAGING APPLICATIONS VIII, 2024, 12998
  • [2] A Machine Learning Application for Medical Image Analysis Using Deep Convolutional Neural Networks (CNNs) and Transfer Learning Models for Pneumonia Detection
    Shirwaikar, Rudresh
    Anitha, V.
    Rao, Vuda Sreenivasa
    Kaushal, Ashish Kumar
    Kakad, Shital
    Khan, Mohammad Ahmar
    JOURNAL OF ELECTRICAL SYSTEMS, 2024, 20 (05) : 2316 - 2324
  • [3] How Bias Reduction Is Affected by Covariate Choice, Unreliability, and Mode of Data Analysis: Results From Two Types of Within-Study Comparisons
    Cook, Thomas D.
    Steiner, Peter M.
    Pohl, Steffi
    MULTIVARIATE BEHAVIORAL RESEARCH, 2009, 44 (06) : 828 - 847
  • [4] How to under-power your real-world data study: Examples and ways to avoid them using quantitative bias analysis
    Venkatesan, Sudhir
    Gray, Christen M.
    PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, 2023, 32 : 61 - 62