Understanding and mitigating the impact of race with adversarial autoencoders

Cited: 0
Authors
Sarullo, Kathryn [1 ]
Swamidass, S. Joshua [2 ]
Affiliations
[1] Washington Univ, Dept Comp Sci, St. Louis, MO 63130 USA
[2] Washington Univ, Dept Pathol & Immunol, Sch Med St. Louis, St. Louis, MO USA
Source
COMMUNICATIONS MEDICINE | 2024, Vol. 4, No. 01
Keywords
DOI
10.1038/s43856-024-00627-3
Chinese Library Classification
R-3 [Medical research methods]; R3 [Basic medicine]
Discipline Code
1001
Abstract
Background: Artificial intelligence carries the risk of exacerbating some of our most challenging societal problems, but it also has the potential to mitigate and address them. The confounding effects of race on machine learning are an ongoing subject of research. This study aims to mitigate the impact of race on data-derived models using an adversarial variational autoencoder (AVAE). In this study, race is a self-reported feature. Race is often excluded as an input variable; however, because race is highly correlated with several other variables, it remains implicitly encoded in the data.

Methods: We propose building a model that (1) learns a low-dimensional latent space, (2) employs an adversarial training procedure that ensures the latent space does not encode race, and (3) retains the information necessary for reconstructing the data. We train the autoencoder to ensure the latent space does not indirectly encode race.

Results: In this study, the AVAE successfully removes information about race from the latent space (ROC AUC = 0.5). In contrast, latent spaces constructed using other approaches still allow race to be reconstructed with high fidelity. The AVAE's latent space does not encode race but conveys the important information required to reconstruct the dataset. Furthermore, the AVAE's latent space does not predict variables related to race (R2 = 0.003), while a model that includes race does (R2 = 0.08).

Conclusions: Though we constructed a race-independent latent space, any variable could be similarly controlled. We expect AVAEs are one of many approaches that will be required to effectively manage and understand bias in ML.

Plain language summary: Computer models used in healthcare can sometimes be biased based on race, leading to unfair outcomes. Our study focuses on understanding and reducing the impact of self-reported race in computer models that learn from data. We use a model called an adversarial variational autoencoder (AVAE), which helps ensure that the models don't accidentally use race in their calculations. The AVAE technique creates a simplified version of the data, called a latent space, that leaves out race information but keeps the other important details needed for accurate predictions. Our results show that this approach successfully removes race information from the models while still allowing them to work well. This method is one of many steps needed to address bias in computer learning and ensure fairer outcomes. Our findings highlight the importance of developing tools that can manage and understand bias, contributing to more equitable and trustworthy technology.

Sarullo and Swamidass use an adversarial variational autoencoder (AVAE) to remove race information from computer models while retaining the data essential for accurate predictions, effectively reducing bias. This approach highlights the importance of developing tools to manage bias, ensuring fairer and more trustworthy technology.
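The adversarial objective described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function names, the squared-error reconstruction term, and the weight `lam` are hypothetical. The key idea it shows is that the encoder/decoder are trained to minimize the reconstruction and KL terms while *maximizing* the cross-entropy of an adversary that tries to predict race from the latent space; in an alternating step (not shown), the adversary itself is trained to minimize that same cross-entropy.

```python
import numpy as np

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian, per sample
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def avae_objective(x, x_hat, mu, logvar, adv_probs, race_labels, lam=1.0):
    """Combined AVAE loss minimized by the encoder/decoder (illustrative).

    recon + KL keep the latent space informative and well-regularized;
    subtracting the adversary's cross-entropy penalizes latent codes
    from which self-reported race is predictable.
    """
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))   # reconstruction error
    kl = np.mean(kl_divergence(mu, logvar))             # VAE regularizer
    eps = 1e-12                                         # numerical safety for log
    adv_ce = -np.mean(race_labels * np.log(adv_probs + eps)
                      + (1 - race_labels) * np.log(1 - adv_probs + eps))
    # Minimizing -lam * adv_ce drives the adversary toward chance (AUC = 0.5).
    return recon + kl - lam * adv_ce
```

Note the sign of the adversarial term: an adversary that predicts race accurately (low cross-entropy) makes this objective larger, so gradient descent on the encoder pushes the latent space toward representations where the adversary performs no better than chance.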
Pages: 8
Related papers
50 in total
  • [41] Sonar feature representation with autoencoders and generative adversarial networks
    Linhardt, Timothy
    Sen Gupta, Ananya
    JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 2023, 153 (03):
  • [42] Unsupervised Domain Adaptation with Coupled Generative Adversarial Autoencoders
    Wang, Xiaoqing
    Wang, Xiangjun
    APPLIED SCIENCES-BASEL, 2018, 8 (12):
  • [43] Mitigating Unwanted Biases with Adversarial Learning
    Zhang, Brian Hu
    Lemoine, Blake
    Mitchell, Margaret
    PROCEEDINGS OF THE 2018 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY (AIES'18), 2018, : 335 - 340
  • [44] Molecular Generation for Desired Transcriptome Changes With Adversarial Autoencoders
    Shayakhmetov, Rim
    Kuznetsov, Maksim
    Zhebrak, Alexander
    Kadurin, Artur
    Nikolenko, Sergey
    Aliper, Alexander
    Polykovskiy, Daniil
    FRONTIERS IN PHARMACOLOGY, 2020, 11
  • [45] MITIGATING THE IMPACT OF SPEECH RECOGNITION ERRORS ON SPOKEN QUESTION ANSWERING BY ADVERSARIAL DOMAIN ADAPTATION
    Lee, Chia-Hsuan
    Chen, Yun-Nung
    Lee, Hung-Yi
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 7300 - 7304
  • [46] Generative Probabilistic Novelty Detection with Isometric Adversarial Autoencoders
    Almohsen, Ranya
    Keaton, Matthew R.
    Adjeroh, Donald A.
    Doretto, Gianfranco
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 2002 - 2012
  • [47] Adversarial dual autoencoders for trust-aware recommendation
    Dong, Manqing
    Yao, Lina
    Wang, Xianzhi
    Xu, Xiwei
    Zhu, Liming
    NEURAL COMPUTING AND APPLICATIONS, 2023, 35 : 13065 - 13075
  • [48] Adolescent pregnancy: Understanding the impact of age and race on outcomes
    DuPlessis, HM
    Bell, R
    Richards, T
    JOURNAL OF ADOLESCENT HEALTH, 1997, 20 (03) : 187 - 197
  • [49] Understanding autoencoders with information theoretic concepts
    Yu, Shujian
    Principe, Jose C.
    NEURAL NETWORKS, 2019, 117 : 104 - 123
  • [50] EdVAE: Mitigating codebook collapse with evidential discrete variational autoencoders
    Baykal, Gulcin
    Kandemir, Melih
    Unal, Gozde
    PATTERN RECOGNITION, 2024, 156