Understanding and mitigating the impact of race with adversarial autoencoders

Cited: 0
|
Authors
Sarullo, Kathryn [1 ]
Swamidass, S. Joshua [2 ]
Affiliations
[1] Washington Univ, Dept Comp Sci, St. Louis, MO 63130 USA
[2] Washington Univ, Dept Pathol & Immunol, Sch Med St. Louis, St. Louis, MO USA
Source
COMMUNICATIONS MEDICINE | 2024 / Vol. 4 / No. 1
Keywords
DOI
10.1038/s43856-024-00627-3
CLC number
R-3 [Medical research methodology]; R3 [Basic medicine];
Discipline code
1001;
Abstract
Background: Artificial intelligence carries the risk of exacerbating some of our most challenging societal problems, but it also has the potential to mitigate and address them. The confounding effect of race on machine learning is an ongoing subject of research. This study aims to mitigate the impact of race on data-derived models using an adversarial variational autoencoder (AVAE). In this study, race is a self-reported feature. Race is often excluded as an input variable; however, because race is highly correlated with several other variables, it remains implicitly encoded in the data.
Methods: We propose building a model that (1) learns a low-dimensional latent space, (2) employs an adversarial training procedure that ensures its latent space does not encode race, and (3) retains the information necessary to reconstruct the data. We train the autoencoder to ensure the latent space does not indirectly encode race.
Results: In this study, the AVAE successfully removes information about race from the latent space (ROC AUC = 0.5). In contrast, latent spaces constructed using other approaches still allow race to be reconstructed with high fidelity. The AVAE's latent space does not encode race but conveys the important information required to reconstruct the dataset. Furthermore, the AVAE's latent space does not predict variables related to race (R² = 0.003), while a model that includes race does (R² = 0.08).
Conclusions: Though we constructed a race-independent latent space, any variable could be similarly controlled. We expect AVAEs to be one of many approaches required to effectively manage and understand bias in ML.
Plain language summary: Computer models used in healthcare can sometimes be biased with respect to race, leading to unfair outcomes. Our study focuses on understanding and reducing the impact of self-reported race in computer models that learn from data. We use a model called an adversarial variational autoencoder (AVAE), which helps ensure that models don't accidentally use race in their calculations. The AVAE creates a simplified version of the data, called a latent space, that leaves out race information but keeps the other details needed for accurate predictions. Our results show that this approach successfully removes race information from the models while still allowing them to work well. This method is one of many steps needed to address bias in computer learning and ensure fairer outcomes. Our findings highlight the importance of developing tools that can manage and understand bias, contributing to more equitable and trustworthy technology.
Editorial summary: Sarullo and Swamidass use an adversarial variational autoencoder (AVAE) to remove race information from computer models while retaining essential data for accurate predictions, effectively reducing bias. This approach highlights the importance of developing tools to manage bias, ensuring fairer and more trustworthy technology.
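The adversarial training idea in the Methods can be illustrated with a deliberately simplified sketch: this is not the authors' code or architecture (their model is variational and nonlinear), but a minimal linear autoencoder in plain NumPy whose encoder is penalized whenever a logistic-regression adversary can predict a protected attribute from the latent codes. All names, sizes, and hyperparameters below are illustrative assumptions.

```python
# Hypothetical sketch of adversarial autoencoder training (not the
# paper's implementation): a linear encoder/decoder plus a logistic
# adversary. The encoder minimizes reconstruction error while
# maximizing the adversary's loss, pushing the latent codes toward
# independence from the protected attribute.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 features; feature 0 leaks a binary protected attribute.
n, d, k = 500, 4, 2
attr = rng.integers(0, 2, size=n)          # protected attribute (0/1)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * attr
X -= X.mean(0)
X /= X.std(0)

W_enc = rng.normal(scale=0.1, size=(d, k))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))  # decoder weights
w_adv = np.zeros(k)                         # adversary (logistic regression)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

lam, lr = 2.0, 0.05                         # adversarial weight, step size
for step in range(2000):
    Z = X @ W_enc                           # latent codes
    X_hat = Z @ W_dec                       # reconstruction

    # Adversary step: gradient ascent on its log-likelihood attr ~ Z.
    p = sigmoid(Z @ w_adv)
    w_adv += lr * Z.T @ (attr - p) / n

    # Autoencoder step: minimize reconstruction loss minus
    # lam * (adversary loss), i.e. a reversed adversarial gradient.
    g_rec = (X_hat - X) / n                 # d(MSE)/d(X_hat)
    grad_dec = Z.T @ g_rec
    grad_enc_rec = X.T @ (g_rec @ W_dec.T)
    grad_enc_adv = X.T @ np.outer(p - attr, w_adv) / n  # helps adversary
    W_dec -= lr * grad_dec
    W_enc -= lr * (grad_enc_rec - lam * grad_enc_adv)

# If the adversarial term succeeds, the adversary's accuracy on the
# latent codes should approach chance (0.5), analogous to the paper's
# ROC AUC = 0.5 result.
acc = ((sigmoid((X @ W_enc) @ w_adv) > 0.5) == attr).mean()
print(f"adversary accuracy on latent space: {acc:.2f}")
```

In the paper's full model the same alternating scheme applies, with the adversary retrained against each updated latent space so that race cannot be recovered even indirectly through correlated variables.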
Pages: 8
Related papers
Total: 50 records
  • [21] Multi-view Defense with Adversarial Autoencoders
    Sun, Xuli
    Sun, Shiliang
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [22] Drum Synthesis and Rhythmic Transformation with Adversarial Autoencoders
    Tomczak, Maciej
    Goto, Masataka
    Hockman, Jason
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 2427 - 2435
  • [23] Face Aging by Explainable Conditional Adversarial Autoencoders
    Korgialas, Christos
    Pantraki, Evangelia
    Bolari, Angeliki
    Sotiroudi, Martha
    Kotropoulos, Constantine
    JOURNAL OF IMAGING, 2023, 9 (05)
  • [24] Deep Mixture of Adversarial Autoencoders Clustering Network
    Liu, Aofu
    Ji, Zexuan
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, 2021, 13019 : 191 - 202
  • [25] Adversarial and variational autoencoders improve metagenomic binning
    Lindez, Pau Piera
    Johansen, Joachim
    Kutuzova, Svetlana
    Sigurdsson, Arnor Ingi
    Nissen, Jakob Nybo
    Rasmussen, Simon
    COMMUNICATIONS BIOLOGY, 2023, 6 (01)
  • [26] ANE: Network Embedding via Adversarial Autoencoders
    Xiao, Yang
    Xiao, Ding
    Hu, Binbin
    Shi, Chuan
    2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP), 2018, : 66 - 73
  • [28] Alleviating Adversarial Attacks on Variational Autoencoders with MCMC
    Kuzina, Anna
    Welling, Max
    Tomczak, Jakub M.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [29] On Mitigating Popularity Bias in Recommendations via Variational Autoencoders
    Borges, Rodrigo
    Stefanidis, Kostas
    36TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2021, 2021, : 1383 - 1386
  • [30] Double-Adversarial Activation Anomaly Detection: Adversarial Autoencoders are Anomaly Generators
    Schulze, Jan-Philipp
    Sperl, Philip
    Boettinger, Konstantin
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,