Understanding and mitigating the impact of race with adversarial autoencoders

Citations: 0
|
Authors
Sarullo, Kathryn [1 ]
Swamidass, S. Joshua [2 ]
Affiliations
[1] Washington Univ, Dept Comp Sci, St. Louis, MO 63130 USA
[2] Washington Univ, Dept Pathol & Immunol, Sch Med St. Louis, St. Louis, MO USA
Source
COMMUNICATIONS MEDICINE | 2024 / Vol. 4 / No. 1
Keywords
DOI
10.1038/s43856-024-00627-3
CLC number
R-3 [Medical research methods]; R3 [Basic medicine];
Discipline code
1001 ;
Abstract
Background: Artificial intelligence carries the risk of exacerbating some of our most challenging societal problems, but it also has the potential to mitigate and address them. The confounding effect of race on machine learning is an ongoing subject of research. This study aims to mitigate the impact of race on data-derived models using an adversarial variational autoencoder (AVAE). In this study, race is a self-reported feature. Race is often excluded as an input variable; however, because race is highly correlated with several other variables, it remains implicitly encoded in the data.
Methods: We propose building a model that (1) learns a low-dimensional latent space, (2) employs an adversarial training procedure that ensures the latent space does not encode race, and (3) retains the information necessary to reconstruct the data. We train the autoencoder to ensure the latent space does not indirectly encode race.
Results: The AVAE successfully removes information about race from the latent space (ROC AUC = 0.5). In contrast, latent spaces constructed using other approaches still allow race to be reconstructed with high fidelity. The AVAE's latent space does not encode race yet conveys the information required to reconstruct the dataset. Furthermore, the AVAE's latent space does not predict variables related to race (R2 = 0.003), while a model that includes race does (R2 = 0.08).
Conclusions: Although we constructed a race-independent latent space, any variable could be controlled in the same way. We expect AVAEs to be one of many approaches required to effectively manage and understand bias in ML.
Plain-language summary: Computer models used in healthcare can sometimes be biased based on race, leading to unfair outcomes. Our study focuses on understanding and reducing the impact of self-reported race in computer models that learn from data. We use a model called an adversarial variational autoencoder (AVAE), which helps ensure that models don't accidentally use race in their calculations. The AVAE creates a simplified version of the data, called a latent space, that leaves out race information but keeps the other details needed for accurate predictions. Our results show that this approach successfully removes race information from the models while still allowing them to work well. This method is one of many steps needed to address bias in machine learning and ensure fairer outcomes. Sarullo and Swamidass use an adversarial variational autoencoder (AVAE) to remove race information from computer models while retaining the data essential for accurate predictions, effectively reducing bias. This approach highlights the importance of developing tools to manage bias, ensuring fairer and more trustworthy technology.
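The three-part objective described in the Methods (reconstruction, a regularized latent space, and an adversary that tries to recover race) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear encoder/decoder, the adversary weights, the toy data, and the weighting term `lam` are all assumptions chosen for brevity. In a real AVAE the encoder/decoder minimize the combined loss while the adversary is trained separately to minimize its own loss; driving the adversary toward chance (ROC AUC = 0.5) corresponds to a latent space that does not encode race.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n patients, d clinical features, plus a binary self-reported
# race label that should NOT be recoverable from the latent space.
n, d, k = 64, 10, 3                      # samples, features, latent dims
X = rng.normal(size=(n, d))
race = rng.integers(0, 2, size=n)

# Illustrative (untrained) linear encoder, decoder, and adversary weights.
W_enc = rng.normal(size=(d, k))
W_dec = rng.normal(size=(k, d))
w_adv = rng.normal(size=k)

def avae_losses(X, race):
    """Return the three terms of the AVAE objective (a minimal sketch)."""
    mu = X @ W_enc                        # latent means
    log_var = np.zeros_like(mu)           # unit variance, for simplicity
    z = mu + rng.normal(size=mu.shape)    # reparameterized latent sample
    recon = ((X - z @ W_dec) ** 2).mean()                       # MSE
    kl = 0.5 * (np.exp(log_var) + mu**2 - 1 - log_var).mean()   # KL to N(0, I)
    p = 1 / (1 + np.exp(-(z @ w_adv)))    # adversary's race prediction
    adv = -(race * np.log(p) + (1 - race) * np.log(1 - p)).mean()
    return recon, kl, adv

recon, kl, adv = avae_losses(X, race)
lam = 1.0
# Encoder/decoder minimize this; the -lam * adv term rewards latent codes
# from which the adversary cannot predict race.
encoder_loss = recon + kl - lam * adv
```

The sign flip on the adversarial term is the key design choice: the adversary minimizes `adv`, while the encoder maximizes it, so at equilibrium the latent space carries no usable race signal.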
Pages: 8
Related papers
50 records
  • [31] Understanding and Mitigating the Impact of Model Compression for Document Image Classification
    Siddiqui, Shoaib Ahmed
    Dengel, Andreas
    Ahmed, Sheraz
    DOCUMENT ANALYSIS AND RECOGNITION - ICDAR 2021, PT I, 2021, 12821 : 147 - 159
  • [32] Confirmation Bias in Sport Science: Understanding and Mitigating Its Impact
    Beato, Marco
    Latinjak, Alexander T.
    Bertollo, Maurizio
    Boullosa, Daniel
    INTERNATIONAL JOURNAL OF SPORTS PHYSIOLOGY AND PERFORMANCE, 2025,
  • [33] Adversarial dual autoencoders for trust-aware recommendation
    Dong, Manqing
    Yao, Lina
    Wang, Xianzhi
    Xu, Xiwei
    Zhu, Liming
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (18): : 13065 - 13075
  • [34] Adversarial Autoencoders for Metasurface Design Optimization (invited)
    Kudyshev, Zhaxylyk A.
    Kildishev, Alexander, V
    Shalaev, Vladimir M.
    Boltasseva, Alexandra
    2020 INTERNATIONAL APPLIED COMPUTATIONAL ELECTROMAGNETICS SOCIETY SYMPOSIUM (2020 ACES-MONTEREY), 2020,
  • [35] Adversarial Autoencoders Oversampling Algorithm for Imbalanced Image Data
    Zhi, Weimei
    Chang, Zhi
    Lu, Junhua
    Geng, Zhengqian
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (11): : 4208 - 4218
  • [36] Adversarial autoencoders with constant-curvature latent manifolds
    Grattarola, Daniele
    Livi, Lorenzo
    Alippi, Cesare
    APPLIED SOFT COMPUTING, 2019, 81
  • [37] Robust Anomaly Detection in Images Using Adversarial Autoencoders
    Beggel, Laura
    Pfeiffer, Michael
    Bischl, Bernd
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT I, 2020, 11906 : 206 - 222
  • [38] Adversarial Defense based on Structure-to-Signal Autoencoders
    Folz, Joachim
    Palacio, Sebastian
    Hees, Joern
    Dengel, Andreas
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 3568 - 3577
  • [39] Quality Guarantees for Autoencoders via Unsupervised Adversarial Attacks
    Boeing, Benedikt
    Roy, Rajarshi
    Mueller, Emmanuel
    Neider, Daniel
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 206 - 222
  • [40] Adversarial Training of Deep Autoencoders Towards Recommendation Tasks
    Chae, Dong-Kyu
    Kim, Sang-Wook
    PROCEEDINGS OF 2018 INTERNATIONAL CONFERENCE ON NETWORK INFRASTRUCTURE AND DIGITAL CONTENT (IEEE IC-NIDC), 2018, : 91 - 95