Tackling Algorithmic Bias in Neural-Network Classifiers using Wasserstein-2 Regularization

Authors
Laurent Risser
Alberto González Sanz
Quentin Vincenot
Jean-Michel Loubes
Affiliations
[1] Institut de Mathématiques de Toulouse (UMR 5219)
[2] CNRS
[3] Artificial and Natural Intelligence Toulouse Institute (ANITI)
[4] Institut de Recherche Technologique (IRT) Saint Exupéry
[5] Université de Toulouse
Keywords
Shape recognition; Algorithmic bias; Image classification; Neural-networks; Regularization
DOI
Not available
Abstract
The increasingly common use of neural-network classifiers in industrial and social applications of image analysis has enabled impressive progress in recent years. Such methods are, however, sensitive to algorithmic bias, i.e., to an under- or over-representation of positive predictions, or to higher prediction errors, in specific subgroups of images. In this paper, we introduce a new method to temper the algorithmic bias in neural-network-based classifiers. Our method is agnostic to the neural-network architecture and scales well to massive training sets of images: it only overloads the loss function with a Wasserstein-2-based regularization term, whose impact on specific output predictions is back-propagated using a new model based on the Gâteaux derivatives of the predictions distribution. This model is algorithmically reasonable and makes it possible to use our regularized loss with standard stochastic gradient-descent strategies. Its good behavior is assessed on the reference Adult census, MNIST, and CelebA datasets.
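For illustration, here is a minimal PyTorch sketch of a loss regularized along these lines for a binary sensitive attribute. It is not the authors' Gâteaux-derivative scheme: it simply lets autograd differentiate through empirical quantiles of the per-group score distributions, using the fact that the 1-D Wasserstein-2 distance reduces to a squared difference of quantile functions. All names (w2_penalty, regularized_loss, lam, n_quantiles) are hypothetical, and every minibatch is assumed to contain samples from both subgroups.

```python
import torch
import torch.nn.functional as F

def w2_penalty(scores_a, scores_b, n_quantiles=50):
    # Squared Wasserstein-2 distance between two 1-D empirical
    # distributions, computed by matching their quantile functions.
    # torch.quantile interpolates linearly, so the penalty is
    # differentiable and gradients reach the model parameters.
    qs = torch.linspace(0.0, 1.0, n_quantiles, device=scores_a.device)
    return torch.mean((torch.quantile(scores_a, qs) - torch.quantile(scores_b, qs)) ** 2)

def regularized_loss(logits, targets, group, lam=1.0):
    # Standard cross-entropy plus the W2 penalty between the
    # positive-class score distributions of the two subgroups
    # (group is a 0/1 tensor; both values must occur in the batch).
    ce = F.cross_entropy(logits, targets)
    scores = torch.softmax(logits, dim=1)[:, 1]
    return ce + lam * w2_penalty(scores[group == 0], scores[group == 1])
```

Minimizing this combined loss with plain SGD pushes the two subgroup score distributions together while preserving accuracy; the Gâteaux-derivative model described in the paper plays the analogous role of making the Wasserstein-2 term compatible with minibatch stochastic gradient descent.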
Pages: 672-689 (17 pages)
Related Papers
50 records in total
  • [1] Tackling Algorithmic Bias in Neural-Network Classifiers using Wasserstein-2 Regularization
    Risser, Laurent
    Sanz, Alberto Gonzalez
    Vincenot, Quentin
    Loubes, Jean-Michel
    JOURNAL OF MATHEMATICAL IMAGING AND VISION, 2022, 64 (06) : 672 - 689
  • [2] On the Generalization Ability of Neural-Network Classifiers
    Musavi, M. T.
    Chan, K. H.
    Hummels, D. M.
    Kalantri, K.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1994, 16 (06) : 659 - 663
  • [3] Improvement of Neural-Network Classifiers Using Fuzzy Floating Centroids
    Liu, Shuangrong
    Wang, Lin
    Yang, Bo
    Zhou, Jin
    Chen, Zhenxiang
    Dong, Huifen
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (03) : 1392 - 1404
  • [4] Improving Neural-Network Classifiers Using Nearest Neighbor Partitioning
    Wang, Lin
    Yang, Bo
    Chen, Yuehui
    Zhang, Xiaoqian
    Orchard, Jeff
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2017, 28 (10) : 2255 - 2267
  • [5] Combining the Results of Several Neural-Network Classifiers
    Rogova, G.
    NEURAL NETWORKS, 1994, 7 (05) : 777 - 781
  • [6] Energy Optimisation of Cascading Neural-network Classifiers
    Agrawal, Vinamra
    Gopalan, Anandha
    PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON SMART CITIES AND GREEN ICT SYSTEMS (SMARTGREENS), 2020, : 149 - 158
  • [7] The eigenspace separation transform for neural-network classifiers
    Torrieri, D.
    NEURAL NETWORKS, 1999, 12 (03) : 419 - 427
  • [8] Strong Universal Consistency of Neural-Network Classifiers
    Farago, A.
    Lugosi, G.
    IEEE TRANSACTIONS ON INFORMATION THEORY, 1993, 39 (04) : 1146 - 1151
  • [9] The Combination of Multiple Classifiers by a Neural-Network Approach
    Huang, Y. S.
    Liu, K.
    Suen, C. Y.
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 1995, 9 (03) : 579 - 597