Fairness-aware Data Integration

Cited by: 1
Authors
Mazilu, Lacramioara [1 ,2 ]
Paton, Norman W. [1 ]
Konstantinou, Nikolaos [1 ]
Fernandes, Alvaro A. A. [1 ]
Affiliations
[1] Univ Manchester, Oxford Rd, Manchester M13 9PL, Lancs, England
[2] Peak AI Ltd, Charlotte St, Manchester M1 4ET, Lancs, England
Funding
Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Data integration; data preparation; fairness; bias; CLASSIFICATION;
DOI
10.1145/3519419
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Machine learning can be applied in applications that take decisions that impact people's lives. Such techniques have the potential to make decision making more objective, but there is also a risk that the decisions discriminate against certain groups as a result of bias in the underlying data. Reducing bias, or promoting fairness, has been the focus of significant investigation in machine learning, for example, based on preprocessing the training data, changing the learning algorithm, or post-processing the results of the learning. However, prior to these activities, data integration discovers and integrates the data that is used for training, and data integration processes have the potential to produce data that leads to biased conclusions. In this article, we propose an approach that generates schema mappings in ways that take into account: (i) properties that are intrinsic to mapping results that may give rise to bias in analyses; and (ii) bias observed in classifiers trained on the results of different sets of mappings. The approach explores a space of different ways of integrating the data, using a Tabu search algorithm, guided by bias-aware objective functions that represent different types of bias. The resulting approach is evaluated using the Adult Census and German Credit datasets to explore the extent to which, and the circumstances in which, the approach can increase the fairness of the results of the data integration process.
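The abstract describes exploring a space of candidate integrations with a Tabu search guided by bias-aware objective functions. As a rough illustration of that search pattern only (not the paper's implementation), the sketch below runs a generic Tabu search over toy "mapping sets" encoded as bit tuples, with a simple imbalance measure standing in for a fairness objective; all function and variable names here (`tabu_search`, `neighbours`, `imbalance`) are assumptions for the example.

```python
import random

def tabu_search(initial, neighbours, objective, iters=100, tenure=5, seed=0):
    """Generic Tabu search: minimise `objective` over solutions
    produced by the `neighbours` move generator, using a short-term
    tabu list to avoid revisiting recent solutions."""
    rng = random.Random(seed)
    current = best = initial
    best_score = objective(best)
    tabu = []  # recently visited solutions (short-term memory)
    for _ in range(iters):
        candidates = [n for n in neighbours(current, rng) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=objective)  # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)  # expire the oldest tabu entry
        score = objective(current)
        if score < best_score:
            best, best_score = current, score
    return best, best_score

# Toy stand-in: a "mapping set" is a bit tuple; a move flips one bit.
def neighbours(sol, rng):
    out = []
    for i in range(len(sol)):
        flipped = list(sol)
        flipped[i] = 1 - flipped[i]
        out.append(tuple(flipped))
    return out

# Hypothetical bias-style objective: penalise imbalance between the
# two groups encoded by 1-bits and 0-bits (0 == perfectly balanced).
def imbalance(sol):
    ones = sum(sol)
    return abs(ones - (len(sol) - ones))

best, score = tabu_search((1, 1, 1, 1, 0, 0), neighbours, imbalance)
print(best, score)  # reaches a balanced solution with score 0
```

In the paper's setting, the bit-tuple solutions would instead be sets of schema mappings and the objective a bias measure computed on mapping results or on classifiers trained over them; the skeleton of the search is the same.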
Pages: 26
Related Papers
50 records in total
  • [41] A novel fairness-aware parallel download scheme
    Kim, Eunhye
    Karrer, Roger P.
    Park, Ju-Won
    Kim, Sehun
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2016, 9 (01) : 42 - 53
  • [42] Fairness-Aware Optimal Graph Filter Design
    Kose, O. Deniz
    Mateos, Gonzalo
    Shen, Yanning
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (02) : 142 - 154
  • [43] Fairness-aware Adaptive Network Link Prediction
    Kose, O. Deniz
    Shen, Yanning
    2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 677 - 681
  • [44] FAIR: Fairness-aware information retrieval evaluation
    Gao, Ruoyuan
    Ge, Yingqiang
    Shah, Chirag
    JOURNAL OF THE ASSOCIATION FOR INFORMATION SCIENCE AND TECHNOLOGY, 2022, 73 (10) : 1461 - 1473
  • [45] Fairness-aware Configuration of Machine Learning Libraries
    Tizpaz-Niari, Saeid
    Kumar, Ashish
    Tan, Gang
    Trivedi, Ashutosh
    2022 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2022), 2022, : 909 - 920
  • [46] How Fair is Fairness-aware Representative Ranking?
    Saxena, Akrati
    Fletcher, George
    Pechenizkiy, Mykola
    WEB CONFERENCE 2021: COMPANION OF THE WORLD WIDE WEB CONFERENCE (WWW 2021), 2021, : 161 - 165
  • [47] Fairness-aware Bandit-based Recommendation
    Huang, Wen
    Labille, Kevin
    Wu, Xintao
    Lee, Dongwon
    Heffernan, Neil
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 1273 - 1278
  • [48] FairGAN: Fairness-aware Generative Adversarial Networks
    Xu, Depeng
    Yuan, Shuhan
    Zhang, Lu
    Wu, Xintao
    2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2018, : 570 - 575
  • [49] Fairness-aware Differentially Private Collaborative Filtering
    Yang, Zhenhuan
    Ge, Yingqiang
    Su, Congzhe
    Wang, Dingxian
    Zhao, Xiaoting
    Ying, Yiming
    COMPANION OF THE WORLD WIDE WEB CONFERENCE, WWW 2023, 2023, : 927 - 931
  • [50] Towards Fairness-aware Adversarial Network Pruning
    Zhang, Lei
    Wang, Zhibo
    Dong, Xiaowei
    Feng, Yunhe
    Pang, Xiaoyi
    Zhang, Zhifei
    Ren, Kui
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5145 - 5154