How AI developers can assure algorithmic fairness

Cited by: 0
Authors
Xivuri K. [1]
Twinomurinzi H. [1]
Affiliation
[1] Centre for Applied Data Science, University of Johannesburg, Johannesburg
Source
Discover Artificial Intelligence | 2023 / Vol. 3 / Issue 01
Keywords
AI developers; Algorithms; Artificial intelligence (AI); Domination-free development environment; Fairness; Jurgen Habermas; Process model
DOI
10.1007/s44163-023-00074-4
Abstract
Artificial intelligence (AI) has rapidly become one of the technologies used for competitive advantage. However, there are also growing concerns about bias in AI models as AI developers risk introducing bias both unintentionally and intentionally. This study, using a qualitative approach, investigated how AI developers can contribute to the development of fair AI models. The key findings reveal that the risk of bias is mainly because of the lack of gender and social diversity in AI development teams, and haste from AI managers to deliver much-anticipated results. The integrity of AI developers is also critical as they may conceal bias from management and other AI stakeholders. The testing phase before model deployment risks bias because it is rarely representative of the diverse societal groups that may be affected. The study makes practical recommendations in four main areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate actions to ensure that their AI developers adhere to fair processes when developing AI; AI developers must prioritise ethical considerations and consider the impact their models may have on society; partnerships between AI developers, AI stakeholders, and society that might be impacted by AI models should be established; and AI developers need to prioritise transparency and explainability in their models while ensuring adequate testing for bias and corrective measures before deployment. Emotional intelligence training should also be provided to the AI developers to help them engage in productive conversations with individuals outside the development team. © The Author(s) 2023.
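One of the study's recommendations is that models be adequately tested for bias, with corrective measures taken before deployment. As a minimal illustrative sketch (not taken from the paper), the following Python snippet checks one common group-fairness metric, the demographic parity difference, against a tolerance; the predictions, group labels, and the 0.1 threshold are hypothetical placeholders.

```python
# Illustrative pre-deployment fairness check (hypothetical data, not from the paper).
from typing import Sequence


def demographic_parity_difference(y_pred: Sequence[int], group: Sequence[str]) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    groups = sorted(set(group))
    assert len(groups) == 2, "this sketch assumes exactly two groups"
    rates = []
    for g in groups:
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])


# Placeholder model outputs and group labels for two societal groups.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(y_pred, group)
# Flag the model for corrective action if the gap exceeds a chosen tolerance.
if gap > 0.1:
    print(f"Fairness check failed: demographic parity difference = {gap:.2f}")
else:
    print(f"Fairness check passed: demographic parity difference = {gap:.2f}")
```

In practice such a check would be run on a held-out test set that is representative of the societal groups the model may affect, which is precisely the gap in testing practice the study highlights.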
Related Papers
50 in total
  • [41] The ideals program in algorithmic fairness
    Stewart, Rush T.
    AI & SOCIETY, 2024
  • [42] Algorithmic fairness in social context
    Huang Y.
    Liu W.
    Gao W.
    Lu X.
    Liang X.
    Yang Z.
    Li H.
    Ma L.
    Tang S.
    BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 2023, 3 (03)
  • [43] Algorithmic fairness in computational medicine
    Xu, Jie
    Xiao, Yunyu
    Wang, Wendy Hui
    Ning, Yue
    Shenkman, Elizabeth A.
    Bian, Jiang
    Wang, Fei
    EBIOMEDICINE, 2022, 84
  • [44] An Economic Perspective on Algorithmic Fairness
    Rambachan, Ashesh
    Kleinberg, Jon
    Ludwig, Jens
    Mullainathan, Sendhil
    AEA PAPERS AND PROCEEDINGS, 2020, 110 : 91 - 95
  • [45] On Algorithmic Fairness in Medical Practice
    Grote, Thomas
    Keeling, Geoff
    CAMBRIDGE QUARTERLY OF HEALTHCARE ETHICS, 2022, 31 (01) : 83 - 94
  • [46] Predictive policing and algorithmic fairness
    Hung, Tzu-Wei
    Yen, Chun-Ping
    SYNTHESE, 2023, 201
  • [47] Fairness in Algorithmic Decision Making
    Chakraborty, Abhijnan
    Gummadi, Krishna P.
    PROCEEDINGS OF THE 7TH ACM IKDD CODS AND 25TH COMAD (CODS-COMAD 2020), 2020, : 367 - 368
  • [48] Predictive policing and algorithmic fairness
    Hung, Tzu-Wei
    Yen, Chun-Ping
    SYNTHESE, 2023, 201 (06)
  • [49] On the Fairness of Causal Algorithmic Recourse
    von Kuegelgen, Julius
    Karimi, Amir-Hossein
    Bhatt, Umang
    Valera, Isabel
    Weller, Adrian
    Schoelkopf, Bernhard
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 9584 - 9594
  • [50] Poisoning Attacks on Algorithmic Fairness
    Solans, David
    Biggio, Battista
    Castillo, Carlos
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT I, 2021, 12457 : 162 - 177