How AI developers can assure algorithmic fairness

Cited by: 0
Authors
Xivuri K. [1]
Twinomurinzi H. [1]
Affiliation
[1] Centre for Applied Data Science, University of Johannesburg, Johannesburg
Source
Discover Artificial Intelligence, 2023, Vol. 3, Issue 1
Keywords
AI developers; Algorithms; Artificial intelligence (AI); Domination-free development environment; Fairness; Jurgen Habermas; Process model
DOI
10.1007/s44163-023-00074-4
Abstract
Artificial intelligence (AI) has rapidly become one of the technologies used for competitive advantage. However, there are also growing concerns about bias in AI models as AI developers risk introducing bias both unintentionally and intentionally. This study, using a qualitative approach, investigated how AI developers can contribute to the development of fair AI models. The key findings reveal that the risk of bias is mainly because of the lack of gender and social diversity in AI development teams, and haste from AI managers to deliver much-anticipated results. The integrity of AI developers is also critical as they may conceal bias from management and other AI stakeholders. The testing phase before model deployment risks bias because it is rarely representative of the diverse societal groups that may be affected. The study makes practical recommendations in four main areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate actions to ensure that their AI developers adhere to fair processes when developing AI; AI developers must prioritise ethical considerations and consider the impact their models may have on society; partnerships between AI developers, AI stakeholders, and society that might be impacted by AI models should be established; and AI developers need to prioritise transparency and explainability in their models while ensuring adequate testing for bias and corrective measures before deployment. Emotional intelligence training should also be provided to the AI developers to help them engage in productive conversations with individuals outside the development team. © The Author(s) 2023.
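The study's recommendation that models be tested for bias, with corrective measures applied before deployment, can be made concrete with a simple group-fairness check on a held-out test set. The sketch below is not from the paper; it is a minimal illustration, assuming binary predictions and a single binary protected attribute, that computes two common gap metrics (demographic parity difference and equal-opportunity difference) with NumPy. The function names and the 0.1 tolerance are hypothetical choices made for this example.

# Illustrative sketch only (not from the study): a minimal pre-deployment
# bias check assuming binary predictions, binary labels, and one binary
# protected attribute. Function names and the threshold are hypothetical.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between group 1 and group 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates between group 1 and group 0."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    return tpr1 - tpr0

if __name__ == "__main__":
    # Toy stand-in for a held-out test set; in practice this set should
    # represent the societal groups the model may affect.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

    dpd = demographic_parity_difference(y_pred, group)
    eod = equal_opportunity_difference(y_true, y_pred, group)
    print(f"demographic parity difference: {dpd:+.2f}")
    print(f"equal opportunity difference:  {eod:+.2f}")
    if max(abs(dpd), abs(eod)) > 0.1:  # hypothetical tolerance
        print("Fairness gap exceeds tolerance; review before deployment.")

In practice such a check would be run per protected attribute and per intersectional subgroup, on test data that actually represents the affected societal groups the abstract notes are often missing from the testing phase.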
Related Papers
50 records in total
  • [1] Perception of fairness in algorithmic decisions: Future developers' perspective
    Kleanthous, Styliani
    Kasinidou, Maria
    Barlas, Pinar
    Otterbacher, Jahna
    PATTERNS, 2022, 3 (01):
  • [2] Algorithmic Fairness in AI: An Interdisciplinary View
    Pfeiffer, Jella
    Gutschow, Julia
    Haas, Christian
    Moeslein, Florian
    Maspfuhl, Oliver
    Borgers, Frederik
    Alpsancar, Suzana
    BUSINESS & INFORMATION SYSTEMS ENGINEERING, 2023, 65 (02) : 209 - 222
  • [3] Disability, fairness, and algorithmic bias in AI recruitment
    Tilmes, Nicholas
    ETHICS AND INFORMATION TECHNOLOGY, 2022, 24 (02)
  • [4] The Fairness in Algorithmic Fairness
    Holm, Sune
    RES PUBLICA-A JOURNAL OF MORAL LEGAL AND POLITICAL PHILOSOPHY, 2023, 29 (02) : 265 - 281
  • [5] HOW CAN WE ASSURE SUPERIOR LEADERSHIP
    ROWLEY, LN
    MECHANICAL ENGINEERING, 1967, 89 (11) : 89
  • [6] How can the principle of justice as fairness help address the digital divide in AI and healthcare?
    Gozum, Ivan Efreaim A.
    JOURNAL OF PUBLIC HEALTH, 2024, 47 (01) : e187 - e188
  • [7] AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias
    Bellamy, R. K. E.
    Dey, K.
    Hind, M.
    Hoffman, S. C.
    Houde, S.
    Kannan, K.
    Lohia, P.
    Martino, J.
    Mehta, S.
    Mojsilović, A.
    Nagar, S.
    Ramamurthy, K. Natesan
    Richards, J.
    Saha, D.
    Sattigeri, P.
    Singh, M.
    Varshney, K. R.
    Zhang, Y.
    IBM JOURNAL OF RESEARCH AND DEVELOPMENT, 2019, 63 (4-5)
  • [8] Algorithmic Fairness and Fairness Computing
    Fan Z.
    Meng X.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2023, 60 (09) : 2048 - 2066