Goal Orientation for Fair Machine Learning Algorithms

Cited: 1
Authors
Xu, Heng [1,2]
Zhang, Nan [1]
Affiliations
[1] Univ Florida, Warrington Coll Business, Gainesville, FL 32611 USA
[2] Amer Univ, Kogod Sch Business, Washington, DC 20016 USA
Funding
US National Science Foundation
Keywords
Fairness; machine learning; optimization goal; selection; screening; ALTERNATIVE PREDICTORS; PERSONNEL-SELECTION; COGNITIVE-ABILITY; SECRETARY PROBLEM; BIAS; PERFORMANCE; MANAGEMENT; EMPLOYMENT; VALIDITY; FUTURE
DOI
10.1177/10591478241234998
Chinese Library Classification
T [Industrial Technology]
Discipline Code
08
Abstract
A key challenge facing the use of machine learning (ML) in organizational selection settings (e.g., the processing of loan or job applications) is the potential bias against (racial and gender) minorities. To address this challenge, a rich literature of Fairness-Aware ML (FAML) algorithms has emerged, attempting to ameliorate biases while maintaining the predictive accuracy of ML algorithms. Almost all existing FAML algorithms define their optimization goals according to a selection task, meaning that ML outputs are assumed to be the final selection outcome. In practice, though, ML outputs are rarely used as-is. In personnel selection, for example, ML often serves a support role to human resource managers, allowing them to more easily exclude unqualified applicants. This effectively assigns to ML a screening rather than a selection task. It might be tempting to treat selection and screening as two variations of the same task that differ only quantitatively on the admission rate. This paper, however, reveals a qualitative difference between the two in terms of fairness. Specifically, we demonstrate through conceptual development and mathematical analysis that miscategorizing a screening task as a selection one could not only degrade final selection quality but also result in fairness problems such as selection biases within the minority group. After validating our findings with experimental studies on simulated and real-world data, we discuss several business and policy implications, highlighting the need for firms and policymakers to properly categorize the task assigned to ML in assessing and correcting algorithmic biases.
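The abstract's core claim is that "selection" (ML output is the final decision) and "screening" (ML only filters, and a better-informed human decides among the survivors) are qualitatively different tasks, so optimizing ML for the wrong one can degrade outcomes. A minimal sketch of that distinction, assuming hypothetical applicants with a latent true quality and a noisy ML score (this is an illustrative simulation, not the paper's model or analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 50                                       # applicants, final hires

quality = rng.normal(size=n)                          # latent true quality (unobserved by ML)
ml_score = quality + rng.normal(scale=1.0, size=n)    # noisy ML prediction of quality

# Pipeline A: "selection" task -- the ML output IS the final outcome:
# simply admit the top k by ML score.
select_a = np.argsort(ml_score)[-k:]

# Pipeline B: "screening" task -- ML excludes the bottom half, then a
# (stylized, fully informed) human picks the top k from the survivors.
screened = np.argsort(ml_score)[n // 2:]              # applicants passing the screen
select_b = screened[np.argsort(quality[screened])[-k:]]

print(f"mean true quality, ML-as-selection: {quality[select_a].mean():.3f}")
print(f"mean true quality, ML-as-screening: {quality[select_b].mean():.3f}")
```

Under these assumptions the screening pipeline yields higher average true quality among the final hires, because the downstream decision-maker corrects for ML noise among the screened pool; an ML model (or fairness intervention) tuned as if its top-k output were final would be optimizing the wrong objective for pipeline B.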
Pages: 19