Bias and Discrimination in AI: A Cross-Disciplinary Perspective

Cited by: 85
Authors
Ferrer, Xavier [1 ]
van Nuenen, Tom [2 ]
Such, Jose M. [2 ,3 ]
Cote, Mark [4 ]
Criado, Natalia [5 ,6 ]
Affiliations
[1] King's College London, Digital Discrimination project, Department of Informatics, London WC2R 2LS, England
[2] King's College London, Department of Informatics, London WC2R 2LS, England
[3] King's College London, KCL Cybersecurity Centre, London WC2R 2LS, England
[4] King's College London, Department of Digital Humanities, London WC2R 2LS, England
[5] King's College London, Computer Science, Department of Informatics, London WC2R 2LS, England
[6] King's College London, UKRI Centre for Doctoral Training in Safe and Trusted AI, London WC2R 2LS, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
TRANSPARENCY;
DOI
10.1109/MTS.2021.3056293
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Operating at large scale and affecting large groups of people, automated systems can make consequential and sometimes contestable decisions. Automated decisions influence a range of outcomes, from credit scores to insurance payouts to health evaluations. These forms of automation become problematic when they place certain groups or individuals at a systematic disadvantage. Such cases constitute discrimination, legally defined as the unfair or unequal treatment of an individual (or group) based on certain protected characteristics (also known as protected attributes) such as income, education, gender, or ethnicity. When the unfair treatment is caused by automated decisions, typically made by intelligent agents or other AI-based systems, it is referred to as digital discrimination. Digital discrimination is prevalent in a diverse range of fields, such as risk assessment systems for policing and credit scoring [1], [2]. © 1982-2012 IEEE.
Pages: 72-80 (9 pages)