THE INTUITIVE APPEAL OF EXPLAINABLE MACHINES

Cited by: 20
Authors
Selbst, Andrew D. [1 ,2 ]
Barocas, Solon [3 ]
Affiliations
[1] Data & Soc Res Inst, New York, NY 10011 USA
[2] Yale Informat Soc Project, New Haven, CT 06511 USA
[3] Cornell Univ, Dept Informat Sci, Ithaca, NY 14853 USA
Funding
National Science Foundation (USA)
Keywords
AUTOMATED DECISION-MAKING; PROCEDURAL JUSTICE; RULE EXTRACTION; PROTECTION; PRIVACY; CLASSIFICATION; EXPLANATION; INFORMATION; FAIRNESS; IMPACT;
DOI
Not available
Chinese Library Classification
D9 [Law]; DF [Law]
Discipline Classification Code
0301
Abstract
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
Pages: 1085-1139
Page count: 55
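To make the abstract's distinction concrete, below is a minimal sketch (not from the Article) of one family of explanation techniques it discusses: global surrogate modeling in the spirit of rule extraction, where a shallow decision tree is trained to mimic a black-box model's predictions and thereby yields a sensible description of the rules. The synthetic dataset, the random-forest "black box," and all parameters are illustrative assumptions, not the authors' method.

```python
# Sketch of a global surrogate ("rule extraction"-style) explanation,
# under illustrative assumptions: synthetic data, a random forest as
# the stand-in black box, and a depth-3 tree as the surrogate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in "black box": an ensemble whose internal logic is inscrutable.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a shallow tree trained to mimic the black box's
# *predictions*, producing a human-readable description of the rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

Note what this sketch does and does not do: even a perfectly faithful surrogate only addresses inscrutability by describing what the learned rules are; it says nothing about why the rules are what they are, which is the Article's point about the need for explanations of the model-development process.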