Evaluating Explanations by Cognitive Value

Cited by: 9
Authors
Chander, Ajay [1 ]
Srinivasan, Ramya [1 ]
Affiliations
[1] Fujitsu Labs Amer, Sunnyvale, CA 94085 USA
Keywords
Explanations; AI; Cognitive value; Business owner; Causal modeling;
DOI
10.1007/978-3-319-99740-7_23
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
The transparent AI initiative has ignited several academic and industrial endeavors and produced some impressive technologies and results thus far. Many state-of-the-art methods provide explanations that mostly target the needs of AI engineers. However, there is very little work on providing explanations that support the needs of business owners, software developers, and consumers who all play significant roles in the service development and use cycle. By considering the overall context in which an explanation is presented, including the role played by the human-in-the-loop, we can hope to craft effective explanations. In this paper, we introduce the notion of the "cognitive value" of an explanation and describe its role in providing effective explanations within a given context. Specifically, we consider the scenario of a business owner seeking to improve sales of their product, and compare explanations provided by some existing interpretable machine learning algorithms (random forests, scalable Bayesian Rules, causal models) in terms of the cognitive value they offer to the business owner. We hope that our work will foster future research in the field of transparent AI to incorporate the cognitive value of explanations in crafting and evaluating explanations.
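The contrast the abstract draws can be made concrete with a small, hypothetical sketch (not the paper's experimental setup): it compares the kind of explanation a random forest offers a business owner (global feature importances) with a rule-style, human-readable explanation, here approximated by a shallow decision tree rather than the Scalable Bayesian Rules method the paper actually uses. The feature names and synthetic "product sales" data are illustrative assumptions only.

```python
# Hypothetical sketch: two styles of explanation for a toy "product sales" table.
# Feature names and data are assumptions for illustration, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(5, 50, n),   # price
    rng.uniform(0, 1, n),    # ad_spend (normalized)
    rng.integers(1, 6, n),   # avg_review_score
])
feature_names = ["price", "ad_spend", "avg_review_score"]
# Toy target: "high sales" driven mostly by price and review score.
y = ((X[:, 0] < 25) & (X[:, 2] >= 4)).astype(int)

# Correlational explanation: which features the forest relied on overall.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in zip(feature_names, forest.feature_importances_):
    print(f"{name}: importance = {imp:.2f}")

# Rule-style explanation: an if-then structure a business owner can read directly
# (a rough analogue of rule-list methods; not Scalable Bayesian Rules itself).
rules = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(rules, feature_names=feature_names))
```

The point of the comparison, in the paper's terms, is that the two outputs carry different cognitive value: importances say which features correlated with sales, while rule- or causal-style explanations are closer to the actionable, "what should I change" form a business owner needs.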
Pages: 314-328
Number of pages: 15
Related papers
50 records in total
  • [31] HIVE: Evaluating the Human Interpretability of Visual Explanations
    Kim, Sunnie S. Y.
    Meister, Nicole
    Ramaswamy, Vikram V.
    Fong, Ruth
    Russakovsky, Olga
    COMPUTER VISION, ECCV 2022, PT XII, 2022, 13672 : 280 - 298
  • [32] Detection Accuracy for Evaluating Compositional Explanations of Units
    Makinwa, Sayo M.
    La Rosa, Biagio
    Capobianco, Roberto
    AIXIA 2021 - ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13196 : 550 - 563
  • [33] Evaluating Anomaly Explanations Using Ground Truth
    Friedman, Liat Antwarg
    Galed, Chen
    Rokach, Lior
    Shapira, Bracha
    AI, 2024, 5 (04) : 2375 - 2392
  • [34] Drawing conclusions: Representing and evaluating competing explanations
    Liefgreen, Alice
    Lagnado, David A.
    COGNITION, 2023, 234
  • [35] Evaluating explanations for poverty selectivity in foreign aid
    Heinrich, Tobias
    Kobayashi, Yoshiharu
    KYKLOS, 2022, 75 (01) : 30 - 47
  • [36] Baselines for Evaluating Explanations of Coalition Behavior in Congress
    Hammond, T. H.
    Fraser, J. M.
    JOURNAL OF POLITICS, 1983, 45 (03) : 635 - 656
  • [37] Evaluating the Pros and Cons of Recommender Systems Explanations
    Wardatzky, Kathrin
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 1302 - 1307
  • [39] Minun: Evaluating Counterfactual Explanations for Entity Matching
    Wang, Jin
    Li, Yuliang
    PROCEEDINGS OF THE 6TH WORKSHOP ON DATA MANAGEMENT FOR END-TO-END MACHINE LEARNING, DEEM 2022, 2022
  • [40] Attention Flows are Shapley Value Explanations
    Ethayarajh, Kawin
    Jurafsky, Dan
    ACL-IJCNLP 2021: THE 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 2, 2021, : 49 - 54