Engineering user-centered explanations to query answers in ontology-driven socio-technical systems

Cited by: 1
Authors
Teze, Juan Carlos L. [1,2]
Paredes, Jose Nicolas [3,4]
Martinez, Maria Vanina [6]
Simari, Gerardo Ignacio [3,4,5]
Affiliations
[1] Univ Nacl Entre Rios UNER, Fac Ciencias Adm, Uruguay, Argentina
[2] Univ Nacl Entre Rios UNER, Consejo Nacl Invest Cient & Tecn CONICET, Uruguay, Argentina
[3] Univ Nacl Sur UNS, Dept Ciencias & Ingn Comp, Bahia Blanca, Argentina
[4] Consejo Nacl Invest Cient & Tecn, Inst Ciencias & Ingn Comp UNS, Buenos Aires, Argentina
[5] Univ Buenos Aires UBA, Dept Comp, Buenos Aires, Argentina
[6] UBA, CONICET, Inst Ciencias Comp ICC, Buenos Aires, Argentina
Keywords
Ontological languages; socio-technical systems; Explainable Artificial Intelligence; hate speech in social platforms; ARTIFICIAL-INTELLIGENCE; EXPLAIN; XAI; AI;
DOI
10.3233/SW-233297
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The role of explanations in intelligent systems has entered the spotlight in recent years as AI-based solutions appear in an ever-growing set of applications. Although data-driven (or machine learning) techniques are often cited as examples of how opaque (also called black box) approaches can lead to problems such as bias and a general lack of explainability and interpretability, in reality these properties are difficult to tame in general, even for approaches based on tools typically considered more amenable, such as knowledge-based formalisms. In this paper, we continue a line of research and development towards building tools that facilitate the implementation of explainable and interpretable hybrid intelligent socio-technical systems, focusing on features that users can leverage to build explanations of the answers to their queries. In particular, we present the implementation of a recently-proposed application framework for developing such systems (and make its source code available), and we explore user-centered mechanisms for building explanations based both on the kinds of explanations required (such as counterfactual or contextual) and on the inputs used for building them, which come from various sources such as the knowledge base and lower-level data-driven modules. To validate our approach, we develop two use cases: one, used as a running example, for detecting hate speech on social platforms, and a second that extends it to also contemplate cyberbullying scenarios.
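As a rough illustration of the explanation mechanisms the abstract refers to, the following Python sketch shows how a contextual explanation (the facts and rules supporting a query answer) and a counterfactual explanation (the base facts whose removal would retract the answer) might be built over a toy rule base in which one fact originates from a lower-level data-driven classifier. This is a minimal sketch under assumed semantics: the names `Rule`, `KB`, `explain_contextual`, and `explain_counterfactual`, as well as the predicates, are hypothetical and do not correspond to the paper's actual framework or released source code.

```python
# Minimal, hypothetical sketch: contextual and counterfactual explanations
# for query answers over a toy rule base. All names here are illustrative
# assumptions, not the framework or API described in the paper.

from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    head: str          # derived atom, e.g. "hateful(p1)"
    body: tuple        # atoms that must all hold for the rule to fire
    label: str         # human-readable provenance used in explanations


class KB:
    def __init__(self, facts, rules):
        self.facts = set(facts)   # base facts, some from data-driven modules
        self.rules = list(rules)

    def derive(self):
        """Forward-chain to a fixpoint, recording which rule produced each atom."""
        derived, why = set(self.facts), {}
        changed = True
        while changed:
            changed = False
            for r in self.rules:
                if r.head not in derived and all(b in derived for b in r.body):
                    derived.add(r.head)
                    why[r.head] = r
                    changed = True
        return derived, why


def explain_contextual(kb, atom, why):
    """Contextual explanation: the facts and rules that support the answer."""
    if atom in kb.facts:
        return [f"base fact: {atom}"]
    r = why[atom]
    lines = [f"{atom} holds by [{r.label}] from {', '.join(r.body)}"]
    for b in r.body:
        lines.extend(explain_contextual(kb, b, why))
    return lines


def explain_counterfactual(kb, atom):
    """Counterfactual explanation: base facts whose removal retracts the answer."""
    return [f for f in kb.facts
            if atom not in KB(kb.facts - {f}, kb.rules).derive()[0]]


# Toy hate-speech scenario; flagged_by_classifier mimics a lower-level
# data-driven module feeding the knowledge base.
facts = {"posted(u1,p1)", "flagged_by_classifier(p1)", "targets_group(p1)"}
rules = [
    Rule("hateful(p1)", ("flagged_by_classifier(p1)", "targets_group(p1)"),
         "hateful-content rule"),
    Rule("hate_speech_user(u1)", ("posted(u1,p1)", "hateful(p1)"),
         "user-attribution rule"),
]

kb = KB(facts, rules)
derived, why = kb.derive()
query = "hate_speech_user(u1)"
print(query in derived)                      # True
print(*explain_contextual(kb, query, why), sep="\n")
print(explain_counterfactual(kb, query))     # each base fact is a culprit here
```

Run as-is, the sketch prints the derivation trace behind hate_speech_user(u1) and reports that removing any of the three base facts retracts the conclusion, mirroring the contextual and counterfactual explanation styles named above.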
Pages: 991-1020
Page count: 30