Justice and the Normative Standards of Explainability in Healthcare

Cited by: 1
Authors
Kempt H. [1]
Freyer N. [1,2]
Nagel S.K. [1]
Affiliations
[1] RWTH Aachen University, Theaterplatz 14, Aachen
[2] FH Aachen, Eupener Straße 70, Aachen
Keywords
AI ethics; Clinical decision support systems; Explainability; Justice; Medical AI; Normative standards
DOI
10.1007/s13347-022-00598-0
Abstract
Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce and regions where it is abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificially intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare demand that the technology be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce may exert a normative pull to lower these standards in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability. © 2022, The Author(s).