Responsibility assignment won’t solve the moral issues of artificial intelligence

Citations: 0
Author
Jan-Hendrik Heinrichs
Affiliations
[1] Forschungszentrum Jülich, Institut für Ethik in den Neurowissenschaften (INM-8)
[2] RWTH Aachen
Source
AI and Ethics | 2022 / Volume 2 / Issue 4
关键词
Responsibility; Artificial intelligence; Ethical terminology;
DOI
10.1007/s43681-022-00133-z
Abstract
Who is responsible for the events and consequences caused by using artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is being done using artificial intelligence? Both questions presuppose that the term ‘responsibility’ is a good tool for analysing the moral issues surrounding artificial intelligence. This article will call this presupposition into doubt and show how reference to responsibility obscures the complexity of moral situations and moral agency, which can be analysed with a more differentiated toolset of moral terminology. It suggests that the impression of responsibility gaps only arises if we gloss over the complexity of the moral situation in which artificially intelligent tools are employed and if—counterfactually—we ascribe to them some kind of pseudo-agential status.
Pages: 727–736
Page count: 9