Why and How Robots Should Say 'No'

Cited: 17
Authors
Briggs, Gordon [1 ]
Williams, Tom [2 ]
Jackson, Ryan Blake [2 ]
Scheutz, Matthias [3 ]
Affiliations
[1] US Naval Res Lab, Navy Ctr Appl Res Artificial Intelligence, 4555 Overlook Ave SW, Washington, DC 20375 USA
[2] Colorado Sch Mines, Dept Comp Sci, MIRRORLab, Golden, CO 80401 USA
[3] Tufts Univ, Dept Comp Sci, Medford, MA 02155 USA
Keywords
Autonomous moral agents; Natural language generation; Human-robot interaction; Command rejection; MORAL JUDGMENT; MENTAL MODELS; GENDER; FRAMEWORK; BEHAVIOR; PROTEST;
DOI
10.1007/s12369-021-00780-y
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
Language-enabled robots with moral reasoning capabilities will inevitably face situations in which they must respond to human commands that violate normative principles and could cause harm to humans. We believe that it is critical for robots to be able to reject such commands, and we thus address the two key challenges of when and how to reject norm-violating directives. First, we present research both in engineering language-enabled robots that can engage in rudimentary rejection dialogues and in related HRI work on the effectiveness of robot protest. Second, we argue that how rejections are phrased matters, and we review the factors that should guide natural language formulations of command rejections. Finally, we conclude by identifying open questions that will further inform the design of future language-capable and morally competent robots.
Pages: 323-339
Page count: 17