How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons
Cited by: 40
Authors:
Maas, Matthijs M. [1,2]
Affiliations:
[1] Univ Copenhagen, Ctr Int Law Conflict & Crisis, Fac Law, Karen Blixens Plads 16,Bldg 6A-4-05, DK-2300 Copenhagen, Denmark
[2] Univ Oxford, Future Humanity Inst, Ctr Governance AI, Oxford, England
Keywords:
Artificial intelligence; AI; arms race; arms control; nonproliferation; epistemic communities; normal accidents; governance
Keywords Plus:
EPISTEMIC COMMUNITIES; NONPROLIFERATION; COOPERATION; REGIMES
DOI:
10.1080/13523260.2019.1576464
Chinese Library Classification: D81 [International Relations]
Discipline Code: 030207
Abstract:
Many observers anticipate "arms races" between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized "epistemic communities" of experts can effectively catalyze arms control; (3) many military AI applications will remain susceptible to "normal accidents," such that assurances of "meaningful human control" are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.
Pages: 285-311 (27 pages)