Peacekeeping Conditions for an Artificial Intelligence Society

Cited by: 1
Authors
Yamakawa, Hiroshi [1 ,2 ,3 ]
Affiliations
[1] Whole Brain Architecture Initiat, Edogawa Ku, Nishikoiwa 2-19-21, Tokyo 1330057, Japan
[2] RIKEN Ctr Adv Intelligence Project, Chuo Ku, Nihonbashi 1 Chome Mitsui Bldg,15th Floor, Tokyo 1030027, Japan
[3] Dwango Co Ltd, Chuo Ku, KABUKIZA TOWER,4-12-15 Ginza, Tokyo 1040061, Japan
Keywords
autonomous distributed system; conflict; existential risk; distributed goals management; terraforming; technological singularity
DOI
10.3390/bdcc3020034
CLC number
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In a human society with emerging technology, the destructive actions of a few can threaten the survival of all humankind, increasing the need to maintain peace by overcoming pervasive conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, in the near future it may become possible to maintain peace among human societies through the appropriate interventions of an advanced system. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and must have a method of intervening to maintain peace among human societies based on a common value (condition 2). As a premise, there must also be a minimum common value upon which all of human society can agree (condition 3). This study investigates an AI system that satisfies condition 1. The system is designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if all IAs share common goals, each autonomous IA acts on its own local values so that it can adapt quickly to the environment it faces. Conflicts between IAs are therefore inevitable, and such conflicts sometimes interfere with the achievement of the commonly shared goals. Even so, the IAs can maintain peace within their own society if every dispersed IA believes that all other IAs are pursuing socially acceptable goals. However, communication-channel problems, comprehension problems, and computational-complexity problems are barriers to realizing this. For computer-based IAs, these problems can be overcome by introducing an appropriate goal-management system. An IA society could then achieve its goals peacefully, efficiently, and consistently, so condition 1 is achievable. In contrast, humans are constrained by their biological nature and tend to interact mainly with others similar to themselves, which makes the eradication of human conflicts more difficult.
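The goal-management mechanism described in the abstract, in which dispersed IAs check that one another's goals are socially acceptable relative to a commonly shared goal set, can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: the agent names, goal strings, and conflict rules are hypothetical and exist only to make the idea concrete.

```python
from dataclasses import dataclass, field

# Illustrative sketch (not the paper's system): each IA publishes its local
# goals to a shared goal manager, which checks them against the commonly
# shared goal set and reports goal pairs known to conflict between agents.

@dataclass
class Agent:
    name: str
    local_goals: set = field(default_factory=set)

@dataclass
class GoalManager:
    shared_goals: set                                    # goals all IAs have agreed on
    forbidden_pairs: set = field(default_factory=set)    # goal pairs assumed to conflict

    def is_socially_acceptable(self, goal: str) -> bool:
        # Here a local goal counts as acceptable if it belongs to the shared goal set.
        return goal in self.shared_goals

    def find_conflicts(self, agents):
        """Return (agent_a, agent_b, goal_a, goal_b) tuples whose goals clash."""
        conflicts = []
        for i, a in enumerate(agents):
            for b in agents[i + 1:]:
                for ga in a.local_goals:
                    for gb in b.local_goals:
                        if (ga, gb) in self.forbidden_pairs or (gb, ga) in self.forbidden_pairs:
                            conflicts.append((a.name, b.name, ga, gb))
        return conflicts

if __name__ == "__main__":
    manager = GoalManager(
        shared_goals={"preserve_infrastructure", "share_resources"},
        forbidden_pairs={("monopolize_energy", "share_resources")},
    )
    agents = [
        Agent("IA-1", {"share_resources"}),
        Agent("IA-2", {"monopolize_energy"}),   # local goal outside the shared set
    ]
    for agent in agents:
        for goal in agent.local_goals:
            if not manager.is_socially_acceptable(goal):
                print(f"{agent.name}: goal '{goal}' is not in the shared goal set")
    for a, b, ga, gb in manager.find_conflicts(agents):
        print(f"Conflict between {a} ({ga}) and {b} ({gb})")
```

In this reading, the barriers named in the abstract (communication, comprehension, computational complexity) correspond to getting every agent's goals into such a registry, interpreting them consistently, and keeping the pairwise conflict check tractable as the number of IAs grows.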
Pages: 1-12
Number of pages: 12