Peacekeeping Conditions for an Artificial Intelligence Society

Cited by: 1
Author
Yamakawa, Hiroshi [1,2,3]
Affiliations
[1] Whole Brain Architecture Initiative, Nishikoiwa 2-19-21, Edogawa-ku, Tokyo 1330057, Japan
[2] RIKEN Center for Advanced Intelligence Project, Nihonbashi 1-Chome Mitsui Building, 15th Floor, Chuo-ku, Tokyo 1030027, Japan
[3] Dwango Co., Ltd., Kabukiza Tower, 4-12-15 Ginza, Chuo-ku, Tokyo 1040061, Japan
Keywords
autonomous distributed system; conflict; existential risk; distributed goals management; terraforming; technological singularity
DOI
10.3390/bdcc3020034
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In a human society with emergent technology, the destructive actions of a few endanger the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies, using appropriate interventions by an advanced system, will become available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and have an intervention method for maintaining peace among human societies based on a common value (condition 2). As a premise, however, there must be a minimum common value upon which all of human society can agree (condition 3). In this study, an AI system designed to satisfy condition 1 was investigated. The system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on its own local values to adapt quickly to the environment it faces. Conflicts between IAs are therefore inevitable, and they sometimes interfere with the achievement of the commonly shared goals. Even so, the IAs can maintain peace within their own society if every dispersed IA believes that all other IAs are aiming for socially acceptable goals. However, communication channel problems, comprehension problems, and computational complexity problems are barriers to realizing this condition. For computer-based IAs, these problems can be overcome by introducing an appropriate goal-management system, allowing an IA society to achieve its goals peacefully, efficiently, and consistently; condition 1 is therefore achievable. In contrast, humans are constrained by their biological nature and tend to interact mainly with others similar to themselves, so eradicating conflict among humans is more difficult.
Pages: 1-12
Number of pages: 12
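
Only as a rough, hypothetical illustration of the distributed goals management idea summarized in the abstract above (the paper itself presents no code; the names Agent, GoalRegistry, and COMMON_VALUES are invented here), the following Python sketch shows one way a shared goal-management component might screen each IA's locally chosen goals against a minimum set of commonly agreed values before the IA acts on them:

# Hypothetical sketch (not from the paper): a toy "distributed goals
# management" loop in which each intelligent agent (IA) publishes its
# local goals, and a shared registry checks every published goal against
# a commonly agreed set of acceptable goals before the IA may act on it.
# All names here (Agent, GoalRegistry, COMMON_VALUES) are invented for
# illustration only.

from dataclasses import dataclass, field

# Stand-in for the "minimum common value" (condition 3): the set of
# goal labels every IA has agreed is socially acceptable.
COMMON_VALUES = {"maintain_peace", "share_resources", "report_status"}


@dataclass
class Agent:
    """A toy autonomous IA with its own local goals."""
    name: str
    local_goals: list[str] = field(default_factory=list)

    def publish_goals(self) -> list[str]:
        # In a real system this would travel over a communication
        # channel, which the abstract notes is itself a failure point.
        return list(self.local_goals)


class GoalRegistry:
    """Toy goal-management system: screens published goals against the
    common values and records any conflicts it finds."""

    def __init__(self, common_values: set[str]) -> None:
        self.common_values = common_values
        self.conflicts: list[tuple[str, str]] = []

    def review(self, agent: Agent) -> list[str]:
        accepted = []
        for goal in agent.publish_goals():
            if goal in self.common_values:
                accepted.append(goal)
            else:
                # A goal not covered by the shared values is flagged
                # instead of being acted on directly.
                self.conflicts.append((agent.name, goal))
        return accepted


if __name__ == "__main__":
    registry = GoalRegistry(COMMON_VALUES)
    agents = [
        Agent("ia_1", ["maintain_peace", "report_status"]),
        Agent("ia_2", ["maintain_peace", "monopolize_resources"]),
    ]
    for agent in agents:
        print(agent.name, "accepted goals:", registry.review(agent))
    print("flagged conflicts:", registry.conflicts)

In this toy setting, a goal that falls outside the common values is flagged as a conflict rather than acted upon, loosely mirroring the abstract's point that a goal-management system lets dispersed IAs trust that the goals of other IAs remain socially acceptable.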