Reward tampering problems and solutions in reinforcement learning: a causal influence diagram perspective

Cited by: 0
Authors
Tom Everitt
Marcus Hutter
Ramana Kumar
Victoria Krakovna
Institutions
[1] DeepMind
[2] Australian National University
Source
Synthese | 2021 / Vol. 198
Keywords
AGI safety; Reinforcement learning; Bayesian learning; Causality; Decision theory; Causal influence diagrams;
DOI
Not available
Abstract
Can humans get arbitrarily capable reinforcement learning (RL) agents to do their bidding? Or will sufficiently capable RL agents always find ways to bypass their intended objectives by shortcutting their reward signal? This question impacts how far RL can be scaled, and whether alternative paradigms must be developed in order to build safe artificial general intelligence. In this paper, we study when an RL agent has an instrumental goal to tamper with its reward process, and describe design principles that prevent instrumental goals for two different types of reward tampering (reward function tampering and RF-input tampering). Combined, the design principles can prevent reward tampering from being an instrumental goal. The analysis benefits from causal influence diagrams to provide intuitive yet precise formalizations.
Pages: 6435–6467
Page count: 32
Related papers
21 items total
  • [1] Reward tampering problems and solutions in reinforcement learning: a causal influence diagram perspective
    Everitt, Tom
    Hutter, Marcus
    Kumar, Ramana
    Krakovna, Victoria
    SYNTHESE, 2021, 198 (SUPPL 27) : 6435 - 6467
  • [2] Interpretable Reward Redistribution in Reinforcement Learning: A Causal Approach
    Zhang, Yudi
    Du, Yali
    Huang, Biwei
    Wang, Ziyan
    Wang, Jun
    Fang, Meng
    Pechenizkiy, Mykola
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [3] Multi-Objectivization of Reinforcement Learning Problems by Reward Shaping
    Brys, Tim
    Harutyunyan, Anna
    Vrancx, Peter
    Taylor, Matthew E.
    Kudenko, Daniel
    Nowé, Ann
    PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014, : 2315 - 2322
  • [4] Causal Influence Detection for Improving Efficiency in Reinforcement Learning
    Seitzer, Maximilian
    Schoelkopf, Bernhard
    Martius, Georg
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [5] Aberrant reward learning, but not negative reinforcement learning, is related to depressive symptoms: an attentional perspective
    Hertz-Palmor, Nimrod
    Rozenblit, Danielle
    Lavi, Shani
    Zeltser, Jonathan
    Kviatek, Yonatan
    Lazarov, Amit
    PSYCHOLOGICAL MEDICINE, 2024, 54 (04) : 794 - 807
  • [6] Balance Reward and Safety Optimization for Safe Reinforcement Learning: A Perspective of Gradient Manipulation
    Gu, Shangding
    Sel, Bilgehan
    Ding, Yuhao
    Wang, Lu
    Lin, Qingwei
    Jin, Ming
    Knoll, Alois
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 19, 2024, : 21099 - 21106
  • [7] Multi-Agent Reinforcement Learning for Problems with Combined Individual and Team Reward
    Sheikh, Hassam Ullah
    Boloni, Ladislau
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [8] Reinforcement learning in situated agents: Theoretical problems and practical solutions
    Pendrith, MD
    ADVANCES IN ROBOT LEARNING, PROCEEDINGS, 2000, 1812 : 84 - 102
  • [9] Solving semi-Markov decision problems using average reward reinforcement learning
    Das, TK
    Gosavi, A
    Mahadevan, S
    Marchalleck, N
    MANAGEMENT SCIENCE, 1999, 45 (04) : 560 - 574
  • [10] Towards Designing Optimal Reward Functions in Multi-Agent Reinforcement Learning Problems
    Grunitzki, Ricardo
    da Silva, Bruno C.
    Bazzan, Ana L. C.
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018,