Reward Reports for Reinforcement Learning

Cited by: 8
Authors
Gilbert, Thomas Krendl [1 ]
Lambert, Nathan [2 ]
Dean, Sarah [3 ]
Zick, Tom [4 ]
Snoswell, Aaron [5 ]
Mehta, Soham [6 ]
Affiliations
[1] Cornell Tech, Digital Life Initiative, New York, NY 10044, USA
[2] HuggingFace, Berkeley, CA, USA
[3] Cornell University, Ithaca, NY, USA
[4] Harvard Law School, Cambridge, MA, USA
[5] Queensland University of Technology, Centre for Automated Decision-Making and Society, Brisbane, QLD, Australia
[6] Columbia University, New York, NY, USA
Keywords
Reward function; reporting; documentation; disaggregated evaluation; ethical considerations; MODEL; GO
DOI
10.1145/3600211.3604698
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Building systems that are good for society in the face of complex societal effects requires a dynamic approach. Recent approaches to machine learning (ML) documentation have demonstrated the promise of discursive frameworks for deliberation about these complexities. However, these developments have been grounded in a static ML paradigm, leaving the role of feedback and post-deployment performance unexamined. Meanwhile, recent work in reinforcement learning has shown that the effects of feedback and optimization objectives on system behavior can be wide-ranging and unpredictable. In this paper, we sketch a framework for documenting deployed and iteratively updated learning systems, which we call Reward Reports. Taking inspiration from technical concepts in reinforcement learning, we outline Reward Reports as living documents that track updates to design choices and assumptions behind what a particular automated system is optimizing for. They are intended to track dynamic phenomena arising from system deployment, rather than merely static properties of models or data. After presenting the elements of a Reward Report, we discuss a concrete example: Meta's BlenderBot 3 chatbot. Several other examples, covering game-playing (DeepMind's MuZero), content recommendation (MovieLens), and traffic control (Project Flow), are included in the appendix.
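The paper specifies a Reward Report's elements in prose rather than as a machine-readable schema. Purely as an illustration of the "living document" idea from the abstract, the following minimal Python sketch models a report whose changelog grows as the deployed system is updated; all class and field names here (RewardReport, optimization_intent, record_update, and so on) are hypothetical inventions for this example, not identifiers from the paper or any accompanying library.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ChangelogEntry:
    """One post-deployment update to the system's optimization setup."""
    when: date
    what_changed: str        # e.g., a reweighted reward term or a new feedback signal
    observed_effect: str     # behavior shift noted after the change


@dataclass
class RewardReport:
    """A living record of what a deployed learning system is optimizing for."""
    system_name: str
    designers: list[str]
    optimization_intent: str        # what the objective is meant to capture
    reward_description: str         # informal or formal statement of the reward
    feedback_sources: list[str]     # signals folded back into the system (clicks, ratings, ...)
    known_failure_modes: list[str]
    changelog: list[ChangelogEntry] = field(default_factory=list)

    def record_update(self, when: date, what_changed: str, observed_effect: str) -> None:
        """Append an entry; the report is meant to grow with the deployment."""
        self.changelog.append(ChangelogEntry(when, what_changed, observed_effect))


# Example usage with made-up values:
report = RewardReport(
    system_name="example-recommender",
    designers=["team@example.org"],
    optimization_intent="long-term user satisfaction, not raw engagement",
    reward_description="weighted sum of explicit ratings and session length",
    feedback_sources=["ratings", "watch time"],
    known_failure_modes=["popularity bias"],
)
report.record_update(
    date(2023, 8, 1),
    "halved the session-length weight in the reward",
    "fewer clickbait items surfaced in recommendations",
)
```

Whether such a report lives as prose or as structured data, the changelog is the element that distinguishes it from static model or data documentation: the document is revisited whenever the optimization setup or its observed post-deployment effects change.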
Pages: 84-130
Page count: 47
Related Papers
50 records in total (items [31]-[40] shown)
  • [31] Explicable Reward Design for Reinforcement Learning Agents
    Devidze, Rati
    Radanovic, Goran
    Kamalaruban, Parameswaran
    Singla, Adish
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [32] Robust Average-Reward Reinforcement Learning
    Wang, Yue
    Velasquez, Alvaro
    Atia, George
    Prater-Bennette, Ashley
    Zou, Shaofeng
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 80: 719-803
  • [33] Skill Reward for Safe Deep Reinforcement Learning
    Cheng, Jiangchang
    Yu, Fumin
    Zhang, Hongliang
    Dai, Yinglong
    UBIQUITOUS SECURITY, 2022, 1557: 203-213
  • [34] On the Power of Global Reward Signals in Reinforcement Learning
    Kemmerich, Thomas
    Buening, Hans Kleine
    MULTIAGENT SYSTEM TECHNOLOGIES, 2011, 6973: 53+
  • [35] Option compatible reward inverse reinforcement learning
    Hwang, Rakhoon
    Lee, Hanjin
    Hwang, Hyung Ju
    PATTERN RECOGNITION LETTERS, 2022, 154: 83-89
  • [36] Discrimination of Reward in Learning with Partial and Continuous Reinforcement
    Hulse, S. H.
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 1962, 64(3): 227+
  • [37] Evolution of an Internal Reward Function for Reinforcement Learning
    Zuo, Weiyi
    Pedersen, Joachim Winther
    Risi, Sebastian
    PROCEEDINGS OF THE 2023 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2023 COMPANION, 2023: 351-354
  • [38] Reinforcement learning with nonstationary reward depending on the episode
    Shibuya, Takeshi
    Yasunobu, Seiji
    2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2011: 2145-2150
  • [39] Inverse Reinforcement Learning with the Average Reward Criterion
    Wu, Feiyang
    Ke, Jingyang
    Wu, Anqi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [40] The Role of Secondary Reinforcement in Delayed Reward Learning
    Spence, K. W.
    PSYCHOLOGICAL REVIEW, 1947, 54(1): 1-8