Reward Reports for Reinforcement Learning

Cited by: 8
Authors
Gilbert, Thomas Krendl [1 ]
Lambert, Nathan [2 ]
Dean, Sarah [3 ]
Zick, Tom [4 ]
Snoswell, Aaron [5 ]
Mehta, Soham [6 ]
Affiliations
[1] Cornell Tech, Digital Life Initiative, New York, NY 10044, USA
[2] HuggingFace, Berkeley, CA, USA
[3] Cornell University, Ithaca, NY, USA
[4] Harvard Law School, Boston, MA, USA
[5] Queensland University of Technology, Centre for Automated Decision-Making and Society, Brisbane, QLD, Australia
[6] Columbia University, New York, NY, USA
Keywords
Reward function; reporting; documentation; disaggregated evaluation; ethical considerations
DOI
10.1145/3600211.3604698
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Building systems that are good for society in the face of complex societal effects requires a dynamic approach. Recent approaches to machine learning (ML) documentation have demonstrated the promise of discursive frameworks for deliberation about these complexities. However, these developments have been grounded in a static ML paradigm, leaving the role of feedback and post-deployment performance unexamined. Meanwhile, recent work in reinforcement learning has shown that the effects of feedback and optimization objectives on system behavior can be wide-ranging and unpredictable. In this paper we sketch a framework for documenting deployed and iteratively updated learning systems, which we call Reward Reports. Taking inspiration from technical concepts in reinforcement learning, we outline Reward Reports as living documents that track updates to design choices and assumptions behind what a particular automated system is optimizing for. They are intended to track dynamic phenomena arising from system deployment, rather than merely static properties of models or data. After presenting the elements of a Reward Report, we discuss a concrete example: Meta's BlenderBot 3 chatbot. Several others for game-playing (DeepMind's MuZero), content recommendation (MovieLens), and traffic control (Project Flow) are included in the appendix.
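To make the abstract's idea of a "living document" concrete, the sketch below models a Reward Report as an append-only record of what a deployed system is optimizing for. This is a minimal illustration under stated assumptions, not the paper's actual template: the names RewardReport, ChangeLogEntry, record_update, and all field names are hypothetical, chosen only to mirror the abstract's description of tracking design choices, assumptions, and post-deployment feedback over time.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ChangeLogEntry:
        # One dated update to the system's optimization behavior.
        when: date
        summary: str           # what changed (e.g., a reward-term reweighting)
        rationale: str         # why the change was made
        observed_effects: str  # post-deployment behavior attributed to the change

    @dataclass
    class RewardReport:
        # Static design choices, recorded at deployment time.
        system_name: str
        optimization_target: str      # what the system is optimizing for
        design_assumptions: list[str]
        feedback_sources: list[str]   # e.g., user clicks, human ratings
        # The "living" part: appended to after each post-deployment update.
        change_log: list[ChangeLogEntry] = field(default_factory=list)

        def record_update(self, entry: ChangeLogEntry) -> None:
            # Appends rather than overwrites, so the report preserves the
            # full history of what the system has optimized for over time.
            self.change_log.append(entry)

Read this way, a report for a system like BlenderBot 3 would gain a new ChangeLogEntry each time its feedback pipeline or reward signal is revised, rather than replacing the original documentation, which is the dynamic property the abstract contrasts with static model or data documentation.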
Pages: 84-130
Number of pages: 47
Related Papers (50 total)
  • [41] Balancing multiple sources of reward in reinforcement learning
    Shelton, CR
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 13, 2001, 13 : 1082 - 1088
  • [42] Immediate Reinforcement in Delayed Reward Learning in Pigeons
    Winter, J
    Perkins, CC
    JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR, 1982, 38 (02) : 169 - 179
  • [43] Evolved Intrinsic Reward Functions for Reinforcement Learning
    Niekum, Scott
    PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10), 2010, : 1955 - 1956
  • [44] Reward Shaping Based Federated Reinforcement Learning
    Hu, Yiqiu
    Hua, Yun
    Liu, Wenyan
    Zhu, Jun
    IEEE ACCESS, 2021, 9 : 67259 - 67267
  • [45] Conditioned (Secondary) Reinforcement and Delayed Reward Learning
    Perkins, CC
    BULLETIN OF THE PSYCHONOMIC SOCIETY, 1981, 18 (02) : 57 - 57
  • [46] Hindsight Reward Shaping in Deep Reinforcement Learning
    de Villiers, Byron
    Sabatta, Deon
    2020 INTERNATIONAL SAUPEC/ROBMECH/PRASA CONFERENCE, 2020, : 653 - 659
  • [47] Robust Average-Reward Reinforcement Learning
    Wang, Yue
    Velasquez, Alvaro
    Atia, George
    Prater-Bennette, Ashley
    Zou, Shaofeng
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 80 : 719 - 803
  • [48] Reward-Free Exploration for Reinforcement Learning
    Jin, Chi
    Krishnamurthy, Akshay
    Simchowitz, Max
    Yu, Tiancheng
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [49] AntNet with Reward-Penalty Reinforcement Learning
    Lalbakhsh, Pooia
    Zaeri, Bahram
    Lalbakhsh, Ali
    Fesharaki, Mehdi N.
    2010 SECOND INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE, COMMUNICATION SYSTEMS AND NETWORKS (CICSYN), 2010, : 17 - 21
  • [50] Schedules of Reinforcement, Learning, and Frequency Reward Programs
    Craig, Adam
    Silk, Timothy
    ADVANCES IN CONSUMER RESEARCH, VOL XXXVI, 2009, 36 : 555 - 555