Challenging Common Assumptions in Convex Reinforcement Learning

Cited: 0
Authors
Mutti, Mirco [1 ,2 ]
De Santi, Riccardo [3 ]
De Bartolomeis, Piersilvio [3 ]
Restelli, Marcello [1 ]
Affiliations
[1] Politecn Milan, Milan, Italy
[2] Univ Bologna, Bologna, Italy
[3] Swiss Fed Inst Technol, Zurich, Switzerland
Keywords
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The classic Reinforcement Learning (RL) formulation concerns the maximization of a scalar reward function. More recently, convex RL has been introduced to extend the RL formulation to all objectives that are convex functions of the state distribution induced by a policy. Notably, convex RL covers several relevant applications that do not fall into the scalar formulation, including imitation learning, risk-averse RL, and pure exploration. In classic RL, it is common to optimize an infinite trials objective, which accounts for the state distribution instead of the empirical state visitation frequencies, even though the actual number of trajectories is always finite in practice. This is theoretically sound, since the infinite trials and finite trials objectives are equivalent and thus lead to the same optimal policy. In this paper, we show that this hidden assumption does not hold in convex RL. In particular, we prove that erroneously optimizing the infinite trials objective in place of the actual finite trials one, as is usually done, can lead to a significant approximation error. Since the finite trials setting is the default in both simulated and real-world RL, we believe shedding light on this issue will lead to better approaches and methodologies for convex RL, impacting relevant research areas such as imitation learning, risk-averse RL, and pure exploration, among others.
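The gap between the two objectives described in the abstract can be sketched numerically. The following toy example is illustrative only and is not taken from the paper: it assumes a one-step MDP with two states, a stochastic policy, and entropy of the state distribution as the convex-RL objective. With a single finite trajectory, the empirical visitation frequency is a point mass, so the finite trials value is zero for every policy, while the infinite trials value is the entropy of the induced state distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: one-step MDP with two states. A stochastic
# policy visits state 0 with probability p and state 1 otherwise.
p = 0.5
d = np.array([p, 1 - p])  # state distribution induced by the policy

def entropy(q):
    """Shannon entropy of a probability vector (0 log 0 := 0)."""
    q = q[q > 0]
    return -np.sum(q * np.log(q))

# Infinite trials objective: the function applied to the state
# distribution itself.
infinite_trials = entropy(d)  # log(2) for p = 0.5

# Finite trials objective with a single trajectory of length 1: the
# empirical visitation frequency is a point mass on the one visited
# state, so its entropy is 0 in every realization, for every policy.
n_runs = 10_000
realizations = []
for _ in range(n_runs):
    s = rng.choice(2, p=d)   # sample the visited state
    emp = np.zeros(2)
    emp[s] = 1.0             # empirical state visitation frequencies
    realizations.append(entropy(emp))
finite_trials = float(np.mean(realizations))

print(infinite_trials, finite_trials)  # ~0.693 vs 0.0
```

The gap here equals log 2 and does not vanish no matter which policy is chosen, which mirrors the abstract's point that optimizing the infinite trials objective can be a poor proxy for the finite trials one. With a scalar reward, by contrast, expectation and objective commute by linearity, so the two formulations coincide.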
Pages: 14
Related Papers
50 records
  • [1] Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
    Locatello, Francesco
    Bauer, Stefan
    Lucic, Mario
    Rätsch, Gunnar
    Gelly, Sylvain
    Schölkopf, Bernhard
    Bachem, Olivier
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [2] Pragmatic Terrorism: Challenging Common Assumptions
    Pollichieni, Luciano
    GLOBAL POLICY, 2021, 12 (02) : 239 - 240
  • [3] Challenging "common-sense" assumptions in bioethics
    Lustig, BA
    JOURNAL OF MEDICINE AND PHILOSOPHY, 2005, 30 (04) : 325 - 329
  • [4] Reinforcement Learning with Convex Constraints
    Miryoosefi, Sobhan
    Brantley, Kiante
    Daumé III, Hal
    Dudík, Miroslav
    Schapire, Robert E.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [5] Challenging Common Assumptions about Catastrophic Forgetting and Knowledge Accumulation
    Lesort, Timothée
    Ostapenko, Oleksiy
    Rodriguez, Pau
    Misra, Diganta
    Arefin, Md Rifat
    Charlin, Laurent
    Rish, Irina
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 232, 2023, 232 : 43 - 65
  • [6] Challenging assumptions
    Saunders, HD
    PHYSICAL THERAPY, 1998, 78 (07) : 783 - 784
  • [7] Challenging Assumptions
    Morgan, John
    GEOGRAPHY, 2009, 94 : 115 - 118
  • [8] Convex Reinforcement Learning in Finite Trials
    Mutti, Mirco
    De Santi, Riccardo
    De Bartolomeis, Piersilvio
    Restelli, Marcello
    JOURNAL OF MACHINE LEARNING RESEARCH, 2023, 24
  • [9] Challenging the assumptions about the frequency and coexistence of learning disability types
    Mayes, Susan Dickerson
    Calhoun, Susan L.
    SCHOOL PSYCHOLOGY INTERNATIONAL, 2007, 28 (04) : 437 - 448
  • [10] Challenging assumptions: Mobile Learning for Mathematics Project in South Africa
    Roberts, Nicky
    Vänskä, Riitta
    DISTANCE EDUCATION, 2011, 32 (02) : 243 - 259