Learning Bounds for Risk-sensitive Learning

Times Cited: 0
Authors
Lee, Jaeho [1]
Park, Sejun [2]
Shin, Jinwoo [1,2]
Affiliations
[1] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Daejeon, South Korea
[2] Korea Adv Inst Sci & Technol KAIST, Grad Sch AI, Daejeon, South Korea
Funding
National Research Foundation of Singapore;
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss, instead of the standard expected loss. In this paper, we propose to study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents (OCE): our general scheme can handle various known risks, e.g., the entropic risk, mean-variance, and conditional value-at-risk, as special cases. We provide two learning bounds on the performance of the empirical OCE minimizer. The first result gives an OCE guarantee based on the Rademacher average of the hypothesis space, which generalizes and improves existing results on the expected loss and the conditional value-at-risk. The second result, based on a novel variance-based characterization of OCE, gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE. Finally, we demonstrate the practical implications of the proposed bounds via exploratory experiments on neural networks.
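As a quick orientation for readers of this record (not part of the paper's abstract): the OCE objective referred to above is the standard one, OCE_phi(Z) = inf over lambda of { lambda + E[phi(Z - lambda)] }, and its empirical version replaces the expectation with a sample mean over observed losses. The sketch below is a minimal NumPy illustration under that standard definition, instantiating the disutility phi for conditional value-at-risk (phi(t) = max(t, 0)/alpha) and for the entropic risk (phi(t) = e^t - 1); the function names, the grid search over lambda, and the synthetic loss sample are assumptions made purely for illustration and are not drawn from the paper.

import numpy as np

def empirical_oce(losses, phi, n_grid=1001):
    # Empirical optimized certainty equivalent of a loss sample:
    #   min over lambda of  lambda + mean(phi(losses - lambda)).
    # The objective is convex in lambda; a grid search over
    # [min(losses), max(losses)] is used here purely for clarity.
    lambdas = np.linspace(losses.min(), losses.max(), n_grid)
    values = lambdas + np.array([np.mean(phi(losses - lam)) for lam in lambdas])
    return float(values.min())

def cvar_phi(alpha):
    # CVaR at level alpha is the OCE with disutility phi(t) = max(t, 0) / alpha.
    return lambda t: np.maximum(t, 0.0) / alpha

# The entropic risk log E[exp(Z)] is the OCE with phi(t) = exp(t) - 1.
entropic_phi = np.expm1

rng = np.random.default_rng(0)
losses = rng.uniform(0.0, 1.0, size=10_000)  # stand-in per-example losses

print("mean loss            :", losses.mean())
print("empirical CVaR (10%) :", empirical_oce(losses, cvar_phi(alpha=0.1)))
print("empirical entropic   :", empirical_oce(losses, entropic_phi))

The quadratic disutility phi(t) = t + (c/2) t^2 similarly recovers a mean-variance objective E[Z] + (c/2) Var(Z), which is why a single OCE analysis can cover the three risks named in the abstract.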
Pages: 13
Related Papers
50 records in total
  • [31] Risk-sensitive reinforcement learning algorithms with generalized average criterion
    Chang-ming Yin
    Wang Han-xing
    Zhao Fei
    Applied Mathematics and Mechanics, 2007, 28 : 405 - 416
  • [32] Gradient-Based Inverse Risk-Sensitive Reinforcement Learning
    Mazumdar, Eric
    Ratliff, Lillian J.
    Fiez, Tanner
    Sastry, S. Shankar
    2017 IEEE 56TH ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2017,
  • [33] Risk-Sensitive Reinforcement Learning via Policy Gradient Search
    Prashanth, L. A.
    Fu, Michael C.
    FOUNDATIONS AND TRENDS IN MACHINE LEARNING, 2022, 15 (05) : 537 - 693
  • [34] Risk-sensitive learning is a winning strategy for leading an urban invasion
    Breen, Alexis J.
    Deffner, Dominik
    ELIFE, 2024, 12
  • [35] Risk-Sensitive Evaluation and Learning to Rank using Multiple Baselines
    Dincer, B. Taner
    Macdonald, Craig
    Ounis, Iadh
    SIGIR'16: PROCEEDINGS OF THE 39TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2016, : 483 - 492
  • [36] Risk-sensitive reinforcement learning algorithms with generalized average criterion
    Yin Chang-ming
    Wang Han-xing
    Zhao Fei
    APPLIED MATHEMATICS AND MECHANICS-ENGLISH EDITION, 2007, 28 (03) : 405 - 416
  • [37] Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach
    Fei, Yingjie
    Yang, Zhuoran
    Wang, Zhaoran
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [38] Risk-sensitive reinforcement learning applied to control under constraints
    Geibel, P.
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2005, 24 : 81 - 108
  • [39] Risk-Sensitive Portfolio Management by using Distributional Reinforcement Learning
    Harnpadungkij, Thammasorn
    Chaisangmongkon, Warasinee
    Phunchongharn, Phond
    2019 IEEE 10TH INTERNATIONAL CONFERENCE ON AWARENESS SCIENCE AND TECHNOLOGY (ICAST 2019), 2019, : 110 - 115
  • [40] Risk-Sensitive Reinforcement Learning for URLLC Traffic in Wireless Networks
    Ben Khalifa, Nesrine
    Assaad, Mohamad
    Debbah, Merouane
    2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2019,