Differentially Private Regret Minimization in Episodic Markov Decision Processes

Cited by: 0
Authors
Chowdhury, Sayak Ray [1 ]
Zhou, Xingyu [2 ]
Affiliations
[1] Indian Inst Sci, Bangalore, Karnataka, India
[2] Wayne State Univ, ECE Dept, Detroit, MI USA
Keywords
ALGORITHMS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We study regret minimization in finite-horizon tabular Markov decision processes (MDPs) under the constraints of differential privacy (DP). This is motivated by the widespread application of reinforcement learning (RL) to real-world sequential decision-making problems, where protecting users' sensitive and private information is becoming paramount. We consider two variants of DP: joint DP (JDP), where a centralized agent is responsible for protecting users' sensitive data, and local DP (LDP), where information must be protected directly on the user side. We first propose two general frameworks - one for policy optimization and another for value iteration - for designing private, optimistic RL algorithms. We then instantiate these frameworks with suitable privacy mechanisms to satisfy JDP and LDP requirements, and simultaneously obtain sublinear regret guarantees. The regret bounds show that under JDP, the cost of privacy is only a lower-order additive term, while for the stronger privacy protection of LDP, the cost suffered is multiplicative. Finally, the regret bounds are obtained by a unified analysis, which, we believe, can be extended beyond tabular MDPs.
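The LDP setting described in the abstract requires each user to privatize their own statistics before anything leaves their device. A standard building block for this is the Laplace mechanism applied to a user's per-(state, action) visitation counts. The sketch below is a generic illustration of that idea, not the paper's exact mechanism; the function name, the `sensitivity` parameter, and the simplification of reporting raw noised counts are all assumptions for illustration.

```python
import numpy as np

def ldp_perturb_counts(visit_counts, epsilon, sensitivity, rng=None):
    """Laplace mechanism applied locally to a user's visitation counts.

    The user adds noise on their own side before reporting, so the
    learning agent only ever observes privatized statistics. With noise
    scale = sensitivity / epsilon, the released vector satisfies
    epsilon-DP for count vectors of the stated L1 sensitivity.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    counts = np.asarray(visit_counts, dtype=float)
    # Independent Laplace noise on every (state, action) entry.
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                        size=counts.shape)
    return counts + noise
```

Under such a mechanism the agent aggregates noised counts across users; because every user injects noise of scale proportional to 1/epsilon, the estimation error (and hence the regret) picks up a multiplicative dependence on the privacy level, consistent with the LDP bounds discussed above.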
Pages: 6375 - 6383 (9 pages)
Related Papers
50 in total
  • [1] Differentially Private Reward Functions for Markov Decision Processes
    Benvenuti, Alexander
    Hawkins, Calvin
    Falling, Brandon
    Chen, Bo
    Bialy, Brendan
    Dennis, Miriam
    Hale, Matthew
    2024 IEEE CONFERENCE ON CONTROL TECHNOLOGY AND APPLICATIONS, CCTA 2024, 2024, : 631 - 636
  • [2] Reinforcement Learning Algorithms for Regret Minimization in Structured Markov Decision Processes
    Prabuchandran, K. J.
    Bodas, Tejas
    Tulabandhula, Theja
    AAMAS'16: PROCEEDINGS OF THE 2016 INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS & MULTIAGENT SYSTEMS, 2016, : 1289 - 1290
  • [3] The empirical Bayes envelope and regret minimization in competitive Markov decision processes
    Mannor, S
    Shimkin, N
    MATHEMATICS OF OPERATIONS RESEARCH, 2003, 28 (02) : 327 - 345
  • [4] A Duality Approach for Regret Minimization in Average-Reward Ergodic Markov Decision Processes
    Gong, Hao
    Wang, Mengdi
    LEARNING FOR DYNAMICS AND CONTROL, VOL 120, 2020, 120 : 862 - 883
  • [5] Square-Root Regret Bounds for Continuous-Time Episodic Markov Decision Processes
    Gao, Xuefeng
    Zhou, Xunyu
    MATHEMATICS OF OPERATIONS RESEARCH, 2025,
  • [6] Dynamic Regret of Online Markov Decision Processes
    Zhao, Peng
    Li, Long-Fei
    Zhou, Zhi-Hua
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [7] Parametric Regret in Uncertain Markov Decision Processes
    Xu, Huan
    Mannor, Shie
    PROCEEDINGS OF THE 48TH IEEE CONFERENCE ON DECISION AND CONTROL, 2009 HELD JOINTLY WITH THE 2009 28TH CHINESE CONTROL CONFERENCE (CDC/CCC 2009), 2009, : 3606 - 3613
  • [8] Episodic task learning in Markov decision processes
    Lin, Yong
    Makedon, Fillia
    Xu, Yurong
    ARTIFICIAL INTELLIGENCE REVIEW, 2011, 36 (02) : 87 - 98
  • [9] Variance minimization of parameterized Markov decision processes
    Xia, Li
    DISCRETE EVENT DYNAMIC SYSTEMS-THEORY AND APPLICATIONS, 2018, 28 (01): : 63 - 81