Differentially Private Regret Minimization in Episodic Markov Decision Processes

Cited by: 0
Authors
Chowdhury, Sayak Ray [1 ]
Zhou, Xingyu [2 ]
Affiliations
[1] Indian Inst Sci, Bangalore, Karnataka, India
[2] Wayne State Univ, ECE Dept, Detroit, MI USA
Keywords
ALGORITHMS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
We study regret minimization in finite-horizon tabular Markov decision processes (MDPs) under the constraints of differential privacy (DP). This is motivated by the widespread application of reinforcement learning (RL) to real-world sequential decision-making problems, where protecting users' sensitive and private information is becoming paramount. We consider two variants of DP: joint DP (JDP), where a centralized agent is responsible for protecting users' sensitive data, and local DP (LDP), where information needs to be protected directly on the user side. We first propose two general frameworks, one for policy optimization and one for value iteration, for designing private, optimistic RL algorithms. We then instantiate these frameworks with suitable privacy mechanisms to satisfy the JDP and LDP requirements, and simultaneously obtain sublinear regret guarantees. The regret bounds show that under JDP the cost of privacy is only a lower-order additive term, while under the stronger LDP protection the cost is multiplicative. Finally, the regret bounds are obtained via a unified analysis which, we believe, can be extended beyond tabular MDPs.
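Optimistic private RL algorithms of the kind the abstract describes typically release noisy versions of the agent's visit-count statistics and then enlarge the exploration bonus to absorb the added noise. The sketch below illustrates this general idea only; it is not the authors' actual mechanism, and the function names, the Laplace calibration, and the bonus form are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_counts(counts: np.ndarray, epsilon: float) -> np.ndarray:
    """Release visit counts via the Laplace mechanism.

    A single user (episode) changes each count by at most 1, so adding
    independent Laplace(1/epsilon) noise to every entry is the standard
    calibration for epsilon-DP release of counting queries.
    """
    noise = rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return counts + noise

# Toy state-action visit counts for S = 3 states, A = 2 actions.
true_counts = np.array([[10.0, 4.0], [7.0, 1.0], [0.0, 2.0]])
epsilon = 1.0
noisy_counts = privatize_counts(true_counts, epsilon)

# An optimistic algorithm would build its exploration bonus from the
# noisy counts, adding an extra O(1/(epsilon * n)) term so that the
# bonus still dominates both sampling error and privacy noise.
n = np.maximum(noisy_counts, 1.0)  # clip to avoid division by tiny/negative values
bonus = np.sqrt(1.0 / n) + (1.0 / epsilon) / n
```

This additive inflation of the bonus is one intuition for why, in the JDP regime, the cost of privacy shows up only as a lower-order additive term in the regret.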
Pages: 6375-6383 (9 pages)