Differentially Private Regret Minimization in Episodic Markov Decision Processes

Cited by: 0
Authors: Chowdhury, Sayak Ray [1]; Zhou, Xingyu [2]
Affiliations:
[1] Indian Institute of Science, Bangalore, Karnataka, India
[2] Wayne State University, ECE Department, Detroit, MI, USA
Keywords: ALGORITHMS
DOI: not available
Chinese Library Classification: TP18 [theory of artificial intelligence]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We study regret minimization in finite-horizon tabular Markov decision processes (MDPs) under the constraints of differential privacy (DP). This is motivated by the widespread application of reinforcement learning (RL) to real-world sequential decision-making problems, where protecting users' sensitive and private information is becoming paramount. We consider two variants of DP - joint DP (JDP), where a centralized agent is responsible for protecting users' sensitive data, and local DP (LDP), where information must be protected directly on the user side. We first propose two general frameworks - one for policy optimization and another for value iteration - for designing private, optimistic RL algorithms. We then instantiate these frameworks with suitable privacy mechanisms to satisfy the JDP and LDP requirements and simultaneously obtain sublinear regret guarantees. The regret bounds show that under JDP the cost of privacy is only a lower-order additive term, while under the stronger protection of LDP the cost is multiplicative. Finally, the regret bounds follow from a unified analysis, which, we believe, can be extended beyond tabular MDPs.
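To make the high-level description above more concrete, the following is a minimal sketch (Python with NumPy) of one generic way a private, optimistic value-iteration step could look: per-(state, action) visit, reward, and transition statistics are released through the Laplace mechanism, and the planner runs optimistic backward induction on the noisy model. The noise scales, the exploration bonus, and the clipping are illustrative assumptions; this is not the paper's actual mechanism, calibration, or analysis.

```python
# Illustrative sketch only, not the algorithm from the paper: a generic
# count-based optimistic value iteration in which per-(state, action)
# statistics are released through the Laplace mechanism before planning.
# Noise scales, exploration bonus, and clipping are assumptions.
import numpy as np

rng = np.random.default_rng(0)

S, A, H = 5, 3, 10        # toy numbers of states, actions, and horizon steps
eps_dp = 1.0              # assumed privacy budget driving the Laplace noise scale

# Raw (sensitive) statistics that would be accumulated from user trajectories.
visit_count = rng.integers(1, 50, size=(S, A)).astype(float)
reward_sum = rng.random((S, A)) * visit_count
trans_count = rng.integers(0, 20, size=(S, A, S)).astype(float)

def noisy(x, scale, floor):
    """Release a Laplace-perturbed copy of a count, clipped below at `floor`."""
    return np.maximum(x + rng.laplace(0.0, scale, size=x.shape), floor)

# Privatized releases (the scale H / eps_dp is for illustration only).
n_tilde = noisy(visit_count, H / eps_dp, 1.0)
r_tilde = reward_sum + rng.laplace(0.0, H / eps_dp, size=(S, A))
p_tilde = noisy(trans_count, H / eps_dp, 1e-3)
p_hat = p_tilde / p_tilde.sum(axis=2, keepdims=True)   # noisy empirical transitions

# Optimistic backward induction (value iteration) on the noisy model.
Q = np.zeros((H + 1, S, A))
V = np.zeros((H + 1, S))
for h in range(H - 1, -1, -1):
    r_hat = np.clip(r_tilde / n_tilde, 0.0, 1.0)
    bonus = np.sqrt(H**2 / n_tilde)                     # assumed UCB-style bonus
    Q[h] = np.minimum(r_hat + p_hat @ V[h + 1] + bonus, H)
    V[h] = Q[h].max(axis=1)

policy = Q[:H].argmax(axis=2)   # greedy policy w.r.t. the optimistic Q-values
print("Greedy action per (step, state):\n", policy)
```

In a JDP-style setting the noisy releases would come from a centralized agent, whereas under LDP each user would perturb their own statistics before they ever reach the agent, which is consistent with the abstract's point that the stronger protection shows up as a multiplicative rather than additive cost in the regret.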
Pages: 6375-6383
Number of pages: 9