Differentially Private Regret Minimization in Episodic Markov Decision Processes

Cited by: 0
Authors
Chowdhury, Sayak Ray [1 ]
Zhou, Xingyu [2 ]
Affiliations
[1] Indian Inst Sci, Bangalore, Karnataka, India
[2] Wayne State Univ, ECE Dept, Detroit, MI USA
Keywords
ALGORITHMS;
DOI
Not available
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We study regret minimization in finite-horizon tabular Markov decision processes (MDPs) under the constraints of differential privacy (DP). This is motivated by the widespread application of reinforcement learning (RL) to real-world sequential decision-making problems, where protecting users' sensitive and private information is becoming paramount. We consider two variants of DP: joint DP (JDP), where a centralized agent is responsible for protecting users' sensitive data, and local DP (LDP), where information needs to be protected directly on the user side. We first propose two general frameworks, one for policy optimization and another for value iteration, for designing private, optimistic RL algorithms. We then instantiate these frameworks with suitable privacy mechanisms to satisfy the JDP and LDP requirements, and simultaneously obtain sublinear regret guarantees. The regret bounds show that under JDP, the cost of privacy is only a lower-order additive term, while under the stronger privacy protection of LDP, the cost suffered is multiplicative. Finally, the regret bounds are obtained via a unified analysis, which, we believe, can be extended beyond tabular MDPs.
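To make the privacy mechanisms concrete: the standard building block behind count-based private RL is adding calibrated Laplace noise to the state-action visit statistics before the learner uses them. The sketch below is a minimal illustration of this idea, not the paper's actual algorithm; the function name `laplace_privatizer`, the table sizes, and the sensitivity-1 assumption (each user changes every count by at most 1) are all illustrative assumptions.

```python
import numpy as np

def laplace_privatizer(counts, epsilon, rng):
    """Release visit counts with Laplace noise of scale 1/epsilon.

    Illustrative sketch: assumes each user perturbs every count by
    at most 1, so Laplace(1/epsilon) noise suffices for an
    epsilon-DP release of the count table.
    """
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    return counts + noise

# Toy tabular setting: S states, A actions, one aggregated count table.
rng = np.random.default_rng(0)
S, A, eps = 5, 3, 1.0

true_counts = rng.integers(0, 50, size=(S, A)).astype(float)
noisy = laplace_privatizer(true_counts, eps, rng)

# An optimistic algorithm would then widen its confidence bonuses to
# account for both sampling error and the injected privacy noise.
max_abs_err = np.abs(noisy - true_counts).max()
```

Under JDP the noise is injected once by the centralized agent per release, whereas under LDP each user perturbs their own trajectory statistics before sending them, which is what makes the LDP noise accumulate into a multiplicative regret cost.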
Pages: 6375-6383
Page count: 9