Variational Principles for Mirror Descent and Mirror Langevin Dynamics

Cited by: 0
Authors
Tzen, Belinda [1 ]
Raj, Anant [2 ,3 ]
Raginsky, Maxim [3 ]
Bach, Francis [2 ]
Affiliations
[1] Columbia Univ, Dept Stat, New York, NY 10027 USA
[2] PSL Res Univ, Ecole Normale Super, INRIA, F-75006 Paris, France
[3] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
Source
Keywords
Mirrors; Trajectory; Optimal control; Dynamical systems; Costs; Closed loop systems; Geometry; Optimization; Stochastic optimal control
DOI
10.1109/LCSYS.2023.3274069
CLC Classification
TP [Automation and Computer Technology];
Discipline Code
0812 ;
Abstract
Mirror descent, introduced by Nemirovski and Yudin in the 1970s, is a primal-dual convex optimization method that can be tailored to the geometry of the optimization problem at hand through the choice of a strongly convex potential function. It arises as a basic primitive in a variety of applications, including large-scale optimization, machine learning, and control. This letter proposes a variational formulation of mirror descent and of its stochastic variant, mirror Langevin dynamics. The main idea, inspired by the classic work of Brezis and Ekeland on variational principles for gradient flows, is to show that mirror descent emerges as a closed-loop solution for a certain optimal control problem, and the Bellman value function is given by the Bregman divergence between the initial condition and the global minimizer of the objective function.
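The mirror descent primitive discussed in the abstract can be illustrated with a minimal sketch. The example below is not the paper's variational formulation; it is only the classical discrete-time method, here instantiated with the negative-entropy potential on the probability simplex (the exponentiated-gradient update), which is the standard textbook choice of strongly convex potential. All names and parameters (`mirror_descent_simplex`, `eta`, `steps`) are illustrative.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta=0.1, steps=500):
    """Mirror descent on the probability simplex with the negative-entropy
    potential. The update is x_{k+1} proportional to x_k * exp(-eta * grad(x_k)),
    i.e. a gradient step in the dual (log) coordinates followed by
    renormalization, which plays the role of the Bregman projection."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))  # multiplicative (dual-space) step
        x /= x.sum()                    # project back onto the simplex
    return x

# Toy example: minimize the linear objective f(x) = <c, x> over the simplex.
# The minimizer is the vertex corresponding to the smallest entry of c.
c = np.array([3.0, 1.0, 2.0])
x = mirror_descent_simplex(lambda x: c, np.ones(3) / 3)
```

For this linear objective the iterates concentrate exponentially fast on the coordinate with the smallest cost, which is the geometry-adapted behavior (here, the simplex geometry) that the choice of potential buys over plain projected gradient descent.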
Pages: 1542-1547
Number of pages: 6