First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces

Cited by: 0
Authors
Jordan, Michael I. [1 ]
Lin, Tianyi [1 ]
Vlatakis-Gkaragkounis, Emmanouil V. [1 ]
Affiliation
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
Keywords
PROXIMAL POINT ALGORITHM; MONOTONE VECTOR-FIELDS; STOCHASTIC-APPROXIMATION; VARIATIONAL-INEQUALITIES; GEOMETRIC OPTIMIZATION; UNIFIED ANALYSIS; LINE-SEARCH; GRADIENT; MANIFOLDS; ACCELERATION
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
From optimal transport to robust dimensionality reduction, a plethora of machine learning applications can be cast as min-max optimization problems over Riemannian manifolds. Though many min-max algorithms have been analyzed in the Euclidean setting, it has proved elusive to translate these results to the Riemannian case. Zhang et al. have recently shown that geodesically convex-concave Riemannian problems always admit saddle-point solutions. Inspired by this result, we study whether a performance gap between Riemannian algorithms and optimal Euclidean convex-concave algorithms is necessary. We answer this question in the negative: we prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate in the geodesically strongly-convex-concave case, matching the Euclidean result. Our results also extend to the stochastic and non-smooth cases, where RCEG and Riemannian gradient descent ascent (RGDA) achieve near-optimal convergence rates up to factors depending on the curvature of the manifold.
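To make the method concrete, the following is a minimal sketch of the Euclidean extragradient update that RCEG generalizes (the Riemannian version replaces the straight-line steps with exponential maps and a curvature correction; that machinery is omitted here). The toy objective f(x, y) = x²/2 + xy − y²/2 is an assumption chosen for illustration: it is strongly convex in x and strongly concave in y, with its unique saddle point at the origin, so the last iterate should converge linearly.

```python
def grad_x(x, y):
    # ∂f/∂x for the toy saddle objective f(x, y) = x^2/2 + x*y - y^2/2
    return x + y

def grad_y(x, y):
    # ∂f/∂y for the same objective
    return x - y

def extragradient(x, y, eta=0.1, iters=200):
    """Euclidean extragradient: descend in x, ascend in y."""
    for _ in range(iters):
        # Extrapolation (half) step from the current iterate.
        xh = x - eta * grad_x(x, y)
        yh = y + eta * grad_y(x, y)
        # Corrected step: move the *original* iterate using the
        # gradients evaluated at the extrapolated point.
        x = x - eta * grad_x(xh, yh)
        y = y + eta * grad_y(xh, yh)
    return x, y

x_last, y_last = extragradient(1.0, 1.0)
print(x_last, y_last)  # last iterate approaches the saddle point (0, 0)
```

Plain simultaneous gradient descent ascent can cycle on such problems; the extrapolation step is what damps the rotation and yields last-iterate convergence, which is the behavior the paper recovers on manifolds.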
Pages: 18