First-order Convergence Theory for Weakly-Convex-Weakly-Concave Min-max Problems

Cited by: 0
Authors
Liu, Mingrui [1 ]
Rafique, Hassan [2 ]
Lin, Qihang [3 ]
Yang, Tianbao [1 ]
Affiliations
[1] Univ Iowa, Dept Comp Sci, Iowa City, IA 52242 USA
[2] Univ Iowa, Dept Math, Iowa City, IA 52242 USA
[3] Univ Iowa, Business Analyt Dept, Iowa City, IA 52242 USA
Funding
US National Science Foundation
Keywords
Weakly-Convex-Weakly-Concave; Min-max; Generative Adversarial Nets; Variational Inequality; First-order Convergence; VARIANCE REDUCTION; VARIATIONAL-INEQUALITIES; MONOTONE-OPERATORS; PROX-METHOD;
DOI
Not available
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
In this paper, we consider first-order convergence theory and algorithms for solving a class of non-convex non-concave min-max saddle-point problems whose objective function is weakly convex in the minimization variables and weakly concave in the maximization variables. This class has many important applications in machine learning, including training Generative Adversarial Nets (GANs). We propose an algorithmic framework motivated by the inexact proximal point method, in which the weakly monotone variational inequality (VI) corresponding to the original min-max problem is solved by approximately solving a sequence of strongly monotone VIs, each constructed by adding a strongly monotone mapping to the original gradient mapping. We prove that the generic algorithmic framework converges, in a first-order sense, to a nearly stationary solution of the original min-max problem, and we establish different rates by employing different algorithms to solve each strongly monotone VI. Experiments verify the convergence theory and also demonstrate the effectiveness of the proposed methods on training GANs.
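The framework described in the abstract can be sketched in a few lines: at each outer step, a strongly monotone regularization centered at the current iterate is added to the original gradient mapping, and the resulting strongly monotone VI is solved approximately by an inner loop. The sketch below is an illustrative reading of that idea, not the paper's exact algorithm; the inner solver (plain gradient descent-ascent), the regularization weight `gamma`, the step size `eta`, and all iteration counts are assumptions chosen for illustration.

```python
import numpy as np

def inexact_proximal_point_minmax(grad_x, grad_y, x0, y0, gamma=1.0,
                                  outer_iters=50, inner_iters=200, eta=0.05):
    """Sketch of an inexact proximal point scheme for min_x max_y f(x, y).

    Each outer step forms a strongly monotone VI by adding the strongly
    monotone mapping gamma * (z - z_k) to the original gradient mapping,
    then solves it approximately with gradient descent-ascent (an
    illustrative choice of inner solver).
    """
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(outer_iters):
        xk, yk = x.copy(), y.copy()  # proximal center z_k = (x_k, y_k)
        for _ in range(inner_iters):
            # Regularized gradients of
            # f(x, y) + (gamma/2)||x - xk||^2 - (gamma/2)||y - yk||^2
            gx = grad_x(x, y) + gamma * (x - xk)  # descent direction in x
            gy = grad_y(x, y) - gamma * (y - yk)  # ascent direction in y
            x = x - eta * gx
            y = y + eta * gy
    return x, y
```

On the simple bilinear objective f(x, y) = x * y (for which plain gradient descent-ascent diverges), the proximal regularization makes the inner problems strongly monotone and the outer iterates contract toward the saddle point at the origin.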
Pages: 34