Generalization Bounds of Nonconvex-(Strongly)-Concave Stochastic Minimax Optimization

Cited: 0
|
Authors
Zhang, Siqi [1 ]
Hu, Yifan [2 ,3 ]
Zhang, Liang [3 ]
He, Niao [3 ]
Affiliations
[1] Johns Hopkins Univ, Baltimore, MD 21218 USA
[2] Ecole Polytech Fed Lausanne, Lausanne, Switzerland
[3] Swiss Fed Inst Technol, Zurich, Switzerland
Funding
Swiss National Science Foundation;
Keywords
SAMPLE AVERAGE APPROXIMATION; COMPLEXITY; STABILITY;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper studies the generalization performance of algorithms for solving nonconvex-(strongly)-concave (NC-SC / NC-C) stochastic minimax optimization, measured by the stationarity of primal functions. We first establish algorithm-agnostic generalization bounds via uniform convergence between the empirical minimax problem and the population minimax problem. The sample complexities for achieving ε-generalization are Õ(dκ²ε⁻²) and Õ(dε⁻⁴) for the NC-SC and NC-C settings, respectively, where d is the dimension of the primal variable and κ is the condition number. We further study algorithm-dependent generalization bounds via stability arguments. In particular, we introduce a novel stability notion for minimax problems and build a connection between stability and generalization. As a result, we establish algorithm-dependent generalization bounds for stochastic gradient descent ascent (SGDA) and the more general class of sampling-determined algorithms (SDA).
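To make the SGDA algorithm named in the abstract concrete, here is a minimal sketch on a toy minimax problem. The objective f(x, y) = 0.5x² + xy − y² (convex in x, strongly concave in y), the step sizes, and the noise model are all illustrative choices of ours, not the paper's setup; the point is only the alternating descent/ascent update on stochastic gradients.

```python
import random

# Minimal SGDA sketch on a toy problem (illustrative, not the paper's setting):
#   f(x, y) = 0.5 * x**2 + x * y - y**2
# f is convex in x and strongly concave in y; the primal function
# Phi(x) = max_y f(x, y) is minimized at x = 0, and the saddle point is (0, 0).

def sgda(steps=5000, lr_x=0.01, lr_y=0.01, noise=0.1, seed=0):
    rng = random.Random(seed)
    x, y = 1.0, 1.0
    for _ in range(steps):
        # stochastic gradients: exact partial derivatives plus zero-mean noise
        gx = x + y + noise * rng.gauss(0.0, 1.0)        # df/dx
        gy = x - 2.0 * y + noise * rng.gauss(0.0, 1.0)  # df/dy
        x -= lr_x * gx  # gradient descent on the primal variable
        y += lr_y * gy  # gradient ascent on the dual variable
    return x, y

x, y = sgda()
print(x, y)  # both iterates end up near the saddle point (0, 0)
```

The generalization question the paper studies is how the stationarity of the primal function on such an empirical objective transfers to the population objective as the sample size grows.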
Pages: 31
Related Papers
50 records in total
  • [41] A hybrid stochastic optimization framework for composite nonconvex optimization
    Quoc Tran-Dinh
    Pham, Nhan H.
    Phan, Dzung T.
    Nguyen, Lam M.
    MATHEMATICAL PROGRAMMING, 2022, 191 (02) : 1005 - 1071
  • [43] Generalization Bounds for (Wasserstein) Robust Optimization
    An, Yang
    Gao, Rui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [44] Nonconvex Optimization with Dual Bounds and Application in Communication Systems
    Ahadizadeh, Akram
    Anbarzadeh, Sadaf
    JOURNAL OF MATHEMATICS AND COMPUTER SCIENCE-JMCS, 2012, 5 (04): : 304 - 312
  • [45] Stochastic generalized gradient method for nonconvex nonsmooth stochastic optimization
    Yu. M. Ermol'ev
    V. I. Norkin
    Cybernetics and Systems Analysis, 1998, 34 : 196 - 215
  • [47] PROXIMAL POINT ALGORITHMS FOR NONCONVEX-NONCONCAVE MINIMAX OPTIMIZATION PROBLEMS
    Li, Xiao-bing
    Jiang, Yuan-xin
    Yao, Bin
    JOURNAL OF NONLINEAR AND CONVEX ANALYSIS, 2024, 25 (08) : 2007 - 2021
  • [48] Limitations of Information-Theoretic Generalization Bounds for Gradient Descent Methods in Stochastic Convex Optimization
    Haghifam, Mahdi
    Rodriguez-Galvez, Borja
    Thobaben, Ragnar
    Skoglund, Mikael
    Roy, Daniel M.
    Dziugaite, Gintare Karolina
    INTERNATIONAL CONFERENCE ON ALGORITHMIC LEARNING THEORY, VOL 201, 2023, 201 : 663 - 706
  • [49] STOCHASTIC QUASI-NEWTON METHOD FOR NONCONVEX STOCHASTIC OPTIMIZATION
    Wang, Xiao
    Ma, Shiqian
    Goldfarb, Donald
    Liu, Wei
    SIAM JOURNAL ON OPTIMIZATION, 2017, 27 (02) : 927 - 956
  • [50] Stochastic Nested Variance Reduction for Nonconvex Optimization
    Zhou, Dongruo
    Xu, Pan
    Gu, Quanquan
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21