A FUNCTIONAL MODEL METHOD FOR NONCONVEX NONSMOOTH CONDITIONAL STOCHASTIC OPTIMIZATION

Times Cited: 0
Authors
Ruszczynski, Andrzej [1 ]
Yang, Shangzhe [1 ]
Affiliations
[1] Rutgers State Univ, Dept Management Sci & Informat Syst, Piscataway, NJ 08854 USA
Keywords
conditional stochastic optimization; nonsmooth optimization; stochastic subgradient methods; reparametrization; ALGORITHMS; APPROXIMATIONS;
DOI
10.1137/23M1617965
Chinese Library Classification (CLC)
O29 [Applied Mathematics];
Discipline Classification Code
070104;
Abstract
We consider stochastic optimization problems involving an expected value of a nonlinear function of a base random vector and a conditional expectation of another function depending on the base random vector, a dependent random vector, and the decision variables. We call such problems conditional stochastic optimization problems. They arise in many applications, such as uplift modeling, reinforcement learning, and contextual optimization. We propose a specialized single time-scale stochastic method for such problems with a Lipschitz smooth outer function and a generalized differentiable inner function. In the method, we approximate the inner conditional expectation with a rich parametric model whose mean squared error satisfies a stochastic version of a Łojasiewicz condition. The model is used by an inner learning algorithm. The main feature of our approach is that unbiased stochastic estimates of the directions used by the method can be generated with one observation from the joint distribution per iteration, which makes it applicable to real-time learning. The directions, however, are not gradients or subgradients of any overall objective function. We prove the convergence of the method with probability one, using the method of differential inclusions and a specially designed Lyapunov function, involving a stochastic generalization of the Bregman distance. Finally, a numerical illustration demonstrates the viability of our approach.
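For concreteness, the problem class described in the abstract can be sketched as follows; this formulation is inferred from the abstract's wording only, and the symbols X, Y, x, f, g, and the feasible set \mathcal{X} are notational assumptions rather than the paper's own notation:

\min_{x \in \mathcal{X}} \; \mathbb{E}_{X}\!\left[ f\!\left( X,\; \mathbb{E}_{Y \mid X}\!\left[ g(X, Y, x) \right] \right) \right]
% X: base random vector;  Y: dependent random vector;  x: decision variables
% f: Lipschitz smooth outer function;  g: generalized differentiable inner function

The inner conditional expectation \mathbb{E}_{Y \mid X}[ g(X, Y, x) ] is the quantity the method approximates with a parametric functional model, and, per the abstract, each iteration requires only one observation of the pair (X, Y) from their joint distribution to form an unbiased estimate of the search direction.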
Pages: 3064 - 3087
Number of pages: 24
Related Papers
50 items in total
  • [21] A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
    Yang, Minghan
    Milzarek, Andre
    Wen, Zaiwen
    Zhang, Tong
    MATHEMATICAL PROGRAMMING, 2022, 194 (1-2) : 257 - 303
  • [22] An inertial stochastic Bregman generalized alternating direction method of multipliers for nonconvex and nonsmooth optimization
    Liu, Longhui
    Han, Congying
    Guo, Tiande
    Liao, Shichen
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 276
  • [23] Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization
    Li, Zhize
    Li, Jian
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [24] Distributed Stochastic Consensus Optimization With Momentum for Nonconvex Nonsmooth Problems
    Wang, Zhiguo
    Zhang, Jiawei
    Chang, Tsung-Hui
    Li, Jian
    Luo, Zhi-Quan
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 4486 - 4501
  • [26] Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization
    Li, Zhize
    Li, Jian
    arXiv, 2022
  • [27] STOCHASTIC GENERALIZED-DIFFERENTIABLE FUNCTIONS IN THE PROBLEM OF NONCONVEX NONSMOOTH STOCHASTIC OPTIMIZATION
    NORKIN, VI
    CYBERNETICS, 1986, 22 (06): : 804 - 809
  • [28] A Nonconvex Proximal Bundle Method for Nonsmooth Constrained Optimization
    Shen, Jie
    Guo, Fang-Fang
    Xu, Na
    COMPLEXITY, 2024, 2024
  • [29] A new trust region method for nonsmooth nonconvex optimization
    Hoseini, N.
    Nobakhtian, S.
    OPTIMIZATION, 2018, 67 (08) : 1265 - 1286
  • [30] A TRUST-REGION METHOD FOR NONSMOOTH NONCONVEX OPTIMIZATION
    Chen, Ziang
    Milzarek, Andre
    Wen, Zaiwen
    JOURNAL OF COMPUTATIONAL MATHEMATICS, 2023, 41 (04): : 683 - 716