MANIFOLD SAMPLING FOR l1 NONCONVEX OPTIMIZATION

Cited by: 13
Authors
Larson, Jeffrey [1 ]
Menickelly, Matt [1 ,2 ]
Wild, Stefan M. [1 ]
Affiliations
[1] Argonne Natl Lab, Math & Comp Sci Div, Lemont, IL 60439 USA
[2] Lehigh Univ, Dept Ind & Syst Engn, HS Mohler Lab, Bethlehem, PA 18015 USA
Keywords
composite nonsmooth optimization; gradient sampling; derivative-free optimization; trust-region algorithms; nonsmooth optimization; convergence
DOI
10.1137/15M1042097
CLC Classification
O29 [Applied Mathematics]
Discipline Code
070104
Abstract
We present a new algorithm, called manifold sampling, for the unconstrained minimization of a nonsmooth composite function h ∘ F when h has known structure. In particular, by classifying points in the domain of the nonsmooth function h into manifolds, we adapt search directions within a trust-region framework based on knowledge of manifolds intersecting the current trust region. We motivate this idea through a study of l1 functions, where it is trivial to classify objective function manifolds using zeroth-order information from the constituent functions F_i, and give an explicit statement of a manifold sampling algorithm in this case. We prove that all cluster points of iterates generated by this algorithm are stationary in the Clarke sense. We prove a similar result for a stochastic variant of the algorithm. Additionally, our algorithm can accept iterates that are points where h is nondifferentiable and requires only an approximation of gradients of F at the trust-region center. Numerical results for several variants of the algorithm show that using manifold information from additional points near the current iterate can improve practical performance. The best variants are also shown to be competitive, particularly in terms of robustness, with other nonsmooth, derivative-free solvers.
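The manifold-classification idea for h(y) = ||y||_1 can be illustrated with a minimal sketch: the manifolds of h are indexed by the sign pattern of the components F_i(x), which is computable from zeroth-order information alone, and a generalized-gradient element of h ∘ F on the active manifold is the Jacobian transpose applied to that sign vector. This is an illustrative simplification, not the authors' implementation; the function names, the forward-difference Jacobian, and the tolerance `tol` are assumptions for the sketch.

```python
import numpy as np

def manifold_signs(F_vals, tol=1e-8):
    """Classify the manifold of h(y) = ||y||_1 containing F(x):
    the sign pattern of the components F_i(x) (0 marks the kink).
    `tol` is an assumed threshold for treating a component as zero."""
    return np.where(F_vals > tol, 1, np.where(F_vals < -tol, -1, 0))

def fd_jacobian(F, x, h=1e-6):
    """Forward-difference approximation of the Jacobian of F at x
    (the algorithm only needs approximate gradients of F)."""
    Fx = F(x)
    J = np.empty((len(Fx), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x))
        e[j] = h
        J[:, j] = (F(x + e) - Fx) / h
    return J

def manifold_gradient(F, x):
    """One element of the Clarke generalized gradient of h∘F with
    h = ||.||_1 on the current manifold: J(x)^T sign(F(x))."""
    s = manifold_signs(F(x))
    return fd_jacobian(F, x).T @ s
```

For example, with F(x) = (x_0 - 1, x_1 + 2) at x = (3, 1), both components are positive, so the sign pattern is (1, 1) and the manifold gradient is approximately (1, 1). In the full algorithm, sign patterns gathered from several points in the trust region enlarge the set of generalized-gradient elements used to build the master model's search direction.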
Pages: 2540 - 2563
Page count: 24
Related Papers
50 records total
  • [1] MANIFOLD SAMPLING FOR OPTIMIZATION OF NONCONVEX FUNCTIONS THAT ARE PIECEWISE LINEAR COMPOSITIONS OF SMOOTH COMPONENTS
    Khan, Kamil A.
    Larson, Jeffrey
    Wild, Stefan M.
    SIAM JOURNAL ON OPTIMIZATION, 2018, 28 (04) : 3001 - 3024
  • [2] THE l1 PENALTY FUNCTION METHOD FOR NONCONVEX DIFFERENTIABLE OPTIMIZATION PROBLEMS WITH INEQUALITY CONSTRAINTS
    Antczak, Tadeusz
    ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH, 2010, 27 (05) : 559 - 576
  • [3] A nonconvex l1(l1 - l2) model for image restoration with impulse noise
    Liu, Jingjing
    Ni, Anqi
    Ni, Guoxi
    JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2020, 378
  • [4] MANIFOLD SAMPLING FOR OPTIMIZING NONSMOOTH NONCONVEX COMPOSITIONS
    Larson, Jeffrey
    Menickelly, Matt
    Zhou, Baoyu
    SIAM JOURNAL ON OPTIMIZATION, 2021, 31 (04) : 2638 - 2664
  • [5] Sampling Rates for l1-Synthesis
    Maerz, Maximilian
    Boyer, Claire
    Kahn, Jonas
    Weiss, Pierre
    FOUNDATIONS OF COMPUTATIONAL MATHEMATICS, 2023, 23 (06) : 2089 - 2150
  • [6] NONCONVEX L1/2 REGULARIZATION FOR SPARSE PORTFOLIO SELECTION
    Xu, Fengmin
    Wang, Guan
    Gao, Yuelin
    PACIFIC JOURNAL OF OPTIMIZATION, 2014, 10 (01): 163 - 176
  • [7] Breaking the l1 Recovery Thresholds with Reweighted l1 Optimization
    Xu, Weiyu
    Khajehnejad, M. Amin
    Avestimehr, A. Salman
    Hassibi, Babak
    2009 47TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING, VOLS 1 AND 2, 2009, : 1026 - 1030
  • [8] Subgradient and Sampling Algorithms for l1 Regression
    Clarkson, Kenneth L.
    PROCEEDINGS OF THE SIXTEENTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, 2005, : 257 - 266
  • [9] Manifold optimization-based analysis dictionary learning with an l1/2-norm regularizer
    Li, Zhenni
    Ding, Shuxue
    Li, Yujie
    Yang, Zuyuan
    Xie, Shengli
    Chen, Wuhui
    NEURAL NETWORKS, 2018, 98 : 212 - 222
  • [10] A nonconvex TVq-l1 regularization model and the ADMM based algorithm
    Fang, Zhuang
    Tang, Liming
    Wu, Liang
    Liu, Hanxin
    SCIENTIFIC REPORTS, 2022, 12 (01)