An extended delayed weighted gradient algorithm for solving strongly convex optimization problems

Cited by: 1
Authors
Andreani, R. [1 ]
Oviedo, H. [2 ]
Raydan, M. [3 ]
Secchin, L. D. [4 ]
Affiliations
[1] Univ Estadual Campinas, Dept Appl Math, IMECC UNICAMP, BR-13083859 Campinas, SP, Brazil
[2] Fundacao Getulio Vargas FGV EMAp, Escola Matemat Aplicada, Rio De Janeiro, RJ, Brazil
[3] FCT NOVA, Ctr Math & Applicat NovaMath, P-2829516 Caparica, Portugal
[4] Univ Fed Espirito Santo, Dept Appl Math, Rodovia BR 101,Km 60, BR-29932540 Sao Mateus, ES, Brazil
Funding
São Paulo Research Foundation (FAPESP), Brazil;
Keywords
Gradient methods; Conjugate gradient methods; Strongly convex functions; Large-scale optimization; BARZILAI; SOFTWARE;
DOI
10.1016/j.cam.2022.114525
CLC Number
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
The recently developed delayed weighted gradient method (DWGM) is competitive with the well-known conjugate gradient (CG) method for the minimization of strictly convex quadratic functions. Like the CG method, DWGM has key optimality and orthogonality properties that justify its practical performance. The main difference from the CG method is that, instead of minimizing the objective function on the entire explored subspace, DWGM minimizes the 2-norm of the gradient vector on that subspace. The main purpose of this study is to extend DWGM to strongly convex nonquadratic minimization problems while keeping a low computational cost per iteration. We incorporate the scheme into a tolerant line-search globalization strategy and show that it exhibits q-linear convergence to the unique global solution. We compare the proposed extended DWGM with state-of-the-art methods for large-scale unconstrained minimization problems, using some well-known strongly convex test problems as well as regularized logistic regression problems that arise in machine learning. Our numerical results illustrate that the proposed scheme is promising and exhibits fast convergence. Moreover, numerical experiments on CUTEst problems show that the extended DWGM can be very effective in accelerating the convergence of a well-established Barzilai-Borwein-type method once the iterates get close to minimizers of non-convex functions. (C) 2022 Elsevier B.V. All rights reserved.
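As a rough illustration (not the authors' code), the quadratic-case DWGM iteration summarized in the abstract can be sketched as follows. This is a reader's reconstruction under stated assumptions: each iteration takes a minimal-gradient (Cauchy-like) step, then combines the result with the iterate from two steps back, choosing the combination weight `beta` so that the 2-norm of the new gradient is minimized along that line. The function name `dwgm` and the stopping tolerance are illustrative choices, not from the paper.

```python
import numpy as np

def dwgm(A, b, x0, tol=1e-10, max_iter=200):
    """Sketch of the delayed weighted gradient method for minimizing
    f(x) = 0.5 * x^T A x - b^T x with A symmetric positive definite.

    Assumed reconstruction of the quadratic-case scheme: a minimal-gradient
    inner step followed by a delayed combination with the iterate from two
    steps back that minimizes the gradient 2-norm along the connecting line.
    """
    x = np.asarray(x0, dtype=float).copy()
    g = A @ x - b                       # gradient of the quadratic
    x_prev, g_prev = x.copy(), g.copy() # "delayed" pair (x_{k-1}, g_{k-1})
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        w = A @ g
        t = (g @ w) / (w @ w)           # step minimizing ||g(x - t*g)||
        y = x - t * g                   # intermediate minimal-gradient point
        r = g - t * w                   # gradient at y
        d = g_prev - r
        dd = d @ d
        if dd == 0.0:                   # delayed direction degenerate: keep y
            x_prev, g_prev, x, g = x, g, y, r
            continue
        beta = (g_prev @ d) / dd        # minimizes ||g_prev + beta*(r - g_prev)||
        x_new = x_prev + beta * (y - x_prev)
        g_new = g_prev + beta * (r - g_prev)
        x_prev, g_prev = x, g           # shift the delayed pair
        x, g = x_new, g_new
    return x
```

Since the optimal `beta` can only shrink the gradient norm relative to `r`, and the inner step strictly shrinks it relative to `g`, the gradient 2-norms are monotonically nonincreasing, mirroring the monotonicity that motivates the nonquadratic extension in the paper.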
Pages: 19
Related Papers
50 records
  • [41] A Parallel Algorithm for Solving Large Convex Minimax Problems
    Arora, Ramnik
    Upadhyay, Utkarsh
    Tulshyan, Rupesh
    Dutta, J.
    SIMULATED EVOLUTION AND LEARNING, 2010, 6457: 35+
  • [42] A Gradient-free Penalty ADMM for Solving Distributed Convex Optimization Problems with Feasible Set Constraints
    Liu, Chenyang
    Dou, Xiaohua
    Cheng, Songsong
    Fan, Yuan
    2022 17TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2022, : 672 - 677
  • [44] Improved Gradient-Based Algorithm for Solving Aeroassisted Vehicle Trajectory Optimization Problems
    Chai, Runqi
    Savvaris, Al
    Tsourdos, Antonios
    Chai, Senchun
    Xia, Yuanqing
    JOURNAL OF GUIDANCE CONTROL AND DYNAMICS, 2017, 40 (08) : 2093 - 2101
  • [45] Solving Nonconvex Optimal Control Problems by Convex Optimization
    Liu, Xinfu
    Lu, Ping
    JOURNAL OF GUIDANCE CONTROL AND DYNAMICS, 2014, 37 (03) : 750 - 765
  • [46] On solving convex optimization problems with linear ascending constraints
    Wang, Zizhuo
    OPTIMIZATION LETTERS, 2015, 9: 819 - 838
  • [47] An improved three-term conjugate gradient algorithm for solving unconstrained optimization problems
    Deng, Songhai
    Wan, Zhong
    OPTIMIZATION, 2015, 64 (12) : 2679 - 2691
  • [48] LINEAR CONVERGENCE ANALYSIS FOR A NONMONOTONE PROJECTED GRADIENT ALGORITHM SOLVING MULTIOBJECTIVE OPTIMIZATION PROBLEMS
    Zhao, X. P.
    Jolaoso, L. O.
    Shehu, Y.
    Yao, J. -Ch.
    JOURNAL OF NONLINEAR AND CONVEX ANALYSIS, 2022, 23 (11) : 2663 - 2675
  • [49] SOLVING MULTIOBJECTIVE MIXED INTEGER CONVEX OPTIMIZATION PROBLEMS
    De Santis, Marianna
    Eichfelder, Gabriele
    Niebling, Julia
    Rocktaeschel, Stefan
    SIAM JOURNAL ON OPTIMIZATION, 2020, 30 (04) : 3122 - 3145
  • [50] An Optimal Algorithm for Bandit Convex Optimization with Strongly-Convex and Smooth Loss
    Ito, Shinji
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2229 - 2238