EFFICIENT REGULARIZED PROXIMAL QUASI-NEWTON METHODS FOR LARGE-SCALE NONCONVEX COMPOSITE OPTIMIZATION PROBLEMS

Cited by: 0
Authors
Kanzow, Christian [1 ]
Lechner, Theresa [1 ]
Affiliations
[1] Univ Wurzburg, Inst Math, Emil Fischer Str 30, D-97074 Wurzburg, Germany
Source
PACIFIC JOURNAL OF OPTIMIZATION | 2024, Vol. 20, No. 3
Keywords
composite minimization; regularization; quadratic approximation; proximal quasi-Newton method; global convergence; limited memory methods; proximity operator; local error bound; QUASI-NEWTON MATRICES; CONVEX-OPTIMIZATION; GRADIENT METHODS; LINE-SEARCH; ALGORITHM; MINIMIZATION; SHRINKAGE; LASSO; SUM
DOI
10.61208/pjo-2023-036
Chinese Library Classification
C93 [Management Science]; O22 [Operations Research]
Subject Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
Optimization problems with composite functions have an objective that is the sum of a smooth term and a (convex) nonsmooth term. This structure is exploited by the class of proximal gradient methods and by some of their generalizations, such as proximal Newton and quasi-Newton methods. In this paper, we propose a regularized proximal quasi-Newton method whose main features are: (a) the method is globally convergent to stationary points; (b) globalization is controlled by a regularization parameter, so no line search is required; (c) the method can be implemented very efficiently based on a simple observation that combines recent ideas for the computation of quasi-Newton proximity operators with compact representations of limited-memory quasi-Newton updates. Numerical examples for the solution of convex and nonconvex composite optimization problems indicate that the method outperforms several existing methods.
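To illustrate the idea described in the abstract, the sketch below implements only the simplest special case: a regularized proximal *gradient* step (quasi-Newton matrix replaced by zero) for a lasso-type problem. The regularization parameter `mu` plays the globalization role that a line search would otherwise play: a trial step is accepted only if it sufficiently decreases the composite objective, and otherwise `mu` is increased. All problem data, parameter names, and constants here are illustrative assumptions, not the paper's actual algorithm or test setup.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def regularized_prox_gradient(A, b, lam, x0, mu0=1.0, sigma=1e-4,
                              max_iter=200, tol=1e-8):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 without a line search.

    Illustrative sketch only: the B = 0 (proximal gradient) special case
    of a regularized proximal quasi-Newton scheme.
    """
    f = lambda x: 0.5 * np.dot(A @ x - b, A @ x - b)   # smooth part
    g = lambda x: lam * np.sum(np.abs(x))              # nonsmooth part
    x, mu = x0.copy(), mu0
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)
        # Trial step: minimizer of the mu-regularized quadratic model
        # plus the nonsmooth term, computed via the proximity operator.
        x_trial = soft_threshold(x - grad / mu, lam / mu)
        d = x_trial - x
        if np.linalg.norm(d) < tol:
            break
        # Accept on sufficient decrease of the composite objective;
        # otherwise enlarge the regularization parameter and retry.
        if f(x_trial) + g(x_trial) <= f(x) + g(x) - sigma * mu * np.dot(d, d):
            x, mu = x_trial, max(mu / 2.0, 1e-4)
        else:
            mu *= 2.0
    return x
```

In the full method of the paper, the quadratic model additionally carries a limited-memory quasi-Newton matrix, and the resulting scaled proximal subproblem is solved efficiently via the compact representation of the updates; this sketch only shows the regularization-controlled globalization mechanism.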
Pages: 32