Convergence of supermemory gradient method

Cited by: 2
Authors
Shi Z.-J. [1 ,2 ,3 ,4 ]
Shen J. [1 ,4 ]
Affiliations
[1] Department of Computer and Information Science, University of Michigan, Dearborn
[2] College of Operations Research and Management, Qufu Normal University, Rizhao
[3] Department of Computer and Information Science, University of Michigan-Dearborn, Michigan
Source
J. Appl. Math. Comput. | 2007 / No. 1-2 / pp. 367-376
Funding
National Science Foundation (USA)
Keywords
Global convergence; Supermemory gradient method; Unconstrained optimization;
DOI
10.1007/BF02832325
Abstract
In this paper we consider the global convergence of a new supermemory gradient method for unconstrained optimization problems. A new trust region radius is proposed to make the method converge stably, which makes it suitable for solving large-scale minimization problems. Global convergence results are established under mild conditions. Numerical results show that the new method is effective and stable in practical computation. © 2007 Korean Society for Computational & Applied Mathematics and Korean SIGCAM.
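For orientation only: this record does not reproduce the paper's exact search direction or its new trust region radius, but a generic supermemory gradient step, with illustrative memory length $m$ and coefficients $\beta_{k,i}$ (both assumptions, not taken from the paper), has the form

% Generic supermemory gradient iteration (illustrative sketch, not the paper's exact scheme):
\[
  x_{k+1} = x_k + \alpha_k d_k, \qquad
  d_k = -g_k + \sum_{i=1}^{m} \beta_{k,i}\, d_{k-i}, \qquad
  g_k = \nabla f(x_k),
\]

where the step size $\alpha_k$ is limited by a trust-region-style radius and the coefficients $\beta_{k,i}$ are chosen so that $d_k$ stays a sufficient descent direction, e.g. $g_k^\top d_k \le -c\,\|g_k\|^2$ for some $c > 0$. The "supermemory" label refers to reusing several previous directions rather than only the most recent one.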
Pages: 367-376