General lower bounds for evolutionary algorithms

Cited: 0
Authors: Teytaud, Olivier [1]; Gelly, Sylvain [1]
Affiliation: [1] Univ Paris Sud, CNRS, UMR 8623, LRI, TAO Inria, F-91405 Orsay, France
Source: PARALLEL PROBLEM SOLVING FROM NATURE - PPSN IX, PROCEEDINGS | 2006 / Vol. 4193
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP301 [Theory and Methods]
Discipline code: 081202
Abstract
Evolutionary optimization, which includes genetic optimization, is a general framework for optimization. It is known to be (i) easy to use, (ii) robust, (iii) derivative-free, and (iv) unfortunately slow. Recent work [8] in particular shows that the convergence rate of some widely used evolution strategies (evolutionary optimization for continuous domains) cannot be faster than linear (i.e. the logarithm of the distance to the optimum cannot decrease faster than linearly), and that the constant in the linear convergence (i.e. the constant C such that the distance to the optimum after n steps is upper bounded by C^n) unfortunately converges quickly to 1 as the dimension increases to infinity. We here show a very wide generalization of this result: all comparison-based algorithms have such a limitation. Note that our result also concerns methods like the Hooke & Jeeves algorithm, the simplex method, or any direct search method that only compares fitness values to previously seen values. But it does not cover methods that use the value of the fitness itself (see [5] for cases in which the fitness values are used), even if these methods do not use gradients. The former results deal with convergence with respect to the number of comparisons performed, and also cover a very wide family of algorithms with respect to the number of function evaluations. However, there is still room for faster convergence rates, for more original algorithms that use the full ranking information of the population and not only selections among the population. We prove that, at least in some particular cases, using the full ranking information can improve these lower bounds, and ultimately provide superlinear convergence results.
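The lower bound above applies to any algorithm that consults the fitness function only through comparisons. As an illustrative sketch (not taken from the paper), a minimal (1+1) evolution strategy with the classical 1/5th-success rule makes the comparison-based pattern concrete: the single place where the fitness is used is the test `fy < fx`, so the algorithm falls squarely inside the class the theorem covers. The sphere fitness, the step-size constants, and the parameter defaults here are arbitrary choices for the example.

```python
import math
import random

def sphere(x):
    """Sphere fitness: squared distance to the optimum at the origin."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim=10, sigma=0.5, steps=2000, seed=0):
    """Minimal comparison-based (1+1) evolution strategy with a
    1/5th-success step-size rule. The fitness values are never used
    directly; only the comparison fy < fx drives the search."""
    rng = random.Random(seed)
    x = [1.0] * dim
    fx = sphere(x)
    for _ in range(steps):
        # Gaussian mutation of the current search point
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = sphere(y)
        if fy < fx:                    # the ONLY use of the fitness: a comparison
            x, fx = y, fy
            sigma *= 1.1               # success: enlarge the step size
        else:
            sigma *= 1.1 ** (-0.25)    # failure: shrink (keeps ~1/5 success rate)
    return math.sqrt(fx)               # distance to the optimum

# Linear convergence: the log of the returned distance decreases
# roughly linearly in the number of steps, i.e. distance ~ C^n.
```

Rerunning the sketch with a larger `dim` under the same evaluation budget illustrates the dimension dependence stated in the abstract: the per-step progress in log-distance shrinks as the dimension grows, i.e. the constant C approaches 1.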
Pages: 21 - 31 (11 pages)
Related papers (50 in total)
  • [21] LOWER BOUNDS TO RANDOMIZED ALGORITHMS FOR GRAPH PROPERTIES
    YAO, ACC
    JOURNAL OF COMPUTER AND SYSTEM SCIENCES, 1991, 42 (03) : 267 - 287
  • [22] Adversary lower bounds for nonadaptive quantum algorithms
    Koiran, Pascal
    Landes, Juergen
    Portier, Natacha
    Yao, Penghui
    LOGIC, LANGUAGE, INFORMATION AND COMPUTATION, 2008, 5110 : 226 - +
  • [23] Testing Graph Clusterability: Algorithms and Lower Bounds
    Chiplunkar, Ashish
    Kapralov, Michael
    Khanna, Sanjeev
    Mousavifar, Aida
    Peres, Yuval
    2018 IEEE 59TH ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS), 2018, : 497 - 508
  • [24] Lower Bounds and Faster Algorithms for Equality Constraints
    Jonsson, Peter
    Lagerkvist, Victor
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 1784 - 1790
  • [25] Flows on few paths: Algorithms and lower bounds
    Martens, Maren
    Skutella, Martin
    NETWORKS, 2006, 48 (02) : 68 - 76
  • [26] Sketching Algorithms and Lower Bounds for Ridge Regression
    Kacham, Praneeth
    Woodruff, David P.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022, : 10539 - 10556
  • [27] LOWER BOUNDS OF TIME COMPLEXITY OF SOME ALGORITHMS
    HONG, J
    SCIENTIA SINICA, 1979, 22 (08): : 890 - 900
  • [28] Adversary lower bounds for nonadaptive quantum algorithms
    Koiran, Pascal
    Landes, Juergen
    Portier, Natacha
    Yao, Penghui
    JOURNAL OF COMPUTER AND SYSTEM SCIENCES, 2010, 76 (05) : 347 - 355
  • [29] ON LOWER BOUNDS OF TIME COMPLEXITY OF SOME ALGORITHMS
    Hong, Jiawei
    Science China Mathematics, 1979, (08) : 890 - 900
  • [30] Quantum algorithms and lower bounds for convex optimization
    Chakrabarti, Shouvanik
    Childs, Andrew M.
    Li, Tongyang
    Wu, Xiaodi
    QUANTUM, 2020, 4