Incremental particle swarm optimization for large-scale dynamic optimization with changing variable interactions

Cited by: 6
Authors
Liu, Xiao-Fang [1 ,2 ]
Zhan, Zhi-Hui [3 ,4 ]
Zhang, Jun [3 ,5 ,6 ]
Affiliations
[1] Nankai Univ, Inst Robot & Automatic Informat Syst, Coll Artificial Intelligence, Tianjin 300350, Peoples R China
[2] Nankai Univ, Tianjin Key Lab Intelligent Robot, Tianjin 300350, Peoples R China
[3] Nankai Univ, Coll Artificial Intelligence, Tianjin 300350, Peoples R China
[4] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[5] Zhejiang Normal Univ, Jinhua 321004, Peoples R China
[6] Hanyang Univ, Ansan 15588, South Korea
Funding
National Natural Science Foundation of China;
Keywords
Dynamic optimization; Particle swarm optimization; Evolutionary computation; Information reuse; DIFFERENTIAL EVOLUTION; COEVOLUTION; STRATEGY; MEMORY; OPTIMA;
DOI
10.1016/j.asoc.2023.110320
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cooperative coevolutionary algorithms have been developed for large-scale dynamic optimization problems via divide-and-conquer mechanisms: interacting decision variables are grouped into the same subproblem for optimization. Their performance depends heavily on the problem decomposition and on the ability to respond to environmental changes. However, existing algorithms usually adopt offline decomposition and hence cannot adapt to changes in the underlying interaction structure of the decision variables. Quick online decomposition therefore becomes a crucial issue, along with solution reconstruction for the new subproblems. This paper proposes an incremental particle swarm optimization to address these two issues. In the proposed method, the incremental differential grouping obtains accurate groupings by iteratively performing edge contractions on the interaction graph of historical groups. A recombination-based sampling strategy is developed to generate high-quality solutions for new subproblems from historical solutions. To cope with the multimodal nature of the problem, swarms are restarted after convergence to search for multiple high-quality solutions. Experimental results on problem instances with up to 1000 decision variables show the superiority of the proposed method over state-of-the-art algorithms in terms of solution optimality. The incremental differential grouping obtains accurate groupings using fewer function evaluations. © 2023 Elsevier B.V. All rights reserved.
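The incremental differential grouping summarized above extends the classical differential-grouping idea: two variables interact if perturbing one changes the effect that perturbing the other has on the objective, and interacting variables are then placed in the same subproblem. The sketch below illustrates only this underlying pairwise test and the resulting grouping via connected components (union-find); it is not the paper's incremental, edge-contraction-based algorithm, and all function and parameter names are illustrative.

```python
import itertools

def interacts(f, x, i, j, delta=1.0, eps=1e-6):
    """Classical differential-grouping test: i and j interact if
    perturbing j changes the effect of perturbing i on f."""
    xi = list(x); xi[i] += delta          # x + delta * e_i
    xj = list(x); xj[j] += delta          # x + delta * e_j
    xij = list(xi); xij[j] += delta       # x + delta * (e_i + e_j)
    d1 = f(xi) - f(x)
    d2 = f(xij) - f(xj)
    return abs(d1 - d2) > eps

def group_variables(f, n, x0=None):
    """Group interacting variables as connected components of the
    pairwise interaction graph, tracked with union-find."""
    x0 = x0 or [0.0] * n
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i, j in itertools.combinations(range(n), 2):
        if interacts(f, x0, i, j):
            parent[find(i)] = find(j)      # merge the two groups
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# Partially separable toy function: x0 and x1 interact, x2 is separable.
f = lambda x: x[0] * x[1] + x[2] ** 2
print(group_variables(f, 3))  # -> [[0, 1], [2]]
```

This static version probes every variable pair from scratch at a cost of O(n^2) function evaluations; the paper's incremental variant avoids that by reusing the interaction graph of historical groups when the environment changes.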
Pages: 17