Superiority combination learning distributed particle swarm optimization for large-scale optimization

Cited by: 13
Authors
Wang, Zi-Jia [1 ]
Yang, Qiang [2 ]
Zhang, Yu-Hui [3]
Chen, Shu-Hong [1 ]
Wang, Yuan-Gen [1]
Affiliations
[1] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Sch Artificial Intelligence, Nanjing 210044, Peoples R China
[3] Dongguan Univ Technol, Sch Comp Sci & Technol, Dongguan, Peoples R China
Keywords
Superiority combination learning strategy; Particle swarm optimization; Large-scale optimization; Master-slave multi-subpopulation distributed; COOPERATIVE COEVOLUTION; EVOLUTIONARY
DOI
10.1016/j.asoc.2023.110101
CLC number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Large-scale optimization problems (LSOPs) have become increasingly significant and challenging in the evolutionary computation (EC) community. This article proposes a superiority combination learning distributed particle swarm optimization (SCLDPSO) algorithm for LSOPs. In the algorithm design, a master-slave multi-subpopulation distributed model is adopted, which enables full communication and information exchange among different subpopulations and thereby enhances diversity. Moreover, a superiority combination learning (SCL) strategy is proposed, in which each worse particle in a poor-performance subpopulation randomly selects two well-performance subpopulations containing better particles to learn from. In the learning process, each selected well-performance subpopulation generates a learning particle by merging different dimensions of different particles, thereby combining the superiorities of all particles in that subpopulation. The worse particle can significantly improve itself by learning from these two superiority combination particles, leading to a more successful search. Experimental results show that SCLDPSO performs better than, or at least comparably with, other state-of-the-art large-scale optimization algorithms on both the CEC2010 and CEC2013 large-scale optimization test suites, including the winner of the competition on large-scale optimization. In addition, extended experiments with dimensions increased to 2000 demonstrate the scalability of SCLDPSO. Finally, an application to large-scale portfolio optimization problems further illustrates the applicability of SCLDPSO. (c) 2023 Elsevier B.V. All rights reserved.
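The abstract describes the SCL exemplar construction only at a high level. The following is a minimal Python sketch of that idea under simplifying assumptions: the per-dimension donor selection, the PSO-style velocity update with coefficients w, c1, c2, the toy sphere objective, and the helper names combination_particle and scl_update are illustrative assumptions rather than the paper's exact formulation, and the master-slave distributed communication between subpopulations is omitted.

```python
# Illustrative sketch of superiority combination learning (SCL), not the
# paper's exact algorithm: a worse particle learns from two exemplars, each
# built by merging dimensions of particles in a well-performing subpopulation.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Placeholder objective (minimization); the paper uses CEC2010/CEC2013 suites.
    return float(np.sum(x**2))

def combination_particle(subpop):
    """Build one 'superiority combination' exemplar: each dimension is copied
    from a randomly chosen particle of the well-performing subpopulation."""
    n, d = subpop.shape
    donors = rng.integers(0, n, size=d)        # one donor particle per dimension
    return subpop[donors, np.arange(d)]

def scl_update(worse_pos, worse_vel, well_subpops, w=0.7, c1=1.5, c2=1.5):
    """Move a worse particle toward two combination exemplars drawn from two
    randomly selected well-performing subpopulations (assumed PSO-style update)."""
    idx = rng.choice(len(well_subpops), size=2, replace=False)
    e1 = combination_particle(well_subpops[idx[0]])
    e2 = combination_particle(well_subpops[idx[1]])
    r1, r2 = rng.random(worse_pos.shape), rng.random(worse_pos.shape)
    new_vel = w * worse_vel + c1 * r1 * (e1 - worse_pos) + c2 * r2 * (e2 - worse_pos)
    return worse_pos + new_vel, new_vel

# Toy usage: 4 subpopulations of 10 particles in 20 dimensions.
dim, subpop_size, n_subpops = 20, 10, 4
subpops = [rng.uniform(-5, 5, size=(subpop_size, dim)) for _ in range(n_subpops)]
# Rank subpopulations by their best fitness; treat the worst-ranked one as "poor".
ranked = sorted(range(n_subpops), key=lambda i: min(sphere(p) for p in subpops[i]))
poor, well = ranked[-1], [subpops[i] for i in ranked[:-1]]
pos, vel = subpops[poor][0], np.zeros(dim)
new_pos, new_vel = scl_update(pos, vel, well)
print("fitness before:", sphere(pos), "after one SCL step:", sphere(new_pos))
```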
Pages: 16
Related papers
50 records in total
  • [1] Adaptive Granularity Learning Distributed Particle Swarm Optimization for Large-Scale Optimization
    Wang, Zi-Jia
    Zhan, Zhi-Hui
    Kwong, Sam
    Jin, Hu
    Zhang, Jun
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (03) : 1175 - 1188
  • [2] Heterogeneous cognitive learning particle swarm optimization for large-scale optimization problems
    Zhang, En
    Nie, Zihao
    Yang, Qiang
    Wang, Yiqiao
    Liu, Dong
    Jeon, Sang-Woon
    Zhang, Jun
    INFORMATION SCIENCES, 2023, 633 : 321 - 342
  • [3] Bi-directional learning particle swarm optimization for large-scale optimization
    Liu, Shuai
    Wang, Zi-Jia
    Wang, Yuan-Gen
    Kwong, Sam
    Zhang, Jun
    APPLIED SOFT COMPUTING, 2023, 149
  • [4] Multiple-strategy learning particle swarm optimization for large-scale optimization problems
    Wang, Hao
    Liang, Mengnan
    Sun, Chaoli
    Zhang, Guochen
    Xie, Liping
    COMPLEX & INTELLIGENT SYSTEMS, 2021, 7 (01) : 1 - 16
  • [5] Dynamic Group Learning Distributed Particle Swarm Optimization for Large-Scale Optimization and Its Application in Cloud Workflow Scheduling
    Wang, Zi-Jia
    Zhan, Zhi-Hui
    Yu, Wei-Jie
    Lin, Ying
    Zhang, Jie
    Gu, Tian-Long
    Zhang, Jun
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (06) : 2715 - 2729
  • [6] Gene Targeting Particle Swarm Optimization for Large-Scale Optimization Problem
    Tang, Zhi-Fan
    Luo, Liu-Yue
    Xu, Xin-Xin
    Li, Jian-Yu
    Xu, Jing
    Zhong, Jing-Hui
    Zhang, Jun
    Zhan, Zhi-Hui
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 620 - 625
  • [7] Cooperative Particle Swarm Optimization Decomposition Methods for Large-scale Optimization
    Clark, Mitchell
    Ombuki-Berman, Beatrice
    Aksamit, Nicholas
    Engelbrecht, Andries
    2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2022, : 1582 - 1591
  • [8] A reinforcement learning level-based particle swarm optimization algorithm for large-scale optimization
    Wang, Feng
    Wang, Xujie
    Sun, Shilei
    INFORMATION SCIENCES, 2022, 602 : 298 - 312
  • [9] A Distributed Quantum-Behaved Particle Swarm Optimization Using Opposition-Based Learning on Spark for Large-Scale Optimization Problem
    Zhang, Zhaojuan
    Wang, Wanliang
    Pan, Gaofeng
    MATHEMATICS, 2020, 8 (11) : 1 - 21