Deterministic convergence of complex mini-batch gradient learning algorithm for fully complex-valued neural networks

Cited by: 12
Authors
Zhang, Huisheng [1 ]
Zhang, Ying [1 ]
Zhu, Shuai [1 ]
Xu, Dongpo [2 ]
Affiliations
[1] Dalian Maritime Univ, Sch Sci, Dalian 116026, Peoples R China
[2] Northeast Normal Univ, Sch Math & Stat, Changchun 130024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Fully complex-valued neural networks; Mini-batch gradient algorithm; Convergence; Wirtinger calculus; BACKPROPAGATION ALGORITHM; PERFORMANCE BOUNDS; MOMENTUM; BOUNDEDNESS; ESTIMATORS; LMS;
DOI
10.1016/j.neucom.2020.04.114
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper investigates the fully complex mini-batch gradient algorithm for training complex-valued neural networks. The mini-batch gradient method has been widely used in neural network training; however, its convergence analysis is usually restricted to real-valued neural networks and is probabilistic in nature. By introducing a new Taylor mean value theorem for analytic functions, this paper establishes deterministic convergence results for the fully complex mini-batch gradient algorithm under mild conditions. Deterministic convergence here means that the algorithm is guaranteed to converge, not merely with some probability, and both weak convergence and strong convergence are proved. Benefiting from the newly introduced mean value theorem, our results are global in nature, in that they are valid for arbitrarily given initial values of the weights. The theoretical findings are validated with a simulation example. (C) 2020 Elsevier B.V. All rights reserved.
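To make the update rule concrete, below is a minimal NumPy sketch, under stated assumptions, of one fully complex mini-batch gradient step. For a real-valued loss E of complex weights w, Wirtinger calculus gives the steepest-descent update w <- w - eta * dE/d(conj(w)), i.e. a step along the negative conjugate cogradient. The model here (a single neuron with the analytic activation tanh and a mean squared error loss) and all names (minibatch_step, eta, the synthetic data) are illustrative assumptions, not the paper's model, notation, or experiment.

```python
import numpy as np

# Hedged sketch: mini-batch gradient descent for one fully complex-valued
# neuron y = f(x^T w) with the analytic activation f = tanh, trained on
# E = (1/2m) * sum_k |d_k - y_k|^2 over a mini-batch of size m.
# Since z = x^T w is holomorphic in w, the conjugate Wirtinger gradient is
#     dE/d(conj(w)) = -(1/2m) * sum_k e_k * conj(f'(z_k)) * conj(x_k),
# and the descent update is w <- w - eta * dE/d(conj(w)).

rng = np.random.default_rng(0)

def f(z):
    """Fully complex (analytic) activation."""
    return np.tanh(z)

def f_prime(z):
    """Derivative of tanh, also analytic."""
    return 1.0 - np.tanh(z) ** 2

def minibatch_step(w, X, d, eta):
    """One complex mini-batch gradient update on the weight vector w."""
    z = X @ w                      # pre-activations, shape (m,)
    e = d - f(z)                   # complex errors, shape (m,)
    m = len(d)
    # Conjugate Wirtinger gradient dE/d(conj(w)), shape (n,):
    grad_conj = -(X.conj().T @ (e * np.conj(f_prime(z)))) / (2 * m)
    return w - eta * grad_conj

# Synthetic demo (illustrative data): recover a random target mapping.
n, N, batch = 4, 200, 20
X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2 * n)
w_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
d = f(X @ w_true)

w = np.zeros(n, dtype=complex)
for epoch in range(200):
    for start in range(0, N, batch):
        sl = slice(start, start + batch)
        w = minibatch_step(w, X[sl], d[sl], eta=0.5)

loss = 0.5 * np.mean(np.abs(d - f(X @ w)) ** 2)
print(f"final mean squared error: {loss:.3e}")  # should be close to zero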
Pages: 185 - 193
Number of pages: 9
Related Papers
(50 records in total)
  • [31] A data-reusing gradient descent algorithm for complex-valued recurrent neural networks
    Goh, SL
    Mandic, DP
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 2, PROCEEDINGS, 2003, 2774 : 340 - 350
  • [32] A new learning algorithm with logarithmic performance index for complex-valued neural networks
    Savitha, R.
    Suresh, S.
    Sundararajan, N.
    Saratchandran, P.
    NEUROCOMPUTING, 2009, 72 (16-18) : 3771 - 3781
  • [33] Online censoring-based learning algorithms for fully complex-valued neural networks
    Menguc, Engin Cemal
    Mandic, Danilo P.
    NEUROCOMPUTING, 2025, 623
  • [34] Improving the capacity of complex-valued neural networks with a modified gradient descent learning rule
    Lee, DL
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2001, 12 (02) : 439 - 443
  • [35] Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode
    Park, Hyeyoung
    Lee, Kwanyong
    APPLIED SCIENCES-BASEL, 2019, 9 (21)
  • [36] Complex-Valued Logic for Neural Networks
    Kagan, Evgeny
    Rybalov, Alexander
    Yager, Ronald
    2018 IEEE INTERNATIONAL CONFERENCE ON THE SCIENCE OF ELECTRICAL ENGINEERING IN ISRAEL (ICSEE), 2018
  • [37] A Fully Complex-Valued Neural Network for Rapid Solution of Complex-Valued Systems of Linear Equations
    Xiao, Lin
    Meng, Weiwei
    Lu, Rongbo
    Yang, Xi
    Liao, Bolin
    Ding, Lei
    ADVANCES IN NEURAL NETWORKS - ISNN 2015, 2015, 9377 : 444 - 451
  • [38] Natural Gradient Descent for Training Stochastic Complex-Valued Neural Networks
    Nitta, Tohru
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2014, 5 (07) : 193 - 198
  • [39] A hybrid complex spectral conjugate gradient learning algorithm for complex-valued data processing
    Zhang, Ke
    Zhang, Huisheng
    Wang, Xue
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [40] Complex-Valued Feedforward Neural Networks Learning Without Backpropagation
    Guo, Wei
    Huang, He
    Huang, Tingwen
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT IV, 2017, 10637 : 100 - 107