Deterministic convergence of complex mini-batch gradient learning algorithm for fully complex-valued neural networks

Cited by: 12
Authors:
Zhang, Huisheng [1 ]
Zhang, Ying [1 ]
Zhu, Shuai [1 ]
Xu, Dongpo [2 ]
Affiliations:
[1] Dalian Maritime Univ, Sch Sci, Dalian 116026, Peoples R China
[2] Northeast Normal Univ, Sch Math & Stat, Changchun 130024, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords:
Fully complex-valued neural networks; Mini-batch gradient algorithm; Convergence; Wirtinger calculus; BACKPROPAGATION ALGORITHM; PERFORMANCE BOUNDS; MOMENTUM; BOUNDEDNESS; ESTIMATORS; LMS;
DOI
10.1016/j.neucom.2020.04.114
CLC number:
TP18 [Artificial Intelligence Theory]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
This paper investigates the fully complex mini-batch gradient algorithm for training complex-valued neural networks. The mini-batch gradient method is widely used in neural network training; however, its convergence analysis is usually restricted to real-valued neural networks and is probabilistic in nature. By introducing a new Taylor mean value theorem for analytic functions, this paper establishes deterministic convergence results for the fully complex mini-batch gradient algorithm under mild conditions. Deterministic convergence here means that the algorithm converges with certainty rather than in probability; both weak convergence and strong convergence are proved. Benefiting from the newly introduced mean value theorem, the results are global in nature: they hold for arbitrarily chosen initial weight values. The theoretical findings are validated with a simulation example. (C) 2020 Elsevier B.V. All rights reserved.
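To make the setting concrete, below is a minimal sketch of the kind of update the abstract describes: mini-batch gradient descent for a single fully complex-valued neuron with an analytic activation, using the Wirtinger-calculus steepest-descent rule w ← w − η ∂E/∂w̄. Everything here is an illustrative assumption rather than the paper's experimental setup: the tanh activation, the mean-squared cost, the synthetic teacher data, and all names (f, f_prime, eta, batch_size).

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    """Analytic (fully complex) activation: tanh."""
    return np.tanh(z)

def f_prime(z):
    """Complex derivative of tanh, well defined because tanh is analytic."""
    return 1.0 - np.tanh(z) ** 2

# Synthetic complex-valued data from a hypothetical teacher neuron.
n, d = 200, 3
X = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
w_true = rng.standard_normal(d) + 1j * rng.standard_normal(d)
D = f(X @ w_true)

w = np.zeros(d, dtype=complex)  # arbitrary initial weights (the results are global)
eta, batch_size, epochs = 0.05, 20, 100

for epoch in range(epochs):
    perm = rng.permutation(n)
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        Xb, Db = X[idx], D[idx]
        net = Xb @ w
        e = Db - f(net)  # mini-batch errors
        # Wirtinger gradient (conjugate cogradient) of E = mean(|e|^2) w.r.t.
        # conj(w); only this term survives because e is analytic in w.
        grad = -(e * np.conj(f_prime(net))) @ np.conj(Xb) / len(idx)
        w = w - eta * grad  # steepest descent in the Wirtinger sense
```

Since the cost is real-valued while the weights are complex, the steepest-descent direction is the negative derivative with respect to conj(w), consistent with the Wirtinger-calculus setting named in the keywords (any constant factor is absorbed into the learning rate).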
Pages: 185-193 (9 pages)
Related papers (50 in total; items [21]-[30] shown):
  • [21] Is a Complex-Valued Stepsize Advantageous in Complex-Valued Gradient Learning Algorithms?
    Zhang, Huisheng
    Mandic, Danilo P.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2016, 27 (12) : 2730 - 2735
  • [22] Deterministic Mini-batch Sequencing for Training Deep Neural Networks
    Banerjee, Subhankar
    Chakraborty, Shayok
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 6723 - 6731
  • [23] Complex-valued neural networks
    Department of Electrical Engineering and Information Systems, University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
    IEEJ Trans. Electron. Inf. Syst., 1 (2-8)
  • [24] Fully coupled and feedforward neural networks with complex-valued neurons
    Zurada, Jacek M.
    Aizenberg, Igor
    ADVANCES IN INTELLIGENT AND DISTRIBUTED COMPUTING, 2008, 78 : 41 - +
  • [25] A Structural Optimization Algorithm for Complex-Valued Neural Networks
    Dong, Zhongying
    Huang, He
    2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019, : 1530 - 1535
  • [26] A complex-valued RTRL algorithm for recurrent neural networks
    Goh, SL
    Mandic, DP
    NEURAL COMPUTATION, 2004, 16 (12) : 2699 - 2713
  • [27] Convergence Analysis of Three Classes of Split-Complex Gradient Algorithms for Complex-Valued Recurrent Neural Networks
    Xu, Dongpo
    Zhang, Huisheng
    Liu, Lijun
    NEURAL COMPUTATION, 2010, 22 (10) : 2655 - 2677
  • [28] Improving Gradient Regularization using Complex-Valued Neural Networks
    Yeats, Eric
    Chen, Yiran
    Li, Hai
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [29] Enhanced Gradient Descent Algorithms for Complex-Valued Neural Networks
    Popa, Calin-Adrian
    16TH INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING (SYNASC 2014), 2014, : 272 - 279
  • [30] Complex-valued Function Approximation using a Fully Complex-valued RBF (FC-RBF) Learning Algorithm
    Savitha, R.
    Suresh, S.
    Sundararajan, N.
    IJCNN: 2009 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2009 : 320 - +