Realtime learning algorithm for recurrent neural networks

Cited: 0
Authors
[1] Uchiyama, Tadasu
[2] Shimohara, Katsunori
Institutions
Source
1600 / Issue 22
Keywords
Computer Metatheory; Computational Complexity; Computer Systems, Digital; Real Time Operation; Mathematical Techniques; Algorithms; Neural Networks
DOI
Not available
Chinese Library Classification (CLC)
Subject Classification
Abstract
In recent years, recurrent networks have attracted attention as a neural-network mechanism for processing spatiotemporal patterns. Learning algorithms based on the steepest-descent method have been proposed by Williams and Zipser and by Pearlmutter; however, the algorithm of Williams and Zipser becomes computationally expensive for large-scale networks, and Pearlmutter's algorithm cannot perform learning in real time. In this paper, gradients of the objective functionals are derived by the calculus of variations, and a real-time method for computing these gradients is proposed. With this method, real-time learning with lower computational complexity than conventional learning algorithms is expected to become possible.
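The record does not include the derivation itself, so the following is a minimal sketch of real-time recurrent learning (RTRL) in the style of Williams and Zipser, the baseline the abstract compares against, rather than the authors' variational method. The network size, bias, target signal, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch of real-time recurrent learning (RTRL) in the style of
# Williams and Zipser. All hyperparameters below are assumptions for the demo.

rng = np.random.default_rng(0)
n_units, n_steps, lr = 3, 200, 0.05

W = rng.normal(scale=0.5, size=(n_units, n_units))    # recurrent weights
b = rng.normal(scale=0.5, size=n_units)                # fixed bias (not trained)
y = rng.uniform(-0.5, 0.5, size=n_units)               # unit activations
# Sensitivities p[i, j, k] = d y_k / d W_ij, carried forward in time.
p = np.zeros((n_units, n_units, n_units))

target = 0.5 * np.sin(0.1 * np.arange(n_steps))        # teach unit 0 a sinusoid

for t in range(n_steps):
    net = W @ y + b
    y_new = np.tanh(net)
    dphi = 1.0 - y_new ** 2                            # tanh'(net)

    # Sensitivity recursion:
    # p_new[i, j, k] = dphi[k] * (delta_{ik} * y[j] + sum_l W[k, l] * p[i, j, l])
    p_new = np.einsum('k,kl,ijl->ijk', dphi, W, p)
    for i in range(n_units):
        p_new[i, :, i] += dphi[i] * y
    p, y = p_new, y_new

    # Squared error on unit 0; the gradient is available at every step,
    # so the weights can be updated online ("real-time" learning).
    e = y[0] - target[t]
    W -= lr * e * p[:, :, 0]
```

Because every weight carries a full sensitivity tensor, the update costs on the order of n^4 operations per time step for a fully connected network of n units, which is the scaling problem the abstract attributes to the Williams and Zipser algorithm; the paper's variational, real-time gradient computation is proposed as a less expensive alternative.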