Training winner-take-all simultaneous recurrent neural networks

Citations: 10
Authors
Cai, Xindi [1]
Prokhorov, Danil V. [2]
Wunsch, Donald C., II [3]
Affiliations
[1] Amer Power Convers Corp, O'Fallon, MO 63368 USA
[2] Toyota Tech Ctr, Ann Arbor, MI 48105 USA
[3] Univ Missouri, Appl Computat Intelligence Lab, Dept Elect & Comp Engn, Rolla, MO 65409 USA
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2007, Vol. 18, No. 3
Funding
U.S. National Science Foundation;
Keywords
backpropagation through time (BPTT); extended Kalman filter (EKF); simultaneous recurrent network (SRN); winner-take-all (WTA);
DOI
10.1109/TNN.2007.891685
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The winner-take-all (WTA) network is useful in database management, very large scale integration (VLSI) design, and digital processing. The synthesis procedure for WTA on a single-layer, fully connected architecture with a sigmoid transfer function is still not fully explored. We discuss the use of simultaneous recurrent networks (SRNs) trained by Kalman filter algorithms for the task of finding the maximum among N numbers. Simulations demonstrate the effectiveness of our training approach with a shared-weight SRN architecture. A more general SRN also succeeds in solving a real classification application on car engine data.
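
As a rough illustration of the WTA task described in the abstract, the sketch below relaxes a single-layer, fully connected recurrent network with a sigmoid transfer function toward a winner-take-all state, so that only the unit receiving the largest input saturates. The function name wta_relax and all parameter values (self-excitation, lateral inhibition, gain, step size) are hand-picked assumptions for illustration; the paper instead learns shared weights for such a network via BPTT and EKF training.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wta_relax(x, self_exc=1.0, lateral_inh=2.0, gain=4.0, step=0.1, n_iters=300):
    """Relax a single-layer, fully connected recurrent net with a sigmoid
    transfer function toward a winner-take-all fixed point.
    Weights are shared across units (one self-excitation value, one
    lateral-inhibition value) and hand-chosen here for illustration only;
    they are NOT the trained weights from the paper.
    """
    x = np.asarray(x, dtype=float)
    u = x.copy()                       # internal state, initialised from the external inputs
    for _ in range(n_iters):
        y = sigmoid(gain * u)          # unit outputs
        # each unit: external input + self-excitation - inhibition from all other units
        drive = x + self_exc * y - lateral_inh * (y.sum() - y)
        u += step * (drive - u)        # small relaxation step toward the recurrent drive
    return sigmoid(gain * u)

out = wta_relax([0.2, 0.9, 0.4, 0.7])
print(np.round(out, 3), "winner:", int(out.argmax()))   # the unit fed 0.9 saturates near 1

With these hand-tuned parameters the lateral inhibition exceeds the largest possible input, so at the fixed point only one unit can stay active; the small relaxation step keeps the discrete-time iteration from oscillating.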
Pages: 674-684
Number of pages: 11