Recalling of many-valued functions by successive iteration on bottleneck networks

Cited: 0
Authors
Hiraoka, K [1 ]
Yoshizawa, S [1 ]
Affiliations
[1] Univ Tokyo, Dept Informat Engn, Bunkyo Ku, Tokyo 113, Japan
Keywords
autoassociative neural network; bottleneck network; many-valued function; successive iteration; spurious memory;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In the present study, we treat the learning of many-valued functions by bottleneck networks. This technique has two advantages: (1) the multiplicity (the number of branches) does not have to be decided a priori, and (2) it extends easily to high-dimensional cases. In earlier work on the same topic [6], the relaxation method was used for recall, and it was reported to require too many steps to converge. The present study shows that the successive iteration method works better. This assertion rests on the fact that the bottleneck network is equivalent to an orthogonal projection onto a surface. In our simulations, recall is more than four times faster than with the relaxation method.
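The recall scheme described in the abstract (train an autoassociative bottleneck network on joint input-output samples, then recall an output by repeatedly passing the vector through the network while clamping the given input component) can be illustrated with a small example. The sketch below is not the authors' implementation: it assumes scikit-learn's MLPRegressor as a stand-in for the bottleneck network, uses a two-valued toy function y = ±sqrt(1 - x^2), and the architecture, sample size, and tolerance are illustrative choices.

```python
# Minimal sketch of successive-iteration recall on a bottleneck (autoassociative)
# network. Assumptions: MLPRegressor stands in for the bottleneck network;
# the circle data and the (32, 1, 32) architecture are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training data: points on the unit circle, i.e. a two-valued function x -> y.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
Z = np.column_stack([np.cos(theta), np.sin(theta)])  # rows are (x, y) pairs

# Autoassociative network with a one-unit bottleneck, trained to reproduce its input.
net = MLPRegressor(hidden_layer_sizes=(32, 1, 32), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(Z, Z)

def recall(x_given, y_init, n_iter=50, tol=1e-6):
    """Successive iteration: map (x, y) through the network, clamp x, repeat."""
    y = y_init
    for _ in range(n_iter):
        z_next = net.predict(np.array([[x_given, y]]))[0]
        if abs(z_next[1] - y) < tol:
            break
        y = z_next[1]  # keep the input component clamped to the query value
    return y

# Different initial guesses for y recover different branches of the function.
x_query = 0.5
print("upper branch:", recall(x_query, +1.0))  # expected near +sqrt(1 - 0.25) ≈ 0.87
print("lower branch:", recall(x_query, -1.0))  # expected near -0.87
```

In this sketch the fixed points of the iteration are points of the learned surface whose input component equals the query, so starting from different initial guesses selects different branches of the many-valued function; the exact architecture and recall dynamics of the paper itself are not reproduced here.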
Pages: 1389-1392 (4 pages)