LINEAR-PROGRAMMING, RECURRENT ASSOCIATIVE MEMORIES, AND FEEDFORWARD NEURAL NETWORKS

Cited by: 5
Authors:
MOORE, JE
KIM, M
SEO, JG
WU, Y
KALABA, R
Affiliations:
[1] UNIV SO CALIF, DEPT CIVIL ENGN, LOS ANGELES, CA 90089
[2] UNIV SO CALIF, DEPT BIOMED ENGN, LOS ANGELES, CA 90089
[3] UNIV SO CALIF, DEPT ELECT ENGN, LOS ANGELES, CA 90089
[4] UNIV SO CALIF, DEPT ECON, LOS ANGELES, CA 90089
DOI:
10.1016/0898-1221(91)90036-4
Chinese Library Classification: O29 [Applied Mathematics]
Discipline code: 070104
Abstract:
Many optimization procedures presume the availability of an initial approximation in the neighborhood of a local or global optimum. Unfortunately, finding a set of good starting conditions is itself a nontrivial proposition. Our previous papers [1,2] describe procedures that use simple and recurrent associative memories to identify approximate solutions to closely related linear programs. In this paper, we compare the performance of a recurrent associative memory to that of a feed-forward neural network trained with the same data. The neural network's performance is much less promising than that of the associative memory. Modest infeasibilities exist in the estimated solutions provided by the associative memory, but the basic variables defining the optimal solutions to the linear programs are readily apparent.
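The abstract's core idea — storing the solutions of previously solved, closely related linear programs in a recurrent associative memory and recalling an approximate solution (in particular, the set of basic variables) for a new instance — can be sketched with a Hopfield-style network. The pattern contents, problem size, and update rule below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product storage with zeroed self-connections."""
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=20):
    """Iterate synchronous sign updates until a fixed point (or step limit)."""
    s = cue.copy()
    for _ in range(steps):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

# Two orthogonal bipolar "basis indicator" patterns over 16 variables
# (+1 = variable basic at the optimum, -1 = nonbasic) -- hypothetical data.
p1 = np.array([1] * 8 + [-1] * 8)
p2 = np.array([1, -1] * 8)
W = train_hopfield(np.array([p1, p2]))

# Corrupt two entries of p1 (standing in for a "closely related" LP)
# and let the memory recall the stored basis pattern.
cue = p1.copy()
cue[3] = -cue[3]
cue[7] = -cue[7]
print(np.array_equal(recall(W, cue), p1))  # → True
```

The recalled pattern would then seed a conventional LP solver with a candidate basis; as the abstract notes, such recalled estimates may be mildly infeasible even when the basic variables are identified correctly.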
Pages: 71-90 (20 pages)