In the present study, we address the learning of many-valued functions with bottleneck networks. This approach has two advantages: (1) the multiplicity need not be specified a priori, and (2) it extends easily to high-dimensional cases. In earlier work on the same topic [6], the relaxation method was used for recall, and it was reported to require too many steps to converge. The present study shows that the successive iteration method performs better. This claim rests on the fact that a bottleneck network is equivalent to an orthogonal projection onto a surface. In our simulations, the recall process is more than four times faster than with the relaxation method.
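To make the recall procedure concrete, the following is a minimal sketch of recall by successive iteration, under stated assumptions: since a bottleneck network acts as an orthogonal projection onto a surface, a stand-in projection onto the unit circle (which encodes the two-valued function y = ±sqrt(1 − x²)) takes the place of a trained network. The names `autoencode` and `recall` and the circle example are illustrative assumptions, not the original work's network.

```python
import numpy as np

def autoencode(v):
    """Stand-in for a trained bottleneck network (assumption): the ideal
    network projects a point orthogonally onto the learned surface,
    here the unit circle."""
    return v / np.linalg.norm(v)

def recall(x_query, y_init, n_steps=100, tol=1e-8):
    """Recall y for a given x by successive iteration: pass the current
    point through the network, keep its output for y, and clamp the
    known coordinate back to x_query on every step."""
    y = y_init
    for _ in range(n_steps):
        y_new = autoencode(np.array([x_query, y]))[1]
        if abs(y_new - y) < tol:  # stop once the iteration has settled
            break
        y = y_new
    return y

# The circle stores the two-valued mapping x -> ±sqrt(1 - x^2);
# the sign of the initial guess selects which branch is recalled.
print(recall(x_query=0.5, y_init=+0.1))   # converges to ~ +0.866
print(recall(x_query=0.5, y_init=-0.1))   # converges to ~ -0.866
```

With a trained network, `autoencode` would be replaced by a forward pass through the bottleneck autoencoder; the clamp-and-iterate loop itself would be unchanged.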