AN EFFICIENT MAPPING OF BOLTZMANN MACHINE COMPUTATIONS ONTO DISTRIBUTED-MEMORY MULTIPROCESSORS

Authors
OH, DH
NANG, JH
YOON, H
MAENG, SR
Affiliations
[1] KOREA ADV INST SCI & TECHNOL,DEPT COMP SCI,YUSUNG KU,TAEJON 305701,SOUTH KOREA
[2] KOREA ADV INST SCI & TECHNOL,CTR ARTIFICIAL INTELLIGENCE RES,YUSUNG KU,TAEJON 305701,SOUTH KOREA
Source
MICROPROCESSING AND MICROPROGRAMMING | 1992 / Vol. 33 / No. 4
Keywords
NEURAL NETWORK; BOLTZMANN MACHINE; PARALLEL CONVERGENCE ALGORITHM; PARALLEL LEARNING ALGORITHM; PARALLEL PROCESSING; DISTRIBUTED-MEMORY MULTIPROCESSOR; SPEED-UP ANALYSES;
DOI
10.1016/0165-6074(92)90024-2
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In this paper, an efficient scheme for mapping Boltzmann Machine computations onto a distributed-memory multiprocessor, which exploits synchronous spatial parallelism, is presented. In this scheme, the neurons of the Boltzmann Machine are partitioned into p disjoint sets, and each set is mapped onto a processor of a p-processor system. Parallel convergence and learning algorithms for the Boltzmann Machine, the communication patterns required among the processors, and their time complexities under this partitioning and mapping are investigated. The expected speed-up of the parallel scheme on p processors over a single processor is also analyzed theoretically; this analysis can serve as a basis for determining the most cost-effective or optimal number of processors with respect to the communication capabilities and interconnection topology of a given distributed-memory multiprocessor.
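To make the partitioning idea concrete, the following is a minimal, illustrative sketch (not the paper's exact algorithm) of how the neurons might be split into p disjoint sets and updated in synchronous sweeps, with the state exchange after each sweep standing in for the inter-processor communication step. The function names (partition_neurons, parallel_convergence) and the single-process NumPy simulation are assumptions made here for illustration; a real implementation would run each partition on its own processor and exchange states over the interconnection network.

import numpy as np

def partition_neurons(n_neurons, p):
    """Assign neuron indices to p disjoint, roughly equal-sized sets."""
    return np.array_split(np.arange(n_neurons), p)

def parallel_convergence(W, bias, p, temperature=1.0, sweeps=100, rng=None):
    """Illustrative sketch of a synchronous, partitioned Boltzmann Machine sweep.

    W    : (n, n) symmetric weight matrix
    bias : (n,) bias vector
    p    : number of (simulated) processors
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = W.shape[0]
    state = rng.integers(0, 2, size=n)   # global state, replicated on every processor
    parts = partition_neurons(n, p)

    for _ in range(sweeps):
        new_states = []
        for local in parts:              # conceptually, these loops run on p processors in parallel
            # Energy gap for each local neuron, computed from the current global state.
            gap = W[local] @ state + bias[local]
            prob_on = 1.0 / (1.0 + np.exp(-gap / temperature))
            new_states.append((rng.random(local.size) < prob_on).astype(int))
        # Communication step: all processors exchange their updated partitions
        # so that everyone holds the full global state before the next sweep.
        for local, s in zip(parts, new_states):
            state[local] = s
    return state

In this view, the per-sweep communication amounts to an all-gather of the updated neuron states; its cost on a given interconnection topology, relative to the local update work, is what determines the most cost-effective number of processors p.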
Pages: 223-236
Number of pages: 14