PARALLEL ALGORITHMS FOR HIDDEN MARKOV MODELS ON THE ORTHOGONAL MULTIPROCESSOR

Cited by: 4
Authors
LEE, SW [1 ]
HSU, WH [1 ]
Affiliation
[1] NATL TSING HUA UNIV, DEPT ELECT ENGN, HSINCHU 30043, TAIWAN
Keywords
HMM; OMP; VITERBI ALGORITHM; FORWARD PROBABILITY; BACKWARD PROBABILITY; DYNAMIC PROGRAMMING;
DOI
10.1016/0031-3203(92)90103-P
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents parallel implementations of several Hidden Markov Model (HMM) algorithms on the Orthogonal MultiProcessor (OMP) architecture. In many applications of HMMs, the input feature vectors, model topology, and model parameters differ from one application to another. Developing HMM algorithms on a scalable, general-purpose multiprocessor architecture reduces algorithmic complexity and improves performance. Parallel model training, recognition, and the Viterbi algorithm for HMMs are investigated, and they show linear speed-up over conventional uniprocessor methods. The results can be applied to the many applications in which HMMs are used and real-time performance is required.
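For reference, the Viterbi decoding step that the paper parallelizes on the OMP is, in its standard sequential form, a dynamic-programming recursion over states and time. The sketch below is a minimal sequential baseline only, not the paper's OMP formulation; the variable names (pi, A, B, obs) and array shapes are illustrative assumptions.

```python
# Standard sequential Viterbi decoding for an HMM -- a baseline sketch only.
# The parallel OMP version from the paper is not reproduced here.
import numpy as np

def viterbi(pi, A, B, obs):
    """Return the most likely hidden-state path for an observation sequence.

    pi  : (N,)   initial state probabilities
    A   : (N, N) transition probabilities, A[i, j] = P(state j | state i)
    B   : (N, M) emission probabilities, B[i, k] = P(symbol k | state i)
    obs : (T,)   observation symbol indices
    """
    N, T = len(pi), len(obs)
    # Work in log space to avoid underflow on long sequences.
    log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)

    delta = np.empty((T, N))           # best log-probability ending in each state
    psi = np.empty((T, N), dtype=int)  # back-pointers for path recovery

    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        # For each target state j, maximize over predecessor states i.
        scores = delta[t - 1][:, None] + log_A        # shape (N, N)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(N)] + log_B[:, obs[t]]

    # Backtrack from the best final state.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path
```

The inner maximization over predecessor states at each time step is independent across target states, which is the kind of per-state work that a multiprocessor such as the OMP can distribute; the time recursion itself remains sequential.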
Pages: 219 - 232
Number of pages: 14