Convergence of Distributed Asynchronous Learning Vector Quantization Algorithms

Author
Patra, Benoit [1 ,2 ]
Affiliations
[1] Univ Paris 06, LSTA, F-75252 Paris 05, France
[2] LOKAD SAS, F-75017 Paris, France
Keywords
k-means; vector quantization; distributed; asynchronous; stochastic optimization; scalability; distributed consensus; training distortion; consistency; theorem; rates
DOI
Not available
Chinese Library Classification
TP [automation technology, computer technology]
Subject Classification Code
0812
Abstract
Motivated by the problem of effectively executing clustering algorithms on very large data sets, we address a model for large-scale distributed clustering methods. To this end, we briefly recall some standard results on the quantization problem and on the almost sure convergence of the competitive learning vector quantization (CLVQ) procedure. A general model for linear distributed asynchronous algorithms, well adapted to several parallel computing architectures, is also discussed. Our approach brings together this scalable model and the CLVQ algorithm, and we call the resulting technique the distributed asynchronous learning vector quantization (DALVQ) algorithm. An in-depth analysis of the almost sure convergence of the DALVQ algorithm is performed. A striking result is that the multiple versions of the quantizers distributed among the processors in the parallel architecture are proven to asymptotically reach a consensus almost surely. Furthermore, we show that these versions converge almost surely towards the same nearly optimal value of the quantization criterion.
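As a rough illustration of the two ingredients the abstract combines, the following minimal Python sketch pairs a CLVQ-style stochastic update with a simple periodic-averaging consensus step between two simulated workers. It is not the paper's actual scheme: the function name clvq_step, the 1/t step sizes, the synchronous round-robin loop, and the mix_every averaging schedule are illustrative assumptions, whereas the paper analyzes a far more general asynchronous communication model.

```python
import numpy as np

rng = np.random.default_rng(0)

def clvq_step(w, x, step):
    """One CLVQ update: move the centroid of w nearest to sample x
    toward x by the given step size (a stochastic gradient step on
    the quantization distortion)."""
    i = np.argmin(np.sum((w - x) ** 2, axis=1))  # winning centroid
    w[i] += step * (x - w[i])

# Hypothetical simulation of two workers: each runs local CLVQ steps
# on its own data stream, and the two quantizer versions are
# periodically averaged (a crude stand-in for the consensus step).
k, d, n_steps, mix_every = 4, 2, 5000, 10
w = [rng.standard_normal((k, d)), rng.standard_normal((k, d))]
for t in range(1, n_steps + 1):
    for p in range(2):
        x = rng.standard_normal(d)      # worker p's local sample
        clvq_step(w[p], x, 1.0 / t)
    if t % mix_every == 0:              # consensus: average the versions
        avg = (w[0] + w[1]) / 2.0
        w = [avg.copy(), avg.copy()]
print(np.max(np.abs(w[0] - w[1])))      # versions agree after mixing
```

In this toy setting the printed discrepancy between the two versions is zero right after an averaging step; the paper's contribution is to show that, under its much weaker asynchronous assumptions, the distributed versions still reach consensus and converge to a nearly optimal quantizer almost surely.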
Pages: 3431-3466
Number of pages: 36