Large-scale Sparse Tensor Decomposition Using a Damped Gauss-Newton Method

Cited: 5
Authors
Ranadive, Teresa M. [1 ]
Baskaran, Muthu M. [2 ]
Affiliations
[1] Lab Phys Sci, College Pk, MD 20740 USA
[2] Reservoir Labs Inc, New York, NY 10012 USA
Keywords
Big data analytics; high performance computing; damped Gauss-Newton; sparse tensor decomposition; LINE SEARCH; ALGORITHMS;
DOI
10.1109/hpec43674.2020.9286202
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
CANDECOMP/PARAFAC (CP) tensor decomposition is a popular unsupervised machine learning method with numerous applications. This process involves modeling a high-dimensional, multi-modal array (a tensor) as the sum of several low-dimensional components. In order to decompose a tensor, one must solve an optimization problem, whose objective is often given by the sum of the squares of the differences between the tensor entries and the corresponding decomposition model entries. One algorithm occasionally utilized to solve such problems is CP-OPT-DGN, a damped Gauss-Newton all-at-once optimization method for CP tensor decomposition. However, there are currently no published results that consider the decomposition of large-scale (with up to billions of non-zeros), sparse tensors using this algorithm. This work considers the decomposition of large-scale tensors using an efficiently implemented CP-OPT-DGN method. It is observed that CP-OPT-DGN significantly outperforms CP-ALS (CP-Alternating Least Squares) and CP-OPT-QNR (a quasi-Newton-Raphson all-at-once optimization method for CP tensor decomposition), two other widely used tensor decomposition algorithms, in terms of accuracy and latent behavior detection.
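The sum-of-squares objective described in the abstract can be illustrated with a minimal dense sketch for a 3-way tensor. This is not the paper's implementation (which targets large sparse tensors and optimized kernels); the function name `cp_objective` and the dense `einsum` reconstruction are illustrative assumptions only.

```python
import numpy as np

def cp_objective(X, factors):
    """Sum-of-squares CP objective: 0.5 * ||X - [[A, B, C]]||_F^2.

    X is a dense 3-way array; factors = (A, B, C) are factor matrices
    with shapes (I, R), (J, R), (K, R) for a rank-R model. A dense
    illustration only; large sparse tensors require sparse-aware code.
    """
    A, B, C = factors
    # Reconstruct the rank-R CP model: sum over r of a_r (outer) b_r (outer) c_r
    M = np.einsum('ir,jr,kr->ijk', A, B, C)
    # Sum of squared entry-wise differences between tensor and model
    return 0.5 * np.sum((X - M) ** 2)

# Rank-1 example: the objective is zero when X equals the model exactly
A = np.array([[1.0], [2.0]])
B = np.array([[1.0], [3.0]])
C = np.array([[2.0]])
X = np.einsum('ir,jr,kr->ijk', A, B, C)
print(cp_objective(X, (A, B, C)))  # 0.0
```

Algorithms such as CP-ALS minimize this objective one factor matrix at a time, while all-at-once methods like CP-OPT-DGN update all factor matrices simultaneously.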
Pages: 8