Accelerated parallel and distributed algorithm using limited internal memory for nonnegative matrix factorization

Cited: 0
Authors
Duy Khuong Nguyen
Tu Bao Ho
Affiliations
[1] Japan Advanced Institute of Science and Technology
[2] University of Engineering and Technology, Vietnam National University
[3] John von Neumann Institute, Vietnam National University
Keywords
Non-negative matrix factorization; Accelerated anti-lopsided algorithm; Coordinate descent algorithm; Parallel and distributed algorithm
DOI: not available
Abstract
Nonnegative matrix factorization (NMF) is a powerful technique for dimension reduction, extracting latent factors, and learning part-based representations. For large datasets, NMF performance hinges on several major issues: fast algorithms, fully parallel and distributed feasibility, and limited internal memory. This research designs a fast, fully parallel and distributed algorithm that uses limited internal memory to achieve high NMF performance on large datasets. Specifically, we propose a flexible accelerated algorithm for NMF and all of its $L_1$ and $L_2$ regularized variants based on full decomposition, which combines exact line search, greedy coordinate descent, and accelerated search. The proposed algorithm takes advantage of these components to converge linearly at an over-bounded rate of $(1-\frac{\mu}{L})(1-\frac{\mu}{rL})^{2r}$ when optimizing each factor matrix with the other factor matrix fixed, restricted to the subspace of passive variables, where $r$ is the number of latent components and $\mu$ and $L$ are bounded as $\frac{1}{2} \le \mu \le L \le r$. In addition, the algorithm can exploit data sparseness to run on large datasets within the limited internal memory of machines, which is an advantage over fast block coordinate descent methods and accelerated methods. Our experimental results are highly competitive with seven state-of-the-art methods in three significant aspects: convergence, optimality, and the average number of iterations.
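To make the setting in the abstract concrete, the following is a minimal, illustrative Python sketch of alternating NMF in which each nonnegative least-squares subproblem is solved by greedy coordinate descent with an exact (closed-form) single-coordinate line search. It is not the authors' implementation: the anti-lopsided transformation, accelerated search, $L_1$/$L_2$ regularized variants, sparse-data handling, and the parallel/distributed, limited-memory machinery described in the paper are all omitted, and every function name and parameter below is an assumption introduced only for illustration.

import numpy as np

def nnls_greedy_cd(Q, q, h, n_iters=100, tol=1e-10):
    # Greedy coordinate descent with exact line search for the quadratic subproblem
    #     min_{h >= 0}  0.5 * h^T Q h - q^T h,
    # where Q = W^T W (r x r) and q = W^T v for one column v of the data matrix.
    # This is an illustrative sketch, not the paper's accelerated algorithm.
    g = Q @ h - q                          # gradient of the quadratic objective
    diag = np.maximum(np.diag(Q), 1e-12)   # guard against zero diagonal entries
    for _ in range(n_iters):
        # Closed-form single-coordinate step, projected onto the nonnegative orthant
        d = np.maximum(h - g / diag, 0.0) - h
        # Objective decrease achieved by each candidate single-coordinate step
        dec = -(g * d + 0.5 * diag * d * d)
        i = int(np.argmax(dec))            # greedy choice: largest decrease
        if dec[i] < tol:
            break
        h[i] += d[i]
        g += Q[:, i] * d[i]                # rank-one gradient update, O(r) work
    return h

def nmf(V, r, outer_iters=50, inner_iters=100, seed=0):
    # Alternating NMF: V (m x n) ~= W (m x r) @ H (r x n), with W and H nonnegative.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(outer_iters):
        # Update H with W fixed: one nonnegative least-squares problem per column of V
        Q, P = W.T @ W, W.T @ V
        for j in range(n):
            H[:, j] = nnls_greedy_cd(Q, P[:, j], H[:, j].copy(), inner_iters)
        # Update W with H fixed: the symmetric problem on the rows of V
        Q, P = H @ H.T, H @ V.T
        for i in range(m):
            W[i, :] = nnls_greedy_cd(Q, P[:, i], W[i, :].copy(), inner_iters)
    return W, H

if __name__ == "__main__":
    V = np.abs(np.random.default_rng(1).random((60, 40)))
    W, H = nmf(V, r=5)
    print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))

Note that each inner solve only touches the r x r Gram matrix and an r-dimensional right-hand side, and the columns (or rows) can be processed independently; this column-wise decomposition is what makes approaches of this kind amenable to the parallel, distributed, limited-memory execution the paper targets, although the sketch above runs on a single machine.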
Pages: 307-328
Page count: 21
Related Papers
50 items in total
  • [21] Nonnegative Matrix Factorization Using Nonnegative Polynomial Approximations
    Debals, Otto
    Van Barel, Marc
    De Lathauwer, Lieven
    IEEE SIGNAL PROCESSING LETTERS, 2017, 24 (07) : 948 - 952
  • [22] Nonnegative Tensor Factorization Accelerated Using GPGPU
    Antikainen, Jukka
    Havel, Jiri
    Josth, Radovan
    Herout, Adam
    Zemcik, Pavel
    Hauta-Kasari, Markku
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2011, 22 (07) : 1135 - 1141
  • [23] Parallel Nonnegative Matrix Factorization via Newton Iteration
    Flatz, Markus
    Vajtersic, Marian
    PARALLEL PROCESSING LETTERS, 2016, 26 (03)
  • [24] Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints
    Sembiring, Pasukat
    INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY (ICONICT), 2017, 930
  • [25] Limited-Memory Fast Gradient Descent Method for Graph Regularized Nonnegative Matrix Factorization
    Guan, Naiyang
    Wei, Lei
    Luo, Zhigang
    Tao, Dacheng
    PLOS ONE, 2013, 8 (10)
  • [26] Accelerated sparse nonnegative matrix factorization for unsupervised feature learning
    Xie, Ting
    Zhang, Hua
    Liu, Ruihua
    Xiao, Hanguang
    PATTERN RECOGNITION LETTERS, 2022, 156 : 46 - 52
  • [27] Sequential and parallel feature extraction in hyperspectral data using Nonnegative Matrix Factorization
    Robila, Stefan A.
    Maciak, Lukasz G.
    2007 IEEE LONG ISLAND SYSTEMS, APPLICATIONS AND TECHNOLOGY CONFERENCE, 2007, : 18 - 24
  • [28] Parallel Hierarchical Clustering using Rank-Two Nonnegative Matrix Factorization
    Manning, Lawton
    Ballard, Grey
    Kannan, Ramakrishnan
    Park, Haesun
    2020 IEEE 27TH INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING, DATA, AND ANALYTICS (HIPC 2020), 2020, : 141 - 150
  • [29] Accelerated SVD-based initialization for nonnegative matrix factorization
    Esposito, Flavia
    Atif, Syed Muhammad
    Gillis, Nicolas
    COMPUTATIONAL & APPLIED MATHEMATICS, 2024, 43 (06):
  • [30] Dictionary Learning Based on Nonnegative Matrix Factorization Using Parallel Coordinate Descent
    Tang, Zunyi
    Ding, Shuxue
    Li, Zhenni
    Jiang, Linlin
    ABSTRACT AND APPLIED ANALYSIS, 2013