A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries

Cited by: 4
|
Authors
Liu, Yiguang [1 ]
Lei, Yinjie [2 ]
Li, Chunguang [3 ]
Xu, Wenzheng [1 ]
Pu, Yifei [1 ]
Affiliations
[1] Sichuan Univ, Vis & Image Proc Lab, Coll Comp Sci, Chengdu 610065, Peoples R China
[2] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610064, Peoples R China
[3] Zhejiang Univ, Dept Informat Sci & Elect Engn, Hangzhou 310027, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Low-rank matrix decomposition; random submatrix; complexity; memory-space; precision; APPROXIMATION;
DOI
10.1109/TIP.2015.2458176
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A random submatrix method (RSM) is proposed to calculate the low-rank decomposition $\hat{U}_{m\times r}\hat{V}^T_{n\times r}$ ($r < m, n$) of a matrix $Y \in \mathbb{R}^{m\times n}$ (generally assuming $m > n$) with known-entry percentage $0 < \rho \leq 1$. RSM is very fast, as only $O(mr^2\rho^r)$ or $O(n^3\rho^{3r})$ floating-point operations (flops) are required, comparing favorably with the $O(mnr + r^2(m+n))$ flops required by the state-of-the-art algorithms. Meanwhile, RSM has the advantage of a small memory requirement, as only $\max(n^2, mr + nr)$ real values need to be stored. Under the assumption that known entries are uniformly distributed in $Y$, submatrices formed by known entries are randomly selected from $Y$ with statistical size $k \times n\rho^k$ or $m\rho^l \times l$, where $k$ or $l$ usually takes the value $r + 1$. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value turns into the space associated with any of the $r$ largest singular values is smaller. Based on this theorem, the $n\rho^k - k$ null vectors or the $l - r$ right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen $n\rho^k$ or $l$ columns of the ground truth of $\hat{V}^T$. If enough submatrices are randomly chosen, $\hat{V}$ and $\hat{U}$ can be estimated accordingly. Experimental results on random synthetic matrices with sizes such as $131072 \times 1024$ and on real data sets such as dinosaur indicate that RSM is 4.30 to 197.95 times faster than the state-of-the-art algorithms, while attaining precision that achieves or approximates the best.
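The linear-algebra fact the abstract relies on can be illustrated with a minimal NumPy sketch (not the paper's implementation; sizes, seed, and variable names are our own): for a rank-$r$ matrix $Y = UV^T$ with uniformly missing entries, pick $k = r + 1$ random rows and keep the columns fully observed in those rows. The right null vectors of that dense submatrix are also null vectors of the corresponding columns of the true $V^T$, which is what lets RSM recover $\hat{V}$ from many such submatrices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, rho = 200, 100, 3, 0.8  # illustrative sizes, not from the paper

# Ground-truth rank-r matrix and a uniform mask of known entries
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
Y = U @ V.T
mask = rng.random((m, n)) < rho

# Pick k = r + 1 random rows; keep the columns fully observed there.
# This yields a dense k x (about n * rho^k) submatrix of known entries.
k = r + 1
rows = rng.choice(m, size=k, replace=False)
cols = np.flatnonzero(mask[rows].all(axis=0))
S = Y[np.ix_(rows, cols)]

# The (roughly n*rho^k - k) right singular vectors beyond index k are
# null vectors of S, since rank(S) <= r < k.
_, sv, Vt = np.linalg.svd(S)
null_vecs = Vt[k:]

# Because S = U[rows] @ V[cols].T and U[rows] has full column rank r
# generically, each null vector also annihilates those columns of V^T.
residual = np.abs(V[cols].T @ null_vecs.T).max()
print("columns kept:", len(cols), "max residual:", residual)
```

Stacking the null-space constraints from enough random submatrices pins down $\hat{V}$ (up to an $r \times r$ change of basis); $\hat{U}$ then follows from least squares on the known entries.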
Pages: 4502 - 4511
Page count: 10