Generalized Low-Rank Approximations of Matrices Revisited

Cited by: 28
Authors
Liu, Jun [1]
Chen, Songcan [1]
Zhou, Zhi-Hua [2]
Tan, Xiaoyang [1]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Dept Comp Sci & Engn, Nanjing 210016, Peoples R China
[2] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2010, Vol. 21, No. 4
Funding
US National Science Foundation;
Keywords
Dimensionality reduction; singular value decomposition (SVD); generalized low-rank approximations of matrices (GLRAM); reconstruction error; REPRESENTATION;
DOI
10.1109/TNN.2010.2040290
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Compared to singular value decomposition (SVD), generalized low-rank approximations of matrices (GLRAM) can consume less computation time, obtain a higher compression ratio, and yield competitive classification performance. GLRAM has been successfully applied to applications such as image compression and retrieval, and quite a few extensions have been proposed. However, some basic properties and crucial problems regarding GLRAM have not yet been explored or solved in the literature. For this reason, we revisit GLRAM in this paper. First, we reveal a close relationship between GLRAM and SVD: GLRAM's objective function is identical to SVD's objective function except for the imposed constraints. Second, we derive a lower bound of GLRAM's objective function and discuss when this lower bound can be attained. Moreover, from the viewpoint of minimizing the lower bound, we answer an open problem raised by Ye (Machine Learning, 2005), i.e., we give a theoretical justification of the experimental observation that, for a given number of reduced dimensions, the lowest reconstruction error is obtained when the left and right transformations have an equal number of columns. Third, we explore when and why GLRAM can perform well in terms of compression, which is a fundamental problem concerning the usability of GLRAM.
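The abstract assumes familiarity with the GLRAM formulation being revisited: given data matrices A_1, ..., A_n, find column-orthonormal left and right transformations L and R, plus core matrices M_i = L^T A_i R, minimizing the reconstruction error sum_i ||A_i - L M_i R^T||_F^2. Below is a minimal NumPy sketch of the alternating procedure from Ye (Machine Learning, 2005) that this paper analyzes; the function name glram and the parameters l1, l2, n_iter are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def glram(As, l1, l2, n_iter=20, seed=0):
    """Alternating-minimization sketch of GLRAM (after Ye, 2005).

    As     : list of (r, c) data matrices A_i
    l1, l2 : numbers of columns of the left/right transformations L, R
    Returns L (r, l1), R (c, l2), and the core matrices M_i = L^T A_i R.
    """
    r, c = As[0].shape
    rng = np.random.default_rng(seed)
    # Start from a random column-orthonormal R.
    R, _ = np.linalg.qr(rng.standard_normal((c, l2)))
    for _ in range(n_iter):
        # Fix R: L <- top-l1 eigenvectors of sum_i A_i R R^T A_i^T.
        S_L = sum(A @ R @ R.T @ A.T for A in As)
        _, L = np.linalg.eigh(S_L)
        L = L[:, ::-1][:, :l1]          # eigh returns ascending order
        # Fix L: R <- top-l2 eigenvectors of sum_i A_i^T L L^T A_i.
        S_R = sum(A.T @ L @ L.T @ A for A in As)
        _, R = np.linalg.eigh(S_R)
        R = R[:, ::-1][:, :l2]
    Ms = [L.T @ A @ R for A in As]
    return L, R, Ms

# Reconstruction error, the quantity whose lower bound the paper studies:
# err = sum(np.linalg.norm(A - L @ M @ R.T, 'fro')**2 for A, M in zip(As, Ms))
```

The paper's equal-columns result (l1 = l2 minimizes reconstruction error for a fixed reduced dimension) can be probed empirically with this sketch by sweeping (l1, l2) pairs with a constant product l1 * l2.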
Pages: 621-632
Number of pages: 12
Related Papers
50 records in total
  • [31] Ayala, Alan; Claeys, Xavier; Grigori, Laura. Correction to: ALORA: Affine Low-Rank Approximations. Journal of Scientific Computing, 2019, 80(3): 1997.
  • [32] Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping. Two Rank Approximations for Low-Rank Based Subspace Clustering. 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2017.
  • [33] Indyk, Piotr; Vakilian, Ali; Yuan, Yang. Learning-Based Low-Rank Approximations. Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, 32.
  • [34] Feng, XingDong; He, XuMing. Robust low-rank data matrix approximations. Science China Mathematics, 2017, 60: 189-200.
  • [35] Bachmayr, Markus; Cohen, Albert; Dahmen, Wolfgang. Parametric PDEs: sparse or low-rank approximations? IMA Journal of Numerical Analysis, 2018, 38(4): 1661-1708.
  • [36] Feng, XingDong; He, XuMing. Robust low-rank data matrix approximations. Science China Mathematics, 2017, 60(2): 189-200.
  • [37] Tang, Runze; Ketcha, Michael; Badea, Alexandra; Calabrese, Evan D.; Margulies, Daniel S.; Vogelstein, Joshua T.; Priebe, Carey E.; Sussman, Daniel L. Connectome Smoothing via Low-Rank Approximations. IEEE Transactions on Medical Imaging, 2019, 38(6): 1446-1456.
  • [38] Fernandez-Val, Ivan; Freeman, Hugo; Weidner, Martin. Low-rank approximations of nonseparable panel models. Econometrics Journal, 2021, 24(2): C40-C77.
  • [39] Feng, XingDong; He, XuMing. Robust low-rank data matrix approximations. Science China Mathematics, 2017, 60(2): 189-200.
  • [40] Achlioptas, Dimitris; McSherry, Frank. Fast computation of low-rank matrix approximations. Journal of the ACM, 2007, 54(2).