L1-Norm Low-Rank Matrix Factorization by Variational Bayesian Method

Cited by: 86
Authors
Zhao, Qian [1 ,2 ]
Meng, Deyu [1 ]
Xu, Zongben [1 ]
Zuo, Wangmeng [3 ]
Yan, Yan [4 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Inst Informat & Syst Sci, Xian 710049, Peoples R China
[2] Beijing Ctr Math & Informat Interdisciplinary Sci, Beijing 100048, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
[4] Univ Trento, Dept Informat Engn & Comp Sci, I-38123 Trento, Italy
Funding
National Natural Science Foundation of China;
Keywords
Background subtraction; face reconstruction; low-rank matrix factorization (LRMF); outlier detection; robustness; variational inference; PRINCIPAL COMPONENT ANALYSIS; FACE RECOGNITION; ROBUST; ALGORITHM;
DOI
10.1109/TNNLS.2014.2387376
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The L1-norm low-rank matrix factorization (LRMF) has attracted much attention due to its wide applications in computer vision and pattern recognition. In this paper, we construct a new hierarchical Bayesian generative model for the L1-norm LRMF problem and design a mean-field variational method that automatically infers all model parameters through closed-form update equations. The variational Bayesian inference in the proposed method can be understood as solving a weighted LRMF problem, with weights on matrix elements reflecting their significance and with L2-regularization penalties on the parameters. Throughout the inference process, the weights imposed on the matrix elements are adaptively fitted so that the adverse influence of noises and outliers embedded in the data is largely suppressed, and the parameters are appropriately regularized so that the generalization capability of the model is statistically guaranteed. The robustness and efficiency of the proposed method are substantiated by a series of synthetic- and real-data experiments, in comparison with state-of-the-art L1-norm LRMF methods. In particular, owing to the intrinsic generalization capability of the Bayesian methodology, our method consistently predicts the unobserved ground-truth data better than existing methods.
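The abstract's reading of the inference as a weighted LRMF with adaptive elementwise weights and L2 penalties can be sketched, for illustration only, as an iteratively reweighted least-squares (IRLS) surrogate for the L1-norm objective. The function name, the weighting rule `1/(|residual| + eps)`, and the ridge penalty `lam` are assumptions of this sketch, not the authors' variational algorithm:

```python
import numpy as np

def weighted_lrmf(X, rank=2, n_iter=30, eps=1e-4, lam=1e-3, seed=0):
    """Illustrative IRLS surrogate for min_{U,V} ||X - U V^T||_1.

    Each sweep reweights every matrix element by 1/(|residual| + eps),
    so entries fit well so far get large weights while outliers are
    downweighted, then solves row-wise weighted ridge regressions for
    U and V (the L2 penalty lam plays the role of the regularizer
    described in the abstract).
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = np.eye(rank)
    for _ in range(n_iter):
        # Adaptive elementwise weights: small residual -> large weight.
        W = 1.0 / (np.abs(X - U @ V.T) + eps)
        for i in range(m):  # weighted ridge update of each row of U
            Wi = np.diag(W[i])
            U[i] = np.linalg.solve(V.T @ Wi @ V + lam * I,
                                   V.T @ Wi @ X[i])
        for j in range(n):  # weighted ridge update of each row of V
            Wj = np.diag(W[:, j])
            V[j] = np.linalg.solve(U.T @ Wj @ U + lam * I,
                                   U.T @ Wj @ X[:, j])
    return U, V
```

On a low-rank matrix with a small fraction of gross outliers, the adaptive weights suppress the corrupted entries, so `U @ V.T` tracks the clean low-rank component rather than the outliers.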
Pages: 825 - 839
Page count: 15
Related Papers
50 records
  • [1] Low-rank matrix decomposition in L1-norm by dynamic systems
    Liu, Yiguang
    Liu, Bingbing
    Pu, Yifei
    Chen, Xiaohui
    Cheng, Hong
    IMAGE AND VISION COMPUTING, 2012, 30 (11) : 915 - 921
  • [2] Practical Low-Rank Matrix Approximation under Robust L1-Norm
    Zheng, Yinqiang
    Liu, Guangcan
    Sugimoto, Shigeki
    Yan, Shuicheng
    Okutomi, Masatoshi
    2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2012, : 1410 - 1417
  • [3] L1-Norm Low-Rank Matrix Decomposition by Neural Networks and Mollifiers
    Liu, Yiguang
    Yang, Songfan
    Wu, Pengfei
    Li, Chunguang
    Yang, Menglong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2016, 27 (02) : 273 - 283
  • [4] On the Complexity of Robust PCA and l1-Norm Low-Rank Matrix Approximation
    Gillis, Nicolas
    Vavasis, Stephen A.
    MATHEMATICS OF OPERATIONS RESEARCH, 2018, 43 (04) : 1072 - 1084
  • [5] Efficient Low-Rank Matrix Factorization Based on l1,ε-Norm for Online Background Subtraction
    Liu, Qi
    Li, Xiaopeng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (07) : 4900 - 4904
  • [6] L1-norm low-rank linear approximation for accelerating deep neural networks
    Zhao, Zhiqun
    Wang, Hengyou
    Sun, Hao
    He, Zhihai
    NEUROCOMPUTING, 2020, 400 : 216 - 226
  • [7] Logarithmic Norm Regularized Low-Rank Factorization for Matrix and Tensor Completion
    Chen, Lin
    Jiang, Xue
    Liu, Xingzhao
    Zhou, Zhixin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 3434 - 3449
  • [8] LOW-RANK MATRIX COMPLETION BY VARIATIONAL SPARSE BAYESIAN LEARNING
    Babacan, S. Derin
    Luessi, Martin
    Molina, Rafael
    Katsaggelos, Aggelos K.
    2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2011, : 2188 - 2191
  • [9] On the L1-norm Approximation of a Matrix by Another of Lower Rank
    Tsagkarakis, Nicholas
    Markopoulos, Panos P.
    Pados, Dimitris A.
    2016 15TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2016), 2016, : 768 - 773