Analysis dictionary learning using block coordinate descent framework with proximal operators

Cited by: 7
Authors
Li, Zhenni [1 ]
Ding, Shuxue [1 ]
Hayashi, Takafumi [2 ]
Li, Yujie [3 ]
Affiliations
[1] Univ Aizu, Sch Comp Sci & Engn, Aizu Wakamatsu, Fukushima 9658580, Japan
[2] Niigata Univ, Grad Sch Sci & Technol, Niigata 9502181, Japan
[3] AIST, Ctr Artificial Intelligence, Tsukuba, Ibaraki 3058560, Japan
Keywords
Sparse representation model; Analysis dictionary learning; Block coordinate descent framework; Incoherence; Proximal operator; Sparse representation; K-SVD; Image; Algorithm
DOI
10.1016/j.neucom.2017.02.014
CLC classification number
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this study, we propose two analysis dictionary learning algorithms for sparse representation with the analysis model. The problem is formulated with an l1-norm regularizer and two penalty terms on the analysis dictionary Omega: a -log det(Omega^T Omega) term and a coherence penalty term. As the processing scheme, we employ a block coordinate descent framework, so that the overall problem is transformed into a set of univariate subproblems, each minimized with respect to a single vector variable. Each subproblem is still nonsmooth, but it can be solved by a proximal operator, which yields a closed-form solution directly and explicitly. In particular, the coherence penalty, which excludes excessively similar or repeated dictionary atoms, is handled simultaneously with the dictionary update, thereby reducing the complexity. Furthermore, one proposed algorithm introduces a scheme that updates a group of atoms at a time, which lowers the complexity further. According to our analysis and simulation study, the main advantages of the proposed algorithms over state-of-the-art algorithms are higher dictionary recovery ratios, especially in the low-cosparsity case, and faster running times for reaching stable values of the dictionary recovery ratio and the recovered cosparsity. In addition, one proposed algorithm performs well in image denoising and in noise cancellation. (C) 2017 Elsevier B.V. All rights reserved.
Pages: 165-180
Page count: 16
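The scheme described in the abstract, splitting a nonsmooth l1-regularized objective into single-vector subproblems, each solved in closed form by a proximal operator, can be illustrated on a generic sparse coding subproblem. The sketch below is not the paper's algorithm (which learns the analysis dictionary Omega with log-det and coherence penalties); it only demonstrates the block coordinate descent plus soft-thresholding prox idea on a synthesis-model lasso objective, with dictionary `D`, data `Y`, and weight `lam` as placeholder names.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1: closed-form shrinkage.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def bcd_l1(Y, D, lam, n_iter=50):
    # Block coordinate descent on min_X 0.5*||Y - D X||_F^2 + lam*||X||_1,
    # sweeping over rows of X; each row update is a univariate subproblem
    # per column, solved exactly by the soft-threshold prox.
    m, n = D.shape
    X = np.zeros((n, Y.shape[1]))
    for _ in range(n_iter):
        for j in range(n):
            d = D[:, j]
            # Residual with atom j's current contribution added back.
            R = Y - D @ X + np.outer(d, X[j])
            # Closed-form coordinate update: shrink, then rescale by ||d||^2.
            X[j] = soft_threshold(d @ R, lam) / (d @ d)
    return X
```

When `D` is orthonormal the loop converges in one sweep, since each coordinate subproblem decouples and the prox gives the exact minimizer immediately; the same closed-form-per-block structure is what makes proximal BCD attractive in the paper's setting.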