Nonconvex Low-Rank and Total-Variation Regularized Model and Algorithm for Image Deblurring

Cited: 0
Authors
Sun T. [1]
Li D.-S. [1]
Affiliations
[1] College of Computer, National University of Defense Technology, Changsha
Source
Jisuanji Xuebao/Chinese Journal of Computers | 2020 / Vol. 43 / No. 04
Keywords
Alternating minimization; Image deblurring; Low-rank; Nonconvex model; Total variation
DOI
10.11897/SP.J.1016.2020.00643
CLC number
TN911 [Communication Theory]
Discipline code
081002
Abstract
Non-blind image deblurring aims to reconstruct the original image from a noisy observation produced by linear convolution with a known kernel. If the noise is Gaussian, one may recover the image by least-squares minimization using the observed image and the kernel. In most cases, however, the convolution operator makes the problem severely ill-posed and hard to solve directly. To this end, regularizations, which encode known statistical priors on the original image, are introduced to help. Two frequently used ones are the low-rank and total-variation regularizations. Earlier works employed them separately, using one or the other but not both; only in recent years have the two been combined. Existing results show that the hybrid regularized model performs much better than either single one. However, current composite regularizations use only the convex methodology; nonconvex implementations are still missing. Since nonconvex regularizers can beat convex ones in many settings, this paper proposes a novel nonconvex hybrid model built on the L1/2 and Schatten-1/2 quasi-norms. We choose these two nonconvex functions because their proximal maps are easy to compute even without convexity: both admit closed-form proximal maps. The proposed nonconvex hybrid regularized model is naturally a nonconvex linearly constrained problem, which could be solved by the alternating direction method of multipliers (ADMM); however, the nonconvexity breaks the theoretical guarantees. We therefore solve a penalty formulation rather than the original constrained form. Applying alternating minimization to the penalty yields the proposed algorithm, in which each substep involves only very simple computations. An important parameter of the algorithm is the penalty parameter: as it tends to infinity, the penalty problem coincides with the original one.
But a large penalty parameter slows the iterations. Thus, to improve speed while narrowing the gap between the penalty problem and the original one, we use a warm-up technique for the penalty parameter, i.e., we increase it over the iterations. Convergence of the algorithm is proved under very mild assumptions that are easily satisfied in applications. Numerical experiments are conducted on six natural test images; the performance of the proposed algorithm verifies the convergence theory, and comparisons with other algorithms demonstrate its efficiency. © 2020, Science Press. All rights reserved.
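The closed-form proximal maps mentioned in the abstract are the key computational primitive. As an illustrative sketch (ours, not code from the paper, with function names of our own choosing), the proximal map of the L1/2 quasi-norm is the half-thresholding operator of Xu et al. 2012, and the Schatten-1/2 proximal map follows by half-thresholding the singular values:

```python
import numpy as np

def half_threshold(x, lam):
    """Closed-form proximal map of the L1/2 quasi-norm (half thresholding):
    elementwise argmin_y 0.5*(y - x)**2 + lam*sqrt(|y|), for lam > 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    # entries with |x| <= (3/2)*lam^(2/3) are thresholded exactly to zero
    big = np.abs(x) > 1.5 * lam ** (2.0 / 3.0)
    xb = x[big]
    phi = np.arccos((lam / 4.0) * (np.abs(xb) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * xb * (1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi))
    return out

def schatten_half_prox(M, lam):
    """Proximal map of the Schatten-1/2 quasi-norm: half-threshold the
    singular values and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (half_threshold(s, lam)[:, None] * Vt)
```

Entries below the threshold (3/2)·lam^(2/3) are set exactly to zero, which is why half thresholding promotes sparsity more aggressively than the soft-thresholding map of the convex L1 norm.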
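The warm-up strategy for the penalty parameter can be illustrated on a toy problem (ours, not the paper's deblurring model): a quadratic objective with a single linear constraint, penalized quadratically, where each inner minimization has a closed form. Geometrically increasing the penalty parameter drives the constraint violation to zero while keeping early iterations loose and cheap:

```python
import numpy as np

def penalty_warmup(n=4, beta0=1.0, growth=2.0, iters=20):
    """Warm-up (continuation) sketch for the penalty parameter beta.
    Toy model: min 0.5*||x||^2  s.t.  sum(x) = 1, penalized as
    min 0.5*||x||^2 + 0.5*beta*(sum(x) - 1)**2."""
    beta = beta0
    history = []
    for _ in range(iters):
        # Setting the gradient x + beta*(sum(x) - 1)*ones to zero shows the
        # inner minimizer has all entries equal to c = beta / (1 + n*beta).
        c = beta / (1.0 + n * beta)
        x = np.full(n, c)
        # constraint violation = 1/(1 + n*beta), shrinks as beta grows
        history.append(abs(x.sum() - 1.0))
        beta *= growth  # warm-up: increase the penalty parameter
    return x, history
```

With each doubling of beta the violation roughly halves, mirroring the abstract's point that the penalty problem approaches the original constrained problem as the parameter grows, while a fixed large value from the start would slow the iterations.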
Pages: 643-652
Page count: 9
Related papers: 25