A nonconvex TVq-l1 regularization model and the ADMM-based algorithm

Cited: 0
Authors
Fang, Zhuang [1]
Tang, Liming [1]
Wu, Liang [1]
Liu, Hanxin [1]
Affiliations
[1] Hubei Minzu Univ, Sch Math & Stat, Enshi 445000, Peoples R China
Source
SCIENTIFIC REPORTS | 2022, Vol. 12, No. 1
Keywords
TOTAL GENERALIZED VARIATION; IMAGE-RESTORATION; OPTIMIZATION; CONVERGENCE; EFFICIENT; SPARSE; RECOVERY; FILTER
DOI
10.1038/s41598-022-11938-7
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Total variation (TV) regularization with l(1) fidelity is a popular method for restoring images contaminated by salt-and-pepper noise, but its edge-preserving performance is often limited. To address this problem, we propose a nonconvex TVq-l(1) regularization model that uses a nonconvex l(q)-norm (0 < q < 1) defined in the TV domain (the TVq regularizer) to regularize the restoration, and an l(1) fidelity term to measure the noise. Compared with the traditional TV model, the proposed model preserves edges and contours more effectively because it yields a sparser representation of the restoration in the TV domain. An alternating direction method of multipliers (ADMM), combined with a majorization-minimization (MM) scheme and a proximity operator, is introduced to solve the proposed model numerically. In particular, a sufficient condition for the convergence of the proposed algorithm is provided. Numerical results validate the proposed model and algorithm, which effectively remove salt-and-pepper noise while preserving image edges and contours. Moreover, compared with several state-of-the-art variational regularization models, the proposed model achieves the best peak signal-to-noise ratio (PSNR) and mean structural similarity index (MSSIM), with improvements of about 0.5 dB in PSNR and 0.06 in MSSIM over all compared models.
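The MM-plus-proximity-operator idea in the abstract can be illustrated on the scalar subproblem that arises inside such ADMM schemes: evaluating the proximity operator of the nonconvex term lam*|x|^q. A common MM approach majorizes the concave function |x|^q by its tangent at the current iterate, which reduces each step to soft-thresholding with a reweighted threshold. The sketch below is an illustrative assumption-laden example of that generic technique (the function name `prox_lq`, the tolerance, and the iteration count are my choices), not the paper's exact algorithm.

```python
import numpy as np

def prox_lq(v, lam, q=0.5, iters=50):
    """Approximate prox of lam*|x|^q (0 < q < 1) via an MM scheme.

    Each step majorizes |x|^q at the current iterate x_k by its tangent,
    |x|^q <= |x_k|^q + q*|x_k|^(q-1) * (|x| - |x_k|),
    so the minimizer of 0.5*(x - v)^2 + lam*|x|^q is updated by
    soft-thresholding v with the reweighted threshold lam*q*|x_k|^(q-1).
    """
    v = np.asarray(v, dtype=float)
    x = v.copy()
    for _ in range(iters):
        mag = np.abs(x)
        # Threshold blows up as x_k -> 0, so entries at zero stay at zero.
        w = np.full_like(x, np.inf)
        nz = mag > 1e-12
        w[nz] = lam * q * mag[nz] ** (q - 1.0)
        x = np.sign(v) * np.maximum(np.abs(v) - w, 0.0)
    return x

# Large inputs are shrunk only slightly; small inputs are driven to
# exactly zero -- the sparsity-promoting behavior the abstract describes.
out = prox_lq(np.array([10.0, 0.01]), lam=1.0, q=0.5)
```

In a full TVq-l(1) solver this scalar operator would be applied elementwise to the gradient-domain splitting variable inside each ADMM iteration.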
Pages: 26
Related Papers
50 total
  • [21] The Group-Lasso: l1,∞ Regularization versus l1,2 Regularization
    Vogt, Julia E.
    Roth, Volker
    PATTERN RECOGNITION, 2010, 6376 : 252 - 261
  • [22] The convergence analysis of SpikeProp algorithm with smoothing L1/2 regularization
    Zhao, Junhong
    Zurada, Jacek M.
    Yang, Jie
    Wu, Wei
    NEURAL NETWORKS, 2018, 103 : 19 - 28
  • [23] A modified L1/2 regularization algorithm for electrical impedance tomography
    Fan, Wenru
    Wang, Chi
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2020, 31 (01)
  • [24] Improving the Performance of the PNLMS Algorithm Using l1 Norm Regularization
    Das, Rajib Lochan
    Chakraborty, Mrityunjoy
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2016, 24 (07) : 1280 - 1290
  • [25] L1/2 Regularization: Convergence of Iterative Half Thresholding Algorithm
    Zeng, Jinshan
    Lin, Shaobo
    Wang, Yao
    Xu, Zongben
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2014, 62 (09) : 2317 - 2329
  • [26] Image restoration based on L1 + L1 model
    Liu, Ruihua
    International Journal of Signal Processing, Image Processing and Pattern Recognition, 2014, 7 (05) : 273 - 286
  • [27] MULTIPLE-INFLATION POISSON MODEL WITH L1 REGULARIZATION
    Su, Xiaogang
    Fan, Juanjuan
    Levine, Richard A.
    Tan, Xianming
    Tripathi, Arvind
    STATISTICA SINICA, 2013, 23 (03) : 1071 - 1090
  • [28] A novel iterative soft thresholding algorithm for L1 regularization based SAR image enhancement
    Bi, Hui
    Bi, Guoan
    SCIENCE CHINA-INFORMATION SCIENCES, 2019, 62 (04): 212 - 214