Dual Prior Unfolding for Snapshot Compressive Imaging

Cited by: 2
Authors
Zhang, Jiancheng [1 ]
Zeng, Haijin [2 ]
Cao, Jiezhang [3 ]
Chen, Yongyong [4 ]
Yu, Dengxiu [1 ]
Zhao, Yin-Ping [1 ]
Affiliations
[1] Northwestern Polytech Univ, Xian, Peoples R China
[2] IMEC UGent, Ghent, Belgium
[3] Swiss Fed Inst Technol, Zurich, Switzerland
[4] Harbin Inst Technol Shenzhen, Shenzhen, Peoples R China
Funding
Academy of Finland; National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
ALGORITHMS;
DOI
10.1109/CVPR52733.2024.02432
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, deep unfolding methods have achieved remarkable success in Snapshot Compressive Imaging (SCI) reconstruction. However, existing methods all follow an iterative framework built on a single image prior, which limits the efficiency of unfolding and makes it difficult to incorporate other priors simply and effectively. To break out of this framework, we derive an effective Dual Prior Unfolding (DPU), which jointly exploits multiple deep priors and greatly improves iteration efficiency. Our unfolding method comprises two parts, i.e., the Dual Prior Framework (DPF) and Focused Attention (FA). In brief, in addition to the usual image prior, DPF introduces a residual into the iteration formula and constructs a degraded prior for this residual by considering various degradations, thereby establishing the unfolding framework. To improve the effectiveness of the self-attention-based image prior, FA adopts a novel mechanism inspired by PCA denoising to scale and filter attention, letting the attention focus more on effective features at little computational cost. In addition, an asymmetric backbone is proposed to further improve the efficiency of hierarchical self-attention. Remarkably, our 5-stage DPU achieves state-of-the-art (SOTA) performance with the fewest FLOPs and parameters among the compared methods, while our 9-stage DPU significantly outperforms other unfolding methods with lower computational requirements. https://github.com/ZhangJC-2k/DPU
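The abstract describes one unfolding stage as a data-fidelity step combined with two learned priors, one applied to the image estimate and one to a residual. Below is a minimal, hypothetical PyTorch sketch of that general dual-prior unfolding idea; the module names, tensor shapes, and exact update rule are illustrative assumptions and not the authors' actual DPU implementation (see the linked repository for that).

```python
# Illustrative sketch only: a generic dual-prior unfolding stage for SCI,
# written against the abstract's description, not the released DPU code.
import torch
import torch.nn as nn


class DualPriorStage(nn.Module):
    """One unfolding stage: a data-fidelity gradient step followed by two
    learned priors, one on the image estimate and one on the measurement
    residual (placeholder CNNs stand in for the paper's real modules)."""

    def __init__(self, channels: int = 28):
        super().__init__()
        self.image_prior = nn.Sequential(            # stands in for the deep image prior
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.residual_prior = nn.Sequential(         # stands in for the degraded/residual prior
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.step = nn.Parameter(torch.tensor(0.5))  # learnable gradient-step size

    def forward(self, x, y, mask):
        # Data fidelity: y is the 2D snapshot, mask the coded aperture; the
        # forward operator sums the masked spectral/temporal frames.
        y_hat = (mask * x).sum(dim=1, keepdim=True)
        grad = mask * (y_hat - y)                    # adjoint applied to the measurement residual
        v = x - self.step * grad                     # gradient (data-fidelity) step
        r = self.residual_prior(grad)                # learned prior on the residual
        return v + self.image_prior(v) - r           # combine both priors


if __name__ == "__main__":
    x = torch.randn(1, 28, 64, 64)                   # current estimate (B, bands, H, W)
    mask = torch.rand(1, 28, 64, 64)                 # coded-aperture mask
    y = (mask * x).sum(dim=1, keepdim=True)          # simulated snapshot measurement
    out = DualPriorStage()(x, y, mask)
    print(out.shape)                                 # torch.Size([1, 28, 64, 64])
```

Stacking several such stages (5 or 9 in the paper) and training them end to end would give a deep unfolding network; in the paper, the Focused Attention module and the asymmetric backbone would take the place of the placeholder CNN priors used here.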
Pages: 25742-25752
Page count: 11
Related papers
50 items in total
  • [11] Ensemble Learning Priors Driven Deep Unfolding for Scalable Video Snapshot Compressive Imaging
    Yang, Chengshuai
    Zhang, Shiyu
    Yuan, Xin
    COMPUTER VISION, ECCV 2022, PT XXIII, 2022, 13683 : 600 - 618
  • [12] Snapshot Compressive Imaging Using Domain-Factorized Deep Video Prior
    Miao, Yu-Chun
    Zhao, Xi-Le
    Wang, Jian-Li
    Fu, Xiao
    Wang, Yao
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2024, 10 : 93 - 102
  • [13] Dual-Window Multiscale Transformer for Hyperspectral Snapshot Compressive Imaging
    Luo, Fulin
    Chen, Xi
    Gong, Xiuwen
    Wu, Weiwen
    Guo, Tan
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38, NO 4, 2024: 3972 - 3980
  • [14] Snapshot compressive hyperspectral imaging via dual spectral filter array
    Zhang, Yang
    Liu, Xinyu
    Wang, Chang
    Xu, Zhou
    Zhang, Qiangbo
    Zheng, Zhenrong
    OPTOELECTRONIC IMAGING AND MULTIMEDIA TECHNOLOGY IX, 2022, 12317
  • [15] Sampling for Snapshot Compressive Imaging
    Hu, Minghao
    Wu, Zongliang
    Huang, Qian
    Yuan, Xin
    Brady, David
INTELLIGENT COMPUTING, 2023, 2
  • [16] Degradation-aware deep unfolding network with transformer prior for video compressive imaging
    Yin, Jianfu
    Wang, Nan
    Hu, Binliang
    Wang, Yao
    Wang, Quan
    SIGNAL PROCESSING, 2025, 227
  • [17] Snapshot compressive imaging using aberrations
    Vera, Esteban
    Meza, Pablo
    OPTICS EXPRESS, 2018, 26 (02): : 1206 - 1218
  • [18] Dual-domain deep unfolding Transformer for spectral compressive imaging reconstruction
    Zhou, Han
    Lian, Yusheng
    Liu, Zilong
    Li, Jin
    Cao, Xuheng
    Ma, Chao
    Tian, Jieyu
    OPTICS AND LASERS IN ENGINEERING, 2025, 186
  • [19] Controlled aberrations for snapshot compressive imaging
    Vera, Esteban
    Meza, Pablo
    COMPUTATIONAL IMAGING II, 2017, 10222
  • [20] Shearlet Enhanced Snapshot Compressive Imaging
    Yang, Peihao
    Kong, Linghe
    Liu, Xiao-Yang
    Yuan, Xin
    Chen, Guihai
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 6466 - 6481