Single Pixel Imaging Based on Generative Adversarial Network Optimized With Multiple Prior Information

Cited: 6
Authors
Sun, Shida [1 ]
Yan, Qiurong [1 ]
Zheng, Yongjian [1 ]
Zhen Wei [1 ]
Lin, Jian [1 ]
Cai, Yilin [1 ]
Affiliations
[1] Nanchang Univ, Sch Informat Engn, Nanchang 330031, Jiangxi, Peoples R China
Source
IEEE PHOTONICS JOURNAL | 2022 / Vol. 14 / Iss. 04
Funding
National Natural Science Foundation of China;
Keywords
Image reconstruction; Generative adversarial networks; Imaging; Generators; Training; Image coding; Loss measurement; Compressed sensing (CS); single pixel imaging (SPI) system; photon counting; generative adversarial networks (GAN); deep learning; multiple prior information; SIGNAL RECOVERY; RECONSTRUCTION;
DOI
10.1109/JPHOT.2022.3184947
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Reconstructing high-quality images at a low measurement rate is one of the research objectives of single-pixel imaging (SPI). Deep learning based compressed reconstruction methods have been shown to avoid the heavy iterative computation of traditional methods while achieving better reconstruction results. Benefiting from modeling capabilities improved through the continual game between generation and discrimination, Generative Adversarial Networks (GANs) have achieved great success in image generation and reconstruction. In this paper, we propose a GAN-based compressed reconstruction network, MPIGAN. To obtain multiple prior information from the dataset and thus improve the accuracy of the model, multiple autoencoders are trained and added as regularization terms to the loss function of the generative network, and adversarial training is then performed with a multi-label classification network. Experimental results show that our scheme can significantly improve reconstruction quality at a very low measurement rate, and the reconstruction results are better than those of existing networks.
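As a concrete illustration of the multiple-prior idea described in the abstract, below is a minimal PyTorch-style sketch of a composite generator loss: a pixel-wise reconstruction term, several frozen pretrained autoencoders acting as prior regularizers, and an adversarial term. The module definitions, weights (lambda_prior, lambda_adv), image shapes, and the use of a plain binary adversarial term in place of the paper's multi-label classification discriminator are all illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumed, not the paper's code): composite generator loss combining a
# reconstruction term, multiple pretrained-autoencoder prior terms, and an
# adversarial term, in the spirit of MPIGAN.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Stand-in for one pretrained autoencoder encoding a learned image prior."""
    def __init__(self, dim=32 * 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(dim, 128), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(128, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x)).view_as(x)

def generator_loss(x_rec, x_true, autoencoders, disc_logits_fake,
                   lambda_prior=0.1, lambda_adv=0.01):
    """Reconstruction + multiple autoencoder prior terms + adversarial term."""
    mse = nn.functional.mse_loss(x_rec, x_true)
    # Each frozen autoencoder acts as a regularizer: a reconstruction lying on
    # the learned image manifold should pass through it nearly unchanged.
    prior = sum(nn.functional.mse_loss(ae(x_rec), x_rec) for ae in autoencoders)
    # Standard non-saturating GAN term: push the discriminator toward "real".
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    return mse + lambda_prior * prior + lambda_adv * adv

if __name__ == "__main__":
    x_true = torch.rand(4, 1, 32, 32)            # ground-truth images
    x_rec = torch.rand(4, 1, 32, 32)             # generator output (placeholder)
    aes = [TinyAutoencoder() for _ in range(3)]  # three pretrained priors, frozen
    for ae in aes:
        ae.requires_grad_(False)
    disc_logits = torch.randn(4, 1)              # discriminator logits on x_rec
    print(generator_loss(x_rec, x_true, aes, disc_logits))
```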
Pages: 10
Related Papers
50 records in total
  • [1] Single Pixel Imaging Based on Generative Adversarial Network Optimized With Multiple Prior Information
    Sun, Shida
    Yan, Qiurong
    Zheng, Yongjian
    Wei, Zhen
    Lin, Jian
    Cai, Yilin
    IEEE Photonics Journal, 2022, 14 (04)
  • [2] Generative adversarial network-based single-pixel imaging
    Zhao, Ming
    Li, Fengqiang
    Huo, Fengyue
    Tian, Zhiming
    JOURNAL OF THE SOCIETY FOR INFORMATION DISPLAY, 2022, 30 (08) : 648 - 656
  • [3] Single Pixel Imaging Based on Multiple Prior Deep Unfolding Network
    Zou, Quan
    Yan, Qiurong
    Dai, Qianling
    Wang, Ao
    Yang, Bo
    Li, Yi
    Yan, Jinwei
    IEEE PHOTONICS JOURNAL, 2024, 16 (04):
  • [4] A demosaicing method for compressive color single-pixel imaging based on a generative adversarial network
    Qu, Gang
    Meng, Xiangfeng
    Yin, Yongkai
    Yang, Xiulun
    OPTICS AND LASERS IN ENGINEERING, 2022, 155
  • [5] Adaptive Coarse-to-Fine Single Pixel Imaging With Generative Adversarial Network Based Reconstruction
    Woo, Bing Hong
    Tham, Mau-Luen
    Chua, Sing Yee
    IEEE ACCESS, 2023, 11 : 31024 - 31035
  • [6] Optimizing the quality of Fourier single-pixel imaging via generative adversarial network
    Hu, Yangdi
    Cheng, Zhengdong
    Fan, Xiaochun
    Liang, Zhenyu
    Zhai, Xiang
    OPTIK, 2021, 227
  • [7] Generative adversarial network with the discriminator using measurements as an auxiliary input for single-pixel imaging
    Dai, Qianling
    Yan, Qiurong
    Zou, Quan
    Li, Yi
    Yan, Jinwei
    OPTICS COMMUNICATIONS, 2024, 560
  • [8] VGenNet: Variable Generative Prior Enhanced Single Pixel Imaging
    Zhang, Xiangyu
    Deng, Chenjin
    Wang, Chenglong
    Wang, Fei
    Situ, Guohai
    ACS PHOTONICS, 2023, 10 (07) : 2363 - 2373
  • [9] DEEP LEARNING RECONSTRUCTION FOR SINGLE PIXEL IMAGING WITH GENERATIVE ADVERSARIAL NETWORKS
    Guven, Baturalp
    Gungor, Alper
    Bahceci, M. Umut
    Cukur, Tolga
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 2060 - 2064
  • [10] Image Superresolution in Single-Pixel Imaging with Generative Adversarial Networks
    D. V. Babukhin
    A. A. Reutov
    D. V. Sych
    Bulletin of the Lebedev Physics Institute, 2025, 52 (1) : 14 - 21