CS-MRI Reconstruction Using an Improved GAN with Dilated Residual Networks and Channel Attention Mechanism

Cited: 3
Authors
Li, Xia [1]
Zhang, Hui [1]
Yang, Hao [1]
Li, Tie-Qiang [2,3]
Affiliations
[1] China Jiliang Univ, Coll Informat Engn, Hangzhou 310018, Peoples R China
[2] Karolinska Inst, Dept Clin Sci Intervent & Technol, S-14186 Stockholm, Sweden
[3] Karolinska Univ Hosp, Dept Med Radiat & Nucl Med, S-17176 Stockholm, Sweden
Funding
Natural Science Foundation of Zhejiang Province
Keywords
compressed sensing MRI; GAN; U-net; dilated residual blocks; channel attention mechanism; GENERATIVE ADVERSARIAL NETWORK;
DOI
10.3390/s23187685
CLC number
O65 [Analytical Chemistry];
Discipline classification codes
070302; 081704;
Abstract
Compressed sensing (CS) MRI has shown great potential for reducing acquisition time. Deep learning techniques, in particular generative adversarial networks (GANs), have emerged as powerful tools for fast CS-MRI reconstruction. However, as deep learning reconstruction models grow more complex, reconstruction time increases and convergence becomes harder to achieve. In this study, we present a novel GAN-based model that delivers superior performance without escalating model complexity. The generator, built on the U-net architecture, incorporates dilated residual (DR) blocks, which expand the network's receptive field without increasing the number of parameters or the computational load. Each stage of the downsampling path contains a DR block, with the dilation rate adjusted according to the depth of the layer. In addition, a channel attention mechanism (CAM) reweights feature channels and suppresses background noise so that the network focuses on the most informative features; it combines global maximum and average pooling to refine the channel attention weights. We conducted comprehensive experiments with the proposed model on public-domain MRI datasets of the human brain. Ablation studies confirmed the efficacy of the modified modules. Incorporating the DR blocks and the CAM raised the peak signal-to-noise ratio (PSNR) of the reconstructed images by about 1.2 and 0.8 dB on average, respectively, even at 10x CS acceleration. Compared with other relevant models, the proposed model achieves excellent stability and outperforms most of the compared networks in terms of PSNR and structural similarity index (SSIM). Relative to U-net, DR-CAM-GAN's average gains in SSIM and PSNR were 14% and 15%, respectively, and its mean squared error (MSE) was reduced by a factor of two to seven. The model offers a promising pathway toward more efficient, higher-quality CS-MRI reconstruction.
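To illustrate the two modules the abstract describes, the sketch below shows one way a dilated residual block with a depth-dependent dilation rate and a channel attention module that fuses global average and maximum pooling could be implemented. This is a minimal PyTorch-based sketch, not the authors' published code; the channel count, dilation schedule, and reduction ratio are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' exact architecture) of a
# dilated residual (DR) block and a channel attention module (CAM) that
# combines global average and max pooling, as described in the abstract.
import torch
import torch.nn as nn


class DilatedResidualBlock(nn.Module):
    """Two 3x3 dilated convolutions with an identity skip connection.

    Increasing the dilation rate enlarges the receptive field without adding
    parameters, since the kernel size and channel count stay fixed.
    """

    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # residual connection


class ChannelAttention(nn.Module):
    """Channel attention combining global average and max pooling (CBAM-style)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # Shared two-layer bottleneck applied to both pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        attn = self.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * attn  # reweight channels, suppressing uninformative ones


if __name__ == "__main__":
    # Example: a dilation rate that grows with encoder depth (e.g. 1, 2, 4, 8)
    # is an assumed schedule for illustration only.
    x = torch.randn(1, 64, 128, 128)
    y = ChannelAttention(64)(DilatedResidualBlock(64, dilation=2)(x))
    print(y.shape)  # torch.Size([1, 64, 128, 128])
```

Because the dilated convolutions keep the kernel size and channel count fixed, the receptive field grows with the dilation rate at no extra parameter cost, which matches the motivation stated in the abstract.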
Pages: 16
Related papers
45 records in total
  • [31] Photovoltaic power station identification using refined encoder-decoder network with channel attention and chained residual dilated convolutions
    Jie, Yongshi
    Yue, Anzhi
    Liu, Shunxi
    Huang, Qingqing
    Chen, Jingbo
    Meng, Yu
    Deng, Yupeng
    Yu, Zongyang
    JOURNAL OF APPLIED REMOTE SENSING, 2020, 14 (01)
  • [32] Novel load balancing mechanism for cloud networks using dilated and attention-based federated learning with Coati Optimization
    Atul B. Kathole
    Viomesh Kumar Singh
    Ankur Goyal
    Shiv Kant
    Amit Sadanand Savyanavar
    Swapnaja Amol Ubale
    Prince Jain
    Mohammad Tariqul Islam
SCIENTIFIC REPORTS, 15 (1)
  • [33] ICA-CNN: Gesture Recognition Using CNN With Improved Channel Attention Mechanism and Multimodal Signals
    Shen, Shu
    Wang, Xuebin
    Wu, Mengshi
    Gu, Kang
    Chen, Xinrong
    Geng, Xinyu
    IEEE SENSORS JOURNAL, 2023, 23 (04) : 4052 - 4059
  • [34] SARA-GAN: Self-Attention and Relative Average Discriminator Based Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction
    Yuan, Zhenmou
    Jiang, Mingfeng
    Wang, Yaming
    Wei, Bo
    Li, Yongming
    Wang, Pin
    Menpes-Smith, Wade
    Niu, Zhangming
    Yang, Guang
    FRONTIERS IN NEUROINFORMATICS, 2020, 14
  • [35] Improved Memristive Binarized Neural Networks Using Transformer_DCBNN Architecture with CBAM Attention Mechanism
    Guo, Yi
    Duan, Shu-Kai
    Wang, Li-Dan
    ADVANCES IN NEURAL NETWORKS-ISNN 2024, 2024, 14827 : 431 - 439
  • [36] Unpaired Stain Style Transfer Using Invertible Neural Networks Based on Channel Attention and Long-Range Residual
    Lan, Junlin
    Cai, Shaojin
    Xue, Yuyang
    Gao, Qinquan
    Du, Min
    Zhang, Hejun
    Wu, Zhida
    Deng, Yanglin
    Huang, Yuxiu
    Tong, Tong
    Chen, Gang
    IEEE ACCESS, 2021, 9 : 11282 - 11295
  • [37] Limited View Tomographic Reconstruction Using a Cascaded Residual Dense Spatial-Channel Attention Network With Projection Data Fidelity Layer
    Zhou, Bo
    Zhou, S. Kevin
    Duncan, James S.
    Liu, Chi
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2021, 40 (07) : 1792 - 1804
  • [38] Super-Resolution Reconstruction of 3T-Like Images From 0.35T MRI Using a Hybrid Attention Residual Network
    Jiang, Jialiang
    Qi, Fulang
    Du, Huiyu
    Xu, Jianan
    Zhou, Yufu
    Gao, Dayong
    Qiu, Bensheng
    IEEE ACCESS, 2022, 10 : 32810 - 32821
  • [39] Artificial Intelligence-Based Brain Tumor Segmentation Using Adaptive Hybrid CNN and Classification by Multi-Scale Dilated MobileNet with Attention Mechanism for MRI Images
    Subhashini, K.
    Thangakumar, J.
    INTERNATIONAL JOURNAL OF IMAGE AND GRAPHICS, 2024,
  • [40] One-shot multi-object tracking using CNN-based networks with spatial-channel attention mechanism
    Li, Guofa
    Chen, Xin
    Li, Mingjun
    Li, Wenbo
    Li, Shen
    Guo, Gang
    Wang, Huaizhi
    Deng, Hao
    OPTICS AND LASER TECHNOLOGY, 2022, 153