Deep learning for improving ZTE MRI images in free breathing

Cited by: 3
Authors
Papp, D. [1 ]
Castillo, T. Jose M. [1 ]
Wielopolski, P. A. [1 ]
Ciet, P. [1 ,2 ]
Veenland, Jifke F. [1 ,3 ]
Kotek, G. [1 ]
Hernandez-Tamames, J. [1 ]
Affiliations
[1] Erasmus MC, Dept Radiol & Nucl Med, Rotterdam, Netherlands
[2] Sophia Childrens Univ Hosp, Erasmus Med Ctr, Dept Pediat Pulmonol & Allergol, Rotterdam, Netherlands
[3] Erasmus MC, Dept Med Informat, Rotterdam, Netherlands
Keywords
Magnetic resonance imaging; Zero TE; Lung; Fully convolutional neural networks; ULTRASHORT ECHO TIME; COMPUTED-TOMOGRAPHY; LUNG; AIRWAYS;
DOI
10.1016/j.mri.2023.01.019
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline classification code
1002 ; 100207 ; 1009 ;
Abstract
Introduction: Despite growing interest in lung MRI, its broader use in a clinical setting remains challenging. Several factors limit the image quality of lung MRI, such as the extremely short T2 and T2* relaxation times of the lung parenchyma and cardiac and breathing motion. Zero Echo Time (ZTE) sequences are sensitive to short-T2 and short-T2* species, paving the way to improved "CT-like" MR images. To overcome the motion limitation, a retrospective respiratory-gated version of ZTE (ZTE4D), which can obtain images in 16 different respiratory phases during free breathing, was developed. Initial ZTE4D results have shown motion artifacts. To improve image quality, deep learning with fully convolutional neural networks (FCNNs) has been proposed. CNNs have been widely used in MR imaging, but they have not yet been applied to improving free-breathing lung imaging. Our proposed pipeline facilitates clinical work with patients who have difficulty with, or are unable to perform, breath-holding, or when gating techniques are inefficient because of an irregular respiratory pace.

Materials and methods: After IRB approval and signed informed consent, free-breathing ZTE4D and breath-hold ZTE3D images were obtained from 10 healthy volunteers on a 1.5 T MRI scanner (GE Healthcare Signa Artist, Waukesha, WI). The ZTE4D acquisition captured all 16 phases of the respiratory cycle. For the ZTE breath-hold acquisition, the subjects were instructed to hold their breath at 5 different inflation levels ranging from full expiration to full inspiration. The training dataset, consisting of ZTE breath-hold (ZTE-BH) images of 10 volunteers, was split into 8 volunteers for training, 1 for validation and 1 for testing. In total, 800 ZTE breath-hold images were constructed by adding Gaussian noise and applying image transformations (translations, rotations) to imitate the effect of motion during the respiratory cycle and the blurring from varying diaphragm positions, as they appear in ZTE4D. These sets were used to train an FCNN model to remove the artificially added noise and transformations from the ZTE breath-hold images and reproduce the original quality of the images. Mean squared error (MSE) was used as the loss function. The remaining 2 healthy volunteers' ZTE4D images were used to test the model and qualitatively assess the predicted images.

Results: Our model obtained an MSE of 0.09% on the training set and 0.135% on the validation set. When tested on unseen data, the images predicted by our model improved the contrast of the pulmonary parenchyma against air-filled regions (airways or air trapping). The SNR of the lung parenchyma improved by a factor of 1.98, and the CNR between lung and blood, which indicates the visibility of the intrapulmonary vessels, improved by 4.2%. Our network was able to reduce ghosting artifacts, such as those from diaphragm movement, reduce blurring, and enhance image quality.

Discussion: Free-breathing 3D and 4D lung imaging with MRI is feasible; however, its quality is not yet acceptable for clinical use. It can be improved with deep learning techniques. Our FCNN improves the visual image quality and reduces artifacts of free-breathing ZTE4D. Our main goal was to remove ghosting artifacts from the ZTE4D images in order to improve their diagnostic quality. On visual inspection, the network produced sharper diaphragm contours and less blurring of the anatomical structures and lung parenchyma.
Conclusion: With FCNNs, the image quality of free-breathing ZTE4D lung MRI can be improved, enabling better visualization of the lung parenchyma in different respiratory phases.
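
The following is a minimal sketch, not the authors' code, of the training setup described in the Materials and methods: breath-hold ZTE slices are synthetically degraded with Gaussian noise, small random translations/rotations and blur, and a small fully convolutional network is trained with an MSE loss to map the degraded slice back to the clean one. The network depth and width, noise level, transform ranges and image size are illustrative assumptions.

# Hedged sketch, assuming a PyTorch implementation; hyperparameters are illustrative.
import random
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def degrade(clean: torch.Tensor) -> torch.Tensor:
    """Simulate motion-related degradation of a clean breath-hold slice (1 x H x W, values in [0, 1])."""
    angle = random.uniform(-3.0, 3.0)                     # small rotation in degrees (assumed range)
    shift = [random.randint(-4, 4), random.randint(-4, 4)]  # small translation in pixels (assumed range)
    x = TF.affine(clean, angle=angle, translate=shift, scale=1.0, shear=[0.0])
    x = TF.gaussian_blur(x, kernel_size=5)                # blur from varying diaphragm positions
    x = x + 0.05 * torch.randn_like(x)                    # additive Gaussian noise (assumed sigma)
    return x.clamp(0.0, 1.0)

class SmallFCNN(nn.Module):
    """Minimal fully convolutional restoration network (illustrative architecture)."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)   # predict the restored image directly

model = SmallFCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()        # MSE loss, as stated in the abstract

# Toy loop on random stand-in data; the paper used 800 degraded ZTE breath-hold slices.
clean_batch = torch.rand(4, 1, 128, 128)
for step in range(10):
    degraded = torch.stack([degrade(img) for img in clean_batch])
    loss = loss_fn(model(degraded), clean_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
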
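The Results report an SNR gain of 1.98 for the lung parenchyma and a 4.2% CNR improvement between lung and blood. Below is a hedged sketch of ROI-based SNR/CNR measurements of that general kind; the ROI masks and the background-noise definition are assumptions for illustration, not the authors' measurement protocol.

# Hedged sketch of ROI-based SNR / CNR estimates (NumPy); masks are hypothetical.
import numpy as np

def snr(image: np.ndarray, roi: np.ndarray, background: np.ndarray) -> float:
    """Mean signal in the ROI divided by the standard deviation of the background."""
    return image[roi].mean() / image[background].std()

def cnr(image: np.ndarray, roi_a: np.ndarray, roi_b: np.ndarray, background: np.ndarray) -> float:
    """Absolute mean-signal difference between two ROIs, normalised by background noise."""
    return abs(image[roi_a].mean() - image[roi_b].mean()) / image[background].std()

# Usage with hypothetical boolean masks drawn on the same slice:
# snr_gain = snr(predicted, lung_mask, air_mask) / snr(zte4d_input, lung_mask, air_mask)
# cnr_gain = cnr(predicted, lung_mask, vessel_mask, air_mask) / cnr(zte4d_input, lung_mask, vessel_mask, air_mask)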
Pages: 97-104
Number of pages: 8