Deep learning for improving ZTE MRI images in free breathing

Citations: 3
Authors:
Papp, D. [1 ]
Castillo, T. Jose M. [1 ]
Wielopolski, P. A. [1 ]
Ciet, P. [1 ,2 ]
Veenland, Jifke F. [1 ,3 ]
Kotek, G. [1 ]
Hernandez-Tamames, J. [1 ]
Affiliations:
[1] Erasmus MC, Dept Radiol & Nucl Med, Rotterdam, Netherlands
[2] Sophia Childrens Univ Hosp, Erasmus Med Ctr, Dept Pediat Pulmonol & Allergol, Rotterdam, Netherlands
[3] Erasmus MC, Dept Med Informat, Rotterdam, Netherlands
Keywords:
Magnetic resonance imaging; Zero TE; Lung; Fully convolutional neural networks; Ultrashort echo time; Computed tomography; Airways
DOI:
10.1016/j.mri.2023.01.019
Chinese Library Classification: R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes: 1002; 100207; 1009
Abstract
Introduction: Despite growing interest in lung MRI, its broader use in a clinical setting remains challenging. Several factors limit the image quality of lung MRI, such as the extremely short T2 and T2* relaxation times of the lung parenchyma and cardiac and breathing motion. Zero Echo Time (ZTE) sequences are sensitive to short-T2 and short-T2* species, paving the way to improved "CT-like" MR images. To overcome the motion limitation, a retrospective respiratory-gated version of ZTE (ZTE4D), which can obtain images in 16 different respiratory phases during free breathing, was developed. Initial ZTE4D results have shown motion artifacts. To improve image quality, deep learning with fully convolutional neural networks (FCNNs) has been proposed. CNNs have been widely used in MR imaging, but they have not yet been applied to improving free-breathing lung imaging. Our proposed pipeline facilitates clinical work with patients who have difficulty performing, or are unable to perform, breath-holding, or when gating techniques are inefficient due to an irregular respiratory pace.

Materials and methods: After IRB approval and signed informed consent, free-breathing ZTE4D and breath-hold ZTE3D images were obtained from 10 healthy volunteers on a 1.5 T MRI scanner (GE Healthcare Signa Artist, Waukesha, WI). The ZTE4D acquisition captured all 16 phases of the respiratory cycle. For the ZTE breath-hold scans, the subjects were instructed to hold their breath at 5 different inflation levels ranging from full expiration to full inspiration. The training dataset, consisting of ZTE breath-hold images of 10 volunteers, was split into 8 volunteers for training, 1 for validation and 1 for testing. In total, 800 ZTE breath-hold images were constructed by adding Gaussian noise and applying image transformations (translations, rotations) to imitate the effect of motion in the respiratory cycle, and blurring from varying diaphragm positions, as it appears in ZTE4D.
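The synthetic degradation described above (additive Gaussian noise plus small rigid motions) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the noise level, shift range, and rotation range are assumed values, and the phantom image is a toy stand-in for a breath-hold slice.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def degrade(img, rng, sigma=0.05, max_shift=4, max_angle=5.0):
    """Imitate free-breathing artifacts on a breath-hold slice:
    a small random rotation and translation (motion between
    respiratory phases) plus additive Gaussian noise.
    All parameter values here are illustrative assumptions."""
    angle = rng.uniform(-max_angle, max_angle)
    dy, dx = rng.uniform(-max_shift, max_shift, size=2)
    out = rotate(img, angle, reshape=False, mode="nearest")
    out = shift(out, (dy, dx), mode="nearest")
    return out + rng.normal(0.0, sigma, img.shape)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0          # toy "parenchyma" phantom
noisy = degrade(clean, rng)        # degraded training input
```

Pairs of (degraded, original) images produced this way form the supervised training set: the network sees the degraded image and is asked to reproduce the clean breath-hold image.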
These sets were used to train an FCNN model to remove the artificially added noise and transformations from the ZTE breath-hold images and reproduce the original quality of the images. Mean squared error (MSE) was used as the loss function. The remaining 2 healthy volunteers' ZTE4D images were used to test the model and qualitatively assess the predicted images.

Results: Our model obtained an MSE of 0.09% on the training set and 0.135% on the validation set. When tested on unseen data, the predicted images from our model improved the contrast of the pulmonary parenchyma against air-filled regions (airways or air trapping). The SNR of the lung parenchyma was improved by a factor of 1.98, and the lung-blood CNR, which indicates the visibility of the intrapulmonary vessels, was improved by 4.2%. Our network was able to reduce ghosting artifacts, such as those from diaphragm movement, reduce blurring, and enhance image quality.

Discussion: Free-breathing 3D and 4D lung imaging with MRI is feasible; however, its quality is not yet acceptable for clinical use. This can be improved with deep learning techniques. Our FCNN improves visual image quality and reduces artifacts in free-breathing ZTE4D. Our main goal was to remove ghosting artifacts from the ZTE4D images in order to improve their diagnostic quality. On visual inspection, the network yielded sharper diaphragm contours and less blurring of the anatomical structures and lung parenchyma.

Conclusion: With FCNNs, the image quality of free-breathing ZTE4D lung MRI can be improved, enabling better visualization of the lung parenchyma in different respiratory phases.
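The SNR and CNR figures quoted above are ROI-based image-quality metrics. A minimal sketch of one common convention (mean tissue signal divided by the standard deviation of a background air region) is shown below with toy intensity samples; the exact ROI definitions used by the authors are not given in the abstract.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    # ROI-based SNR: mean signal over the standard deviation
    # of a background (air) region -- one common convention.
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(roi_a, roi_b, noise_roi):
    # Contrast-to-noise ratio between two tissues,
    # e.g. lung parenchyma vs. blood.
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi))

# Toy intensity samples standing in for voxel values inside each ROI.
rng = np.random.default_rng(1)
lung = 100 + rng.normal(0, 5, 1000)
blood = 160 + rng.normal(0, 5, 1000)
air = rng.normal(0, 5, 1000)

lung_snr = snr(lung, air)
lung_blood_cnr = cnr(lung, blood, air)
```

Comparing such metrics between the raw ZTE4D images and the network's predictions gives the improvement factors reported in the Results.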
Pages: 97-104 (8 pages)