Patch-based generative adversarial neural network models for head and neck MR-only planning

Cited by: 72
Authors
Klages, Peter [1]
Benslimane, Ilyes [1]
Riyahi, Sadegh [1]
Jiang, Jue [1]
Hunt, Margie [1]
Deasy, Joseph O. [1]
Veeraraghavan, Harini [1]
Tyagi, Neelam [1]
Affiliation
[1] Memorial Sloan Kettering Cancer Center, Department of Medical Physics, 1275 York Ave, New York, NY 10021, USA
Keywords
conditional generative adversarial networks (cGAN); CycleGAN; generative adversarial networks (GAN); MR-guided radiotherapy; pix2pix; synthetic CT generation
DOI
10.1002/mp.13927
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Classification Codes
1002; 100207; 1009
Abstract
Purpose: To evaluate pix2pix and CycleGAN and to assess the effects of multiple combination strategies on accuracy for patch-based synthetic computed tomography (sCT) generation for magnetic resonance (MR)-only treatment planning in head and neck (HN) cancer patients.

Materials and methods: Twenty-three deformably registered pairs of CT and mDixon FFE MR datasets from HN cancer patients treated at our institution were retrospectively analyzed to evaluate patch-based sCT accuracy via the pix2pix and CycleGAN models. To test the effects of overlapping sCT patches on the estimations, we (a) trained the models for three orthogonal views to observe the effects of spatial context, (b) increased the effective set size with per-epoch data augmentation, and (c) evaluated three different approaches for combining overlapping Hounsfield unit (HU) estimations under varied patch-overlap parameters. Twelve of the twenty-three cases corresponded to a curated dataset previously used for atlas-based sCT generation and were used for training with leave-two-out cross-validation. Eight cases were used for independent testing and included previously unseen image features such as fused vertebrae, a small protruding bone, and tumors large enough to deform normal body contours. We analyzed the impact of MR image preprocessing, including histogram standardization and intensity clipping, on sCT generation accuracy. Effects of mDixon contrast differences (in-phase vs water) were tested with three additional cases. The sCT generation accuracy was evaluated using the mean absolute error (MAE) and mean error (ME) in HU between the plan CT and sCT images. Dosimetric accuracy was evaluated for all clinically relevant structures in the independent testing set, and digitally reconstructed radiographs (DRRs) were evaluated with respect to the plan CT images.

Results: The cross-validated MAEs for the whole-HN region using pix2pix and CycleGAN were 66.9 +/- 7.3 and 82.3 +/- 6.4 HU, respectively. On the independent testing set, which contained additional artifacts and previously unseen image features, whole-HN region MAEs were 94.0 +/- 10.6 and 102.9 +/- 14.7 HU for pix2pix and CycleGAN, respectively. For patients with different tissue contrast (water mDixon MR images), the MAEs increased to 122.1 +/- 6.3 and 132.8 +/- 5.5 HU for pix2pix and CycleGAN, respectively. Our results suggest that combining overlapping sCT estimations at each voxel reduced both MAE and ME compared with single-view, non-overlapping patch results. Absolute percent mean/max dose errors were 2% or less for the PTV and all clinically relevant structures in the independent testing set, including structures with image artifacts. Quantitative DRR comparison between planning CTs and sCTs showed agreement of bony region positions.

Conclusions: The dosimetric and MAE-based accuracy, along with the similarity between DRRs from sCTs, indicate that pix2pix and CycleGAN are promising methods for MR-only treatment planning for HN cancer. Our investigation of overlapping patch-based HU estimations also indicates that combining the transformation estimations of overlapping patches is a potential way to reduce generation errors while also providing a tool to estimate the aleatoric uncertainty of the MR-to-CT model transformation. However, because of the small patient sample sizes, further studies are required.
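The abstract's core evaluation pipeline can be sketched in a few lines. Below is a minimal numpy illustration (not the authors' code) of two of the steps it describes: combining overlapping patch-wise HU estimations at each voxel (shown here with simple averaging, one plausible instance of the combination strategies mentioned; the paper compares three) and computing MAE/ME between a plan CT and an sCT. The function names and the 3D-array conventions are hypothetical.

```python
import numpy as np

def combine_overlapping_patches(patches, positions, volume_shape):
    """Average overlapping patch-wise sCT estimates at each voxel.

    patches   : list of 3D arrays of predicted HU values
    positions : list of (z, y, x) corner indices, one per patch
    """
    acc = np.zeros(volume_shape, dtype=np.float64)    # summed HU estimates
    count = np.zeros(volume_shape, dtype=np.float64)  # estimates per voxel
    for patch, (z, y, x) in zip(patches, positions):
        dz, dy, dx = patch.shape
        acc[z:z + dz, y:y + dy, x:x + dx] += patch
        count[z:z + dz, y:y + dy, x:x + dx] += 1
    # Mean HU at each voxel; voxels covered by no patch stay 0
    return acc / np.maximum(count, 1)

def mae_me(sct, plan_ct, mask=None):
    """Mean absolute error and mean error in HU, optionally over a body mask."""
    diff = sct - plan_ct
    if mask is not None:
        diff = diff[mask]
    return np.abs(diff).mean(), diff.mean()
```

The per-voxel spread of the overlapping estimates (e.g., their standard deviation rather than just their mean) is what the abstract suggests could serve as an estimate of the model's aleatoric transformation uncertainty.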
Pages: 626-642 (17 pages)