Information fusion for fully automated segmentation of head and neck tumors from PET and CT images

Cited by: 5
Authors
Shiri, Isaac [1 ]
Amini, Mehdi [1 ]
Yousefirizi, Fereshteh [2 ]
Sadr, Alireza Vafaei [3 ,4 ]
Hajianfar, Ghasem [1 ]
Salimi, Yazdan [1 ]
Mansouri, Zahra [1 ]
Jenabi, Elnaz [5 ]
Maghsudi, Mehdi [6 ]
Mainta, Ismini [1 ]
Becker, Minerva [7 ]
Rahmim, Arman [2 ,8 ]
Zaidi, Habib [1 ,9 ,10 ,11 ,12 ]
Affiliations
[1] Geneva Univ Hosp, Div Nucl Med & Mol Imaging, Geneva, Switzerland
[2] BC Canc Res Inst, Dept Integrat Oncol, Vancouver, BC, Canada
[3] RWTH Aachen Univ Hosp, Inst Pathol, Aachen, Germany
[4] Penn State Univ, Coll Med, Dept Publ Hlth Sci, Hershey, PA USA
[5] Univ Tehran Med Sci, Shariati Hosp, Res Ctr Nucl Med, Tehran, Iran
[6] Iran Univ Med Sci, Rajaie Cardiovasc Med & Res Ctr, Tehran, Iran
[7] Geneva Univ Hosp, Serv Radiol, Geneva, Switzerland
[8] Univ British Columbia, Dept Radiol & Phys, Vancouver, BC, Canada
[9] Univ Geneva, Geneva Univ Neuroctr, Geneva, Switzerland
[10] Univ Groningen, Univ Med Ctr Groningen, Dept Nucl Med & Mol Imaging, Groningen, Netherlands
[11] Univ Southern Denmark, Dept Nucl Med, Odense, Denmark
[12] Geneva Univ Hosp, Div Nucl Med & Mol Imaging, CH-1211 Geneva, Switzerland
Funding
Natural Sciences and Engineering Research Council of Canada; Swiss National Science Foundation
Keywords
deep learning; fusion; head and neck cancer; PET; CT; segmentation; VISIBLE IMAGES; FDG-PET; CLASSIFICATION; PERFORMANCE;
DOI
10.1002/mp.16615
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Code
1002; 100207; 1009
Abstract
Background: PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms that exploit the available multi-modal information are still lacking.
Purpose: Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) using conventional, deep learning (DL), and output-level voting-based fusions.
Methods: The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline, with three different input-, layer-, and decision-level information fusions. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusion). The different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% from each center), was used for final result reporting. Different standard segmentation metrics and conventional PET metrics, such as SUV, were calculated.
Results: Among single modalities, PET had a reasonable performance with a Dice score of 0.77 ± 0.09, whereas CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range [0.76-0.81], with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian.
Conclusion: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. Both conventional image-level and DL fusions achieve competitive results, while output-level fusion using majority voting over several algorithms yields statistically significant improvements in HNC segmentation.
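The output-level (voting-based) fusion and Dice metric described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy example, not the authors' implementation: the helper names `dice_score` and `majority_vote` are ours, and the 1D arrays stand in for 3D segmentation masks.

```python
import numpy as np

def dice_score(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def majority_vote(masks):
    """Output-level fusion: a voxel is foreground when more than
    half of the candidate segmentations mark it as foreground."""
    stack = np.stack([np.asarray(m, bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Toy 1D "masks" from three hypothetical segmentation models
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
c = np.array([1, 1, 1, 0])
fused = majority_vote([a, b, c])  # votes per voxel: 3, 2, 2, 0
ref = np.array([1, 1, 1, 0])      # hypothetical ground-truth GTV
print(dice_score(fused, ref))     # → 1.0
```

Here the fused mask agrees with the reference even though no single candidate does, which is the intuition behind the majority-voting fusions (Majority_ImgFus, Majority_All, Majority_Fast) outperforming the individual models.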
Pages: 319-333 (15 pages)