Multimodal image translation via deep learning inference model trained in video domain

Cited by: 1
Authors
Fan, Jiawei [1 ,2 ,3 ]
Liu, Zhiqiang [4 ]
Yang, Dong [1 ,2 ,3 ]
Qiao, Jian [1 ,2 ,3 ]
Zhao, Jun [1 ,2 ,3 ]
Wang, Jiazhou [1 ,2 ,3 ]
Hu, Weigang [1 ,2 ,3 ]
Affiliations
[1] Fudan Univ, Dept Radiat Oncol, Shanghai Canc Ctr, Shanghai 200032, Peoples R China
[2] Fudan Univ, Shanghai Med Coll, Dept Oncol, Shanghai 200032, Peoples R China
[3] Shanghai Key Lab Radiat Oncol, Shanghai 200032, Peoples R China
[4] Chinese Acad Med Sci & Peking Union Med Coll, Canc Hosp, Natl Clin Res Ctr Canc, Natl Canc Ctr, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Video domain; Deep learning; Medical image translation; GAN;
DOI
10.1186/s12880-022-00854-x
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline Classification Codes
1002; 100207; 1009;
Abstract
Background: Current medical image translation is implemented in the image domain. Since medical image acquisition is essentially a temporally continuous process, we attempt to develop a novel image translation framework, based on deep learning trained in the video domain, for generating synthetic computed tomography (CT) images from cone-beam computed tomography (CBCT) images.
Methods: As a proof-of-concept demonstration, CBCT and CT images from 100 patients were collected to demonstrate the feasibility and reliability of the proposed framework. The CBCT and CT images were registered as paired samples and used as input data for supervised model training. A vid2vid framework based on a conditional GAN, with carefully designed generators and discriminators and a new spatio-temporal learning objective, was applied to realize CBCT-to-CT image translation in the video domain. Four evaluation metrics, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and structural similarity (SSIM), were calculated on all real and synthetic CT images from 10 new testing patients to assess model performance.
Results: The average values of the four evaluation metrics (MAE, PSNR, NCC, and SSIM) were 23.27 +/- 5.53, 32.67 +/- 1.98, 0.99 +/- 0.0059, and 0.97 +/- 0.028, respectively. Most pixel-wise Hounsfield unit (HU) differences between the real and synthetic CT images were within 50. The synthetic CT images show good agreement with the real CT images, and their image quality is improved, with lower noise and fewer artifacts than the CBCT images.
Conclusions: We developed a deep-learning-based approach that addresses the medical image translation problem in the video domain. Although the feasibility and reliability of the proposed framework were demonstrated with CBCT-to-CT image translation, it can easily be extended to other types of medical images. The current results suggest that this is a promising method that may pave a new path for medical image translation research.
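The abstract reports four similarity metrics between registered real and synthetic CT. The snippet below is a minimal, hedged sketch (not taken from the paper) of how MAE, PSNR, NCC, and a simplified global SSIM can be computed for a pair of registered 2D slices with NumPy; the array names, the assumed HU dynamic range, and the single-window SSIM are illustrative assumptions, and published results typically rely on a sliding-window SSIM such as skimage.metrics.structural_similarity.

```python
# Sketch only: the HU data range and the global (single-window) SSIM are assumptions,
# not the evaluation code used in the paper.
import numpy as np

def mae(real, synth):
    # Mean absolute error in HU.
    return np.mean(np.abs(real - synth))

def psnr(real, synth, data_range=4096.0):  # assumed CT dynamic range in HU
    mse = np.mean((real - synth) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(real, synth):
    # Zero-mean normalized cross-correlation.
    r = real - real.mean()
    s = synth - synth.mean()
    return np.sum(r * s) / (np.sqrt(np.sum(r ** 2)) * np.sqrt(np.sum(s ** 2)))

def ssim_global(real, synth, data_range=4096.0):
    # Simplified single-window SSIM; a sliding-window implementation
    # (e.g. skimage.metrics.structural_similarity) is the usual choice.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_r, mu_s = real.mean(), synth.mean()
    var_r, var_s = real.var(), synth.var()
    cov = np.mean((real - mu_r) * (synth - mu_s))
    return ((2 * mu_r * mu_s + c1) * (2 * cov + c2)) / (
        (mu_r ** 2 + mu_s ** 2 + c1) * (var_r + var_s + c2)
    )

# Placeholder arrays standing in for a registered real CT / synthetic CT slice pair.
real_ct = np.random.uniform(-1000, 3000, size=(128, 128))
synth_ct = real_ct + np.random.normal(0, 20, size=real_ct.shape)
print(mae(real_ct, synth_ct), psnr(real_ct, synth_ct),
      ncc(real_ct, synth_ct), ssim_global(real_ct, synth_ct))
```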
Pages: 9