Deep Transfer Learning-Based Multi-Modal Digital Twins for Enhancement and Diagnostic Analysis of Brain MRI Image

Cited by: 4
Authors
Wang, Jinxia [1 ]
Qiao, Liang [2 ]
Lv, Haibin [3 ]
Lv, Zhihan [4 ]
Affiliations
[1] Shaanxi Fash Engn Univ, Sch Art & Design, Xian 712046, Shaanxi, Peoples R China
[2] Qingdao Univ, Coll Comp Sci & Technol, Qingdao 266071, Shandong, Peoples R China
[3] Minist Nat Resources, North Sea Bur, North China Sea Offshore Engn Survey Inst, Qingdao 266061, Shandong, Peoples R China
[4] Uppsala Univ, Fac Arts, S-75105 Uppsala, Sweden
Keywords
Medical diagnostic imaging; Magnetic resonance imaging; Diseases; Superresolution; Predictive models; Convolutional neural networks; Mathematical models; Digital twins; deep transfer learning; multimodal image fusion; MRI image enhancement; adaptive medical image fusion; RECONSTRUCTION;
DOI
10.1109/TCBB.2022.3168189
Chinese Library Classification (CLC)
Q5 [Biochemistry];
Subject classification codes
071010 ; 081704 ;
Abstract
Objective: This work adopts deep transfer learning combined with Digital Twins (DTs) for Magnetic Resonance Imaging (MRI) medical image enhancement. Methods: After analyzing the current applications of DTs in medicine and the principles of MRI imaging, an MRI image enhancement method based on metamaterial composite technology is proposed. On the basis of deep transfer learning, an MRI super-resolution deep neural network is established. To address the complementary strengths and weaknesses of different medical imaging modalities, a multi-modal medical image fusion algorithm based on adaptive decomposition is proposed and verified experimentally. Results: Introducing a Rectified Linear Unit (ReLU) and a loss function into the deep transfer learning network yields an optimal Peak Signal-to-Noise Ratio (PSNR) of 34.11 dB and a Structural Similarity (SSIM) of 85.24%, indicating that the truthfulness and sharpness of the MRI images obtained with the added composite metasurface are greatly improved. The proposed medical image fusion algorithm achieves the highest overall score in the subjective evaluation of the six groups of fusion results. Group III scores highest for Magnetic Resonance Imaging-Positron Emission Tomography (MRI-PET) image fusion, at 4.67, close to the full score of 5. In the objective evaluation of the Group I Magnetic Resonance Imaging-Single Photon Emission Computed Tomography (MRI-SPECT) images, the Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE), and Spectral Angle Mapper (SAM) are the highest, at 39.2075, 116.688, and 0.594, respectively, and the Mutual Information (MI) is 5.8822. Conclusion: The proposed algorithm outperforms the comparison algorithms in preserving the spatial details of MRI images and the color information of SPECT images, and the other five groups achieve similar results.
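The abstract reports PSNR and SSIM as image-enhancement quality metrics. As a rough illustration only, the sketch below implements their textbook definitions in Python/NumPy; this is not code from the paper, and `ssim_global` uses a single global window rather than the sliding local windows of the standard SSIM index.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified SSIM computed over one global window (illustrative only)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Identical images give an infinite PSNR and an SSIM of 1.0; degraded reconstructions move PSNR toward the 30-40 dB range reported in the abstract and SSIM below 1.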
Pages: 2407-2419
Page count: 13
Related papers
(50 in total)
  • [1] Deep learning-based multi-modal computing with feature disentanglement for MRI image synthesis
    Fei, Yuchen
    Zhan, Bo
    Hong, Mei
    Wu, Xi
    Zhou, Jiliu
    Wang, Yan
    MEDICAL PHYSICS, 2021, 48 (07) : 3778 - 3789
  • [2] Effective deep learning-based multi-modal retrieval
    Wang, Wei
    Yang, Xiaoyan
    Ooi, Beng Chin
    Zhang, Dongxiang
    Zhuang, Yueting
    VLDB JOURNAL, 2016, 25 (01) : 79 - 101
  • [4] Applying deep learning-based multi-modal for detection of coronavirus
    Rani, Geeta
    Oza, Meet Ganpatlal
    Dhaka, Vijaypal Singh
    Pradhan, Nitesh
    Verma, Sahil
    Rodrigues, Joel J. P. C.
    MULTIMEDIA SYSTEMS, 2022, 28 (04) : 1251 - 1262
  • [6] Multi-modal haptic image recognition based on deep learning
    Han, Dong
    Nie, Hong
    Chen, Jinbao
    Chen, Meng
    Deng, Zhen
    Zhang, Jianwei
    SENSOR REVIEW, 2018, 38 (04) : 486 - 493
  • [7] Deep learning-based multi-modal approach for predicting brain radionecrosis after proton therapy
    Seetha, Sithin Thulasi
    Fontana, Giulia
    Bazani, Alessia
    Riva, Giulia
    Molinelli, Silvia
    Goodyear, Christina Amanda
    Ciccone, Lucia Pia
    Iannalfi, Alberto
    Orlandi, Ester
    RADIOTHERAPY AND ONCOLOGY, 2024, 194 : S5027 - S5030
  • [8] Multi-modal Learning-based Pre-operative Targeting in Deep Brain Stimulation Procedures
    Liu, Yuan
    Dawant, Benoit M.
    2016 3RD IEEE EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS, 2016, : 17 - 20
  • [9] Multi-Modal Deep Learning-Based Violin Bowing Action Recognition
    Liu, Bao-Yun
    Jen, Yi-Hsin
    Sun, Shih-Wei
    Su, Li
    Chang, Pao-Chi
    2020 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TAIWAN), 2020,
  • [10] Deep Learning Based Multi-modal Cardiac MR Image Segmentation
    Zheng, Rencheng
    Zhao, Xingzhong
    Zhao, Xingming
    Wang, He
    STATISTICAL ATLASES AND COMPUTATIONAL MODELS OF THE HEART: MULTI-SEQUENCE CMR SEGMENTATION, CRT-EPIGGY AND LV FULL QUANTIFICATION CHALLENGES, 2020, 12009 : 263 - 270