Deep Transfer Learning-Based Multi-Modal Digital Twins for Enhancement and Diagnostic Analysis of Brain MRI Image

Cited by: 4
Authors
Wang, Jinxia [1 ]
Qiao, Liang [2 ]
Lv, Haibin [3 ]
Lv, Zhihan [4 ]
Affiliations
[1] Shaanxi Fash Engn Univ, Sch Art & Design, Xian 712046, Shaanxi, Peoples R China
[2] Qingdao Univ, Coll Comp Sci & Technol, Qingdao 266071, Shandong, Peoples R China
[3] Minist Nat Resources, North Sea Bur, North China Sea Offshore Engn Survey Inst, Qingdao 266061, Shandong, Peoples R China
[4] Uppsala Univ, Fac Arts, S-75105 Uppsala, Sweden
Keywords
Medical diagnostic imaging; Magnetic resonance imaging; Diseases; Superresolution; Predictive models; Convolutional neural networks; Mathematical models; Digital twins; deep transfer learning; multimodal image fusion; MRI image enhancement; adaptive medical image fusion; RECONSTRUCTION;
DOI
10.1109/TCBB.2022.3168189
Chinese Library Classification (CLC): Q5 [Biochemistry]
Discipline codes: 071010; 081704
Abstract
Objective: This study aims to apply deep transfer learning combined with Digital Twins (DTs) to Magnetic Resonance Imaging (MRI) medical image enhancement. Methods: After analyzing the application status of DTs in medicine and the principles of MRI imaging, an MRI image enhancement method based on metamaterial composite technology is proposed. On the basis of deep transfer learning, an MRI super-resolution deep neural network is constructed. To address the complementary strengths and weaknesses of different medical imaging modalities, a multi-modal medical image fusion algorithm based on adaptive decomposition is proposed and verified experimentally. Results: By introducing a rectified linear unit and a loss function into the deep transfer learning network, an optimal Peak Signal-to-Noise Ratio (PSNR) of 34.11 dB is obtained, with a Structural Similarity (SSIM) of 85.24%. This indicates that adding the composite metasurface greatly improves the fidelity and sharpness of the MRI images. The proposed medical image fusion algorithm achieves the highest overall score in the subjective evaluation of the six groups of fusion results; Group III scores highest for Magnetic Resonance Imaging-Positron Emission Computed Tomography (MRI-PET) fusion, at 4.67, close to the full score of 5. In the objective evaluation of the Group I Magnetic Resonance Imaging-Single Photon Emission Computed Tomography (MRI-SPECT) images, the Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE), and Spectral Angle Mapper (SAM) are the highest, at 39.2075, 116.688, and 0.594, respectively, and the Mutual Information (MI) is 5.8822. Conclusion: The proposed algorithm outperforms the comparison algorithms in preserving the spatial details of MRI images and the color information of SPECT images, and the other five groups achieve similar results.
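As a minimal illustration of the main quality metric reported in the abstract, the NumPy sketch below computes PSNR between a reference image and a reconstruction. It is a generic definition of the metric, not the paper's evaluation code; the array names and sizes are hypothetical.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_value: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no error, PSNR is unbounded
    return 10.0 * np.log10(max_value ** 2 / mse)

# Hypothetical example: an 8-bit grayscale image and a noisy reconstruction
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 5.0, size=ref.shape), 0.0, 255.0)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```

Higher PSNR indicates a reconstruction closer to the reference; the 34.11 dB reported above would correspond to a small mean squared error relative to the 8-bit dynamic range.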
Pages: 2407-2419
Page count: 13
Related papers (50 in total):
  • [21] A digital 3D atlas of the marmoset brain based on multi-modal MRI
    Liu, Cirong
    Ye, Frank Q.
    Yen, Cecil Chern-Chyi
    Newman, John D.
    Glen, Daniel
    Leopold, David A.
    Silva, Afonso C.
    NEUROIMAGE, 2018, 169 : 106 - 116
  • [22] Multi-modal Fusion Brain Tumor Detection Method Based on Deep Learning
    Yao Hong-ge
    Shen Xin-xia
    Li Yu
    Yu Jun
    Lei Song-ze
    ACTA PHOTONICA SINICA, 2019, 48 (07)
  • [23] Multi-modal deep convolutional dictionary learning for image denoising
    Sun, Zhonggui
    Zhang, Mingzhu
    Sun, Huichao
    Li, Jie
    Liu, Tingting
    Gao, Xinbo
    NEUROCOMPUTING, 2023, 562
  • [24] Segmentation of Multi-Modal MRI Brain Tumor Sub-Regions Using Deep Learning
    B. Srinivas
    Gottapu Sasibhushana Rao
    Journal of Electrical Engineering & Technology, 2020, 15 : 1899 - 1909
  • [25] Reinforcement Learning-Based Resource Allocation for Streaming in a Multi-Modal Deep Space Network
    Ha, Taeyun
    Oh, Junsuk
    Lee, Donghyun
    Lee, Jeonghwa
    Jeon, Yongin
    Cho, Sungrae
    12TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE (ICTC 2021): BEYOND THE PANDEMIC ERA WITH ICT CONVERGENCE INNOVATION, 2021, : 201 - 206
  • [26] A Robust Multi-Modal Deep Learning-Based Fault Diagnosis Method for PV Systems
    Afrasiabi, Shahabodin
    Allahmoradi, Sarah
    Afrasiabi, Mousa
    Liang, Xiaodong
    Chung, C. Y.
    Aghaei, Jamshid
    IEEE OPEN ACCESS JOURNAL OF POWER AND ENERGY, 2024, 11 : 583 - 594
  • [27] Multi-modal learning-based algae phyla identification using image and particle modalities
    Kwon, Do Hyuck
    Lee, Min Jun
    Jeong, Heewon
    Park, Sanghun
    Cho, Kyung Hwa
    WATER RESEARCH, 2025, 275
  • [29] An efficient deep learning-based video captioning framework using multi-modal features
    Varma, Soumya
    James, Dinesh Peter
    EXPERT SYSTEMS, 2021
  • [30] Memory based fusion for multi-modal deep learning
    Priyasad, Darshana
    Fernando, Tharindu
    Denman, Simon
    Sridharan, Sridha
    Fookes, Clinton
    INFORMATION FUSION, 2021, 67 : 136 - 146