Adopting Quaternion Wavelet Transform to Fuse Multi-Modal Medical Images

Cited by: 18
Authors
Geng, Peng [1 ,2 ]
Sun, Xiuming [3 ]
Liu, Jianhua [4 ]
Affiliations
[1] Shijiazhuang Tiedao Univ, Sch Informat Sci & Technol, Shijiazhuang 050043, Peoples R China
[2] Shijiazhuang Tiedao Univ, Struct Hlth Monitoring & Control Inst, Shijiazhuang 050043, Peoples R China
[3] Zhangjiakou Univ, Sci Dept, Zhangjiakou 075000, Peoples R China
[4] Shijiazhuang Tiedao Univ, Sch Elect & Elect Engn, Shijiazhuang 050043, Peoples R China
Keywords
Quaternion wavelet transform; Image fusion; Pulse-coupled neural network (PCNN); Multi-modal medical image; Coupled neural network; Fusion
DOI
10.1007/s40846-016-0200-6
CLC classification
R318 [Biomedical Engineering]
Subject classification code
0831
Abstract
Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In this paper, we propose a novel multi-modal medical image fusion method based on simplified pulse-coupled neural network and quaternion wavelet transform. The proposed fusion algorithm is capable of combining not only pairs of computed tomography (CT) and magnetic resonance (MR) images, but also pairs of CT and proton-density-weighted MR images, and multi-spectral MR images such as T1 and T2. Experiments on six pairs of multi-modal medical images are conducted to compare the proposed scheme with four existing methods. The performances of various methods are investigated using mutual information metrics and comprehensive fusion performance characterization (total fusion performance, fusion loss, and modified fusion artifacts criteria). The experimental results show that the proposed algorithm not only extracts more important visual information from source images, but also effectively avoids introducing artificial information into fused medical images. It significantly outperforms existing medical image fusion methods in terms of subjective performance and objective evaluation metrics.
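The pipeline the abstract describes — decompose each source image, fuse coefficients subband by subband, then invert the transform — can be sketched in a simplified form. This is not the paper's method: a one-level Haar wavelet below stands in for the quaternion wavelet transform, and a max-magnitude selection rule stands in for the simplified-PCNN firing rule; all function names are illustrative.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into (LL, LH, HL, HH) subbands.
    A stand-in for the quaternion wavelet transform used in the paper."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.zeros((ll.shape[0], 2 * ll.shape[1]))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.zeros((2 * a.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuse(img_a, img_b):
    """Fuse two registered single-channel images: average the low-pass
    band, and for each detail band keep the larger-magnitude coefficient
    (a simple stand-in for the paper's PCNN-based selection rule)."""
    ca, cb = haar2d(img_a), haar2d(img_b)
    ll = (ca[0] + cb[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return ihaar2d(ll, *details)
```

Because the Haar pair is perfectly invertible, fusing an image with itself reproduces it exactly; real CT/MR fusion would additionally require the redundant, near-shift-invariant quaternion subbands the paper relies on.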
Pages: 230-239 (10 pages)
Related Papers (50 records)
  • [41] Intelligent analysis for medical multi-modal data
    Multimedia Tools and Applications, 2021, 80: 17333-17333
  • [42] Multi-modal Medical Q&A System
    Zhi, Wang
    Proceedings of 2024 International Conference on Computer and Multimedia Technology, ICCMT 2024, 2024: 414-424
  • [43] Multi-modal and Multi-spectral Registration for Natural Images
    Shen, Xiaoyong
    Xu, Li
    Zhang, Qi
    Jia, Jiaya
    Computer Vision - ECCV 2014, Pt IV, 2014, 8692: 309-324
  • [44] An overview of multi-modal medical image fusion
    Du, Jiao
    Li, Weisheng
    Lu, Ke
    Xiao, Bin
    Neurocomputing, 2016, 215: 3-20
  • [45] Multi-modal Medical Image Fusion Based on GAN and the Shift-Invariant Shearlet Transform
    Wang, Lei
    Chang, Chunhong
    Hao, Benli
    Liu, Chunxiang
    2020 IEEE International Conference on Bioinformatics and Biomedicine, 2020: 2538-2543
  • [46] Multi-resolution image analysis using the quaternion wavelet transform
    Bayro-Corrochano, Eduardo
    Numerical Algorithms, 2005, 39: 35-55
  • [47] Multi-focus image fusion using quaternion wavelet transform
    Zheng, Xue-Ni
    Luo, Xiao-Qing
    Zhang, Zhan-Cheng
    Wu, Xiao-Jun
    2016 23rd International Conference on Pattern Recognition (ICPR), 2016: 883-888
  • [48] Deep adaptive registration of multi-modal prostate images
    Guo, Hengtao
    Kruger, Melanie
    Xu, Sheng
    Wood, Bradford J.
    Yan, Pingkun
    Computerized Medical Imaging and Graphics, 2020, 84
  • [49] Variational interpolation of multi-modal ocean satellite images
    Ba, Sileye O.
    Corpetti, Thomas
    Chapron, Bertrand
    Fablet, Ronan
    Traitement du Signal, 2012, 29(3-5): 433-454
  • [50] MINC 2.0: A Flexible Format for Multi-Modal Images
    Vincent, Robert D.
    Neelin, Peter
    Khalili-Mahani, Najmeh
    Janke, Andrew L.
    Fonov, Vladimir S.
    Robbins, Steven M.
    Baghdadi, Leila
    Lerch, Jason
    Sled, John G.
    Adalat, Reza
    MacDonald, David
    Zijdenbos, Alex P.
    Collins, D. Louis
    Evans, Alan C.
    Frontiers in Neuroinformatics, 2016, 10