Cross-Modal Consistency for Single-Modal MR Image Segmentation

Cited by: 1
|
Authors
Xu, Wenxuan [1 ]
Li, Cangxin [1 ]
Bian, Yun [3 ]
Meng, Qingquan [1 ]
Zhu, Weifang [1 ]
Shi, Fei [1 ]
Chen, Xinjian [1 ]
Shao, Chengwei [2 ]
Xiang, Dehui [1 ]
Affiliations
[1] Soochow Univ, Sch Elect & Informat Engn, Suzhou 215006, Peoples R China
[2] Navy Mil Med Univ, Changhai Hosp, Dept Radiol, Shanghai, Peoples R China
[3] Navy Mil Med Univ, Changhai Hosp, Dept Radiol, Shanghai 200433, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Image segmentation; Pancreas; Imaging; Computed tomography; Training; Feature extraction; Loss measurement; Consistency learning; contrast alignment; single-modal MR Image segmentation; PANCREAS SEGMENTATION; NETWORK;
DOI
10.1109/TBME.2024.3380058
CLC classification number
R318 [Biomedical Engineering];
Subject classification code
0831;
Abstract
Objective: Multi-modal magnetic resonance (MR) image segmentation is an important task in disease diagnosis and treatment, but it is usually difficult to obtain multiple modalities for a single patient in clinical applications. To address this issue, a cross-modal consistency framework is proposed for single-modal MR image segmentation. Methods: To enable single-modal MR image segmentation in the inference stage, a weighted cross-entropy loss and a pixel-level feature consistency loss are proposed to train the target network under the guidance of the teacher network and the auxiliary network. To fuse dual-modal MR images in the training stage, cross-modal consistency is measured with a Dice similarity entropy loss and a Dice similarity contrastive loss, so as to maximize the prediction similarity between the teacher network and the auxiliary network. To reduce contrast differences between MR images of the same organs, a contrast alignment network is proposed to align input images with varying contrast to reference images with good contrast. Results: Comprehensive experiments have been performed on a publicly available prostate dataset and an in-house pancreas dataset to verify the effectiveness of the proposed method. Compared to state-of-the-art methods, the proposed method achieves better segmentation. Conclusion: The proposed segmentation method fuses dual-modal MR images in the training stage and needs only single-modal MR images in the inference stage. Significance: The proposed method can be used in routine clinical settings where only a single-modal MR image with variable contrast is available for a patient.
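The abstract names a Dice-similarity-based consistency loss (to align the teacher and auxiliary network predictions) and a pixel-wise weighted cross-entropy loss, but does not give their exact definitions. Below is a minimal NumPy sketch, under common formulations, of how such losses might be computed; all function names and the specific soft-Dice and weighting forms are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def dice_similarity(p, q, eps=1e-6):
    # Soft Dice similarity between two probability maps (illustrative form).
    p, q = p.ravel(), q.ravel()
    return (2.0 * np.sum(p * q) + eps) / (np.sum(p) + np.sum(q) + eps)

def dice_consistency_loss(pred_teacher, pred_aux):
    # Cross-modal consistency: penalize disagreement between the teacher
    # and auxiliary network predictions as 1 - Dice similarity.
    return 1.0 - dice_similarity(pred_teacher, pred_aux)

def weighted_cross_entropy(pred, target, weight, eps=1e-6):
    # Pixel-wise weighted binary cross-entropy; `weight` could, for
    # example, up-weight boundary or foreground pixels (an assumption).
    pred = np.clip(pred, eps, 1.0 - eps)
    ce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return float(np.mean(weight * ce))
```

Identical teacher and auxiliary predictions give a consistency loss near zero, while disjoint predictions give a loss near one, which matches the stated goal of maximizing prediction similarity between the two networks.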
Pages: 2557-2567
Page count: 11
Related Papers
50 records in total
  • [1] Instance Segmentation with Cross-Modal Consistency
    Zhu, Alex Zihao
    Casser, Vincent
    Mahjourian, Reza
    Kretzschmar, Henrik
    Pirk, Soren
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 2009 - 2016
  • [2] Cross-modal and single-modal Probabilistic Category Learning in a Weather Prediction Task
    Sun Xunwei
    Fu Qiufang
    Fu Xiaolan
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 2016, 51 : 833 - 833
  • [3] Learning Cross-Modal Deep Representations for Multi-Modal MR Image Segmentation
    Li, Cheng
    Sun, Hui
    Liu, Zaiyi
    Wang, Meiyun
    Zheng, Hairong
    Wang, Shanshan
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II, 2019, 11765 : 57 - 65
  • [4] ADVERSARIAL CROSS-MODAL RETRIEVAL VIA LEARNING AND TRANSFERRING SINGLE-MODAL SIMILARITIES
    Wen, Xin
    Han, Zhizhong
    Yin, Xinyu
    Liu, Yu-Shen
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 478 - 483
  • [5] CROSS-MODAL 2D-3D LOCALIZATION WITH SINGLE-MODAL QUERY
    Zhao, Zhipeng
    Yu, Huai
    Lyu, Chenwei
    Ji, Pengliang
    Yang, Xiangli
    Yang, Wen
    IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 6171 - 6174
  • [6] Promoting Single-Modal Optical Flow Network for Diverse Cross-Modal Flow Estimation
    Zhou, Shili
    Tan, Weimin
    Yan, Bo
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3562 - 3570
  • [7] Unpaired Dual-Modal Image Complementation Learning for Single-Modal Medical Image Segmentation
    Xiang, Dehui
    Peng, Tao
    Bian, Yun
    Chen, Lang
    Zeng, Jianbin
    Shi, Fei
    Zhu, Weifang
    Chen, Xinjian
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2025, 72 (02) : 664 - 674
  • [8] Colour-Touch Cross-Modal Correspondence and Its Impact on Single-Modal Judgement in Multimodal Perception
    Yuan, Tianyi
    Rau, Pei-Luen Patrick
    Zhao, Jingyu
    Zheng, Jian
    MULTISENSORY RESEARCH, 2023, 36 (05) : 387 - 411
  • [9] Cross-Modal Image-Text Retrieval with Semantic Consistency
    Chen, Hui
    Ding, Guiguang
    Lin, Zijin
    Zhao, Sicheng
    Han, Jungong
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1749 - 1757
  • [10] Cross-modal transformer with language query for referring image segmentation
    Zhang, Wenjing
    Tan, Quange
    Li, Pengxin
    Zhang, Qi
    Wang, Rong
    NEUROCOMPUTING, 2023, 536 : 191 - 205