Labeled-to-unlabeled distribution alignment for partially-supervised multi-organ medical image segmentation

Cited by: 0
Authors
Jiang, Xixi [1 ]
Zhang, Dong [1 ]
Li, Xiang [2 ]
Liu, Kangyi [2 ]
Cheng, Kwang-Ting [1 ]
Yang, Xin [2 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Medical image segmentation; Multi-organ segmentation; Partially-supervised learning; Distribution alignment;
DOI
10.1016/j.media.2024.103333
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Partially-supervised multi-organ medical image segmentation aims to develop a unified semantic segmentation model by utilizing multiple partially-labeled datasets, with each dataset providing labels for a single class of organs. However, the limited availability of labeled foreground organs and the absence of supervision to distinguish unlabeled foreground organs from the background pose a significant challenge, leading to a distribution mismatch between labeled and unlabeled pixels. Although existing pseudo-labeling methods can be employed to learn from both labeled and unlabeled pixels, they are prone to performance degradation in this task because they rely on the assumption that labeled and unlabeled pixels share the same distribution. In this paper, to address the problem of distribution mismatch, we propose a labeled-to-unlabeled distribution alignment (LTUDA) framework that aligns feature distributions and enhances discriminative capability. Specifically, we introduce a cross-set data augmentation strategy, which performs region-level mixing between labeled and unlabeled organs to reduce the distribution discrepancy and enrich the training set. In addition, we propose a prototype-based distribution alignment method that implicitly reduces intra-class variation and increases the separation between the unlabeled foreground and the background. This is achieved by encouraging consistency between the outputs of two prototype classifiers and a linear classifier. Extensive experimental results on the AbdomenCT-1K dataset and a union of four benchmark datasets (LiTS, MSD-Spleen, KiTS, and NIH82) demonstrate that our method outperforms state-of-the-art partially-supervised methods by a considerable margin, and even surpasses fully-supervised methods. The source code is publicly available at LTUDA.
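The abstract outlines two mechanisms: region-level mixing between labeled and unlabeled organs, and a consistency constraint between a linear classifier and two prototype classifiers. The sketch below illustrates both ideas in a generic PyTorch setting; the function names, tensor shapes, temperature value, and the KL-based consistency term are illustrative assumptions, not the authors' released LTUDA implementation.

```python
# Minimal sketch of cross-set region mixing and prototype-consistency alignment,
# assuming a PyTorch segmentation pipeline. Names and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F


def cross_set_cutmix(labeled_img, labeled_mask, unlabeled_img, pseudo_mask, patch=64):
    """Region-level mixing: paste a random patch from an unlabeled image
    (with its pseudo-label) into a labeled image and its label map."""
    _, h, w = labeled_img.shape                            # (C, H, W)
    y = torch.randint(0, h - patch + 1, (1,)).item()
    x = torch.randint(0, w - patch + 1, (1,)).item()
    mixed_img, mixed_mask = labeled_img.clone(), labeled_mask.clone()
    mixed_img[:, y:y + patch, x:x + patch] = unlabeled_img[:, y:y + patch, x:x + patch]
    mixed_mask[y:y + patch, x:x + patch] = pseudo_mask[y:y + patch, x:x + patch]
    return mixed_img, mixed_mask


def prototype_logits(features, prototypes, tau=0.1):
    """Classify each pixel by cosine similarity to per-class prototypes."""
    f = F.normalize(features, dim=1)                       # (B, C, H, W)
    p = F.normalize(prototypes, dim=1)                     # (K, C)
    return torch.einsum('bchw,kc->bkhw', f, p) / tau       # (B, K, H, W)


def alignment_consistency(linear_logits, proto_logits_a, proto_logits_b):
    """Encourage the linear classifier and two prototype classifiers to agree,
    implicitly pulling unlabeled-foreground features toward the foreground
    prototypes and away from the background prototype."""
    p_lin = F.softmax(linear_logits, dim=1)
    loss_a = F.kl_div(F.log_softmax(proto_logits_a, dim=1), p_lin, reduction='batchmean')
    loss_b = F.kl_div(F.log_softmax(proto_logits_b, dim=1), p_lin, reduction='batchmean')
    return loss_a + loss_b
```

A typical training step under these assumptions would mix a labeled and an unlabeled slice with cross_set_cutmix, supervise the prediction on the mixed input with the mixed mask, and add alignment_consistency as an auxiliary loss term.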
Pages: 14