Enhancing Tiny Tissues Segmentation via Self-Distillation

Cited by: 3
Authors
Zhou, Chuan [1 ]
Chen, Yuchu [1 ]
Fan, Minghao [1 ]
Wen, Yang [1 ]
Chen, Hang [1 ]
Chen, Leiting [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Key Lab Digital Media Technol Sichuan Prov, Chengdu, Peoples R China
[2] Univ Elect Sci & Technol China, Inst Elect & Informat Engn Guangdong, Chengdu, Peoples R China
Keywords
tiny tissues segmentation; encoder-decoder structured network; self-distillation; NETWORK; IMAGES;
DOI
10.1109/BIBM49941.2020.9313542
Chinese Library Classification
Q5 [Biochemistry];
Discipline Codes
071010 ; 081704 ;
Abstract
Although the wide deployment of convolutional networks has greatly advanced medical image segmentation, the performance of these methods on tiny tissues, such as cells and fundus vessels, still needs improvement. Most approaches focus on modifying the network architecture to overcome the problem of missing details in segmented images. In this paper, we address this problem from a new perspective: introducing a self-distillation mechanism to fully utilize the features extracted by the network. Our method can be viewed as a combination of a novel loss function and a specific training strategy. It can be easily integrated into most existing encoder-decoder structured networks at little additional computational cost. We conduct experiments on four datasets (DRIVE, CHASEDB, GlaS and TNBC) and several commonly used models to demonstrate the effectiveness of our method. Experiments show that the performance of these models is consistently improved, indicating that our method is general and can be widely applied in the field of medical image segmentation.
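The abstract describes self-distillation as an extra loss term that lets an encoder-decoder network teach its own intermediate outputs from its final prediction. A minimal NumPy sketch of that idea is shown below; the function name `self_distillation_loss`, the weighting `alpha`, and the temperature `T` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_distillation_loss(final_logits, aux_logits_list, labels,
                           alpha=0.5, T=2.0):
    """Per-pixel cross-entropy on the final head, plus a KL term that
    pulls each auxiliary (shallower decoder) head toward the final
    head's temperature-softened prediction.

    final_logits: (H, W, C) logits from the deepest decoder output.
    aux_logits_list: list of (H, W, C) logits from intermediate heads.
    labels: (H, W) integer ground-truth class map.
    """
    # Supervised term: cross-entropy of the final head vs. ground truth.
    p_final = softmax(final_logits)
    n_classes = p_final.shape[-1]
    flat = p_final.reshape(-1, n_classes)
    idx = np.arange(labels.size)
    ce = -np.log(flat[idx, labels.ravel()] + 1e-12).mean()

    # Distillation term: the final head acts as its own teacher.
    teacher = softmax(final_logits / T)
    kl = 0.0
    for aux in aux_logits_list:
        student = softmax(aux / T)
        kl += (teacher * (np.log(teacher + 1e-12)
                          - np.log(student + 1e-12))).sum(-1).mean()
    kl /= max(len(aux_logits_list), 1)

    # T**2 rescales gradients of the softened KL, as in standard distillation.
    return ce + alpha * (T ** 2) * kl
```

In training, the auxiliary heads would be lightweight prediction layers attached to intermediate decoder stages; at inference only the final head is kept, which is why the mechanism adds little computational cost.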
Pages: 934 - 940
Page count: 7