Enhancing Tiny Tissues Segmentation via Self-Distillation

Cited by: 3
Authors
Zhou, Chuan [1 ]
Chen, Yuchu [1 ]
Fan, Minghao [1 ]
Wen, Yang [1 ]
Chen, Hang [1 ]
Chen, Leiting [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Key Lab Digital Media Technol Sichuan Prov, Chengdu, Peoples R China
[2] Univ Elect Sci & Technol China, Inst Elect & Informat Engn Guangdong, Chengdu, Peoples R China
Keywords
tiny tissues segmentation; encoder-decoder structured network; self-distillation; NETWORK; IMAGES;
DOI
10.1109/BIBM49941.2020.9313542
CLC Number
Q5 [Biochemistry]
Discipline Classification Codes
071010; 081704
Abstract
Although the wide deployment of convolutional networks has greatly promoted progress in the field of medical image segmentation, the performance of these methods on tiny tissues, such as cells and fundus vessels, still needs to be improved. Most approaches focus on modifying the network architecture to overcome the problem of missing details in segmented images. In this paper, we try to solve this problem from a new perspective, namely, introducing a self-distillation mechanism to fully utilize the features extracted by the network. Our method can be viewed as the combination of a novel loss function and a specific training strategy. It can be easily integrated into most existing encoder-decoder structured networks with little additional computational cost. We conduct experiments on four datasets (DRIVE, CHASEDB, GlaS, and TNBC) and several commonly used models to prove the effectiveness of our method. Experiments show that the performance of these models is consistently improved, demonstrating that our method is general and can be widely applied in the field of medical image segmentation.
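To make the idea concrete, below is a minimal PyTorch sketch of the kind of self-distillation setup the abstract describes: auxiliary 1x1-conv heads attached to intermediate decoder features of an encoder-decoder network, each supervised by the ground-truth mask and additionally pulled toward the detached final prediction acting as a soft teacher. All names (SelfDistillHeads, self_distillation_loss), the BCE-plus-MSE loss form, the temperature, and the choice of teacher are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfDistillHeads(nn.Module):
    """1x1-conv segmentation heads for intermediate decoder features
    (hypothetical helper; the paper's exact heads may differ)."""

    def __init__(self, feat_channels, num_classes=1):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Conv2d(c, num_classes, kernel_size=1) for c in feat_channels]
        )

    def forward(self, feats, out_size):
        # Upsample each auxiliary prediction to the label resolution so it
        # can be compared against the full-size mask and the final output.
        return [
            F.interpolate(head(f), size=out_size, mode="bilinear",
                          align_corners=False)
            for head, f in zip(self.heads, feats)
        ]


def self_distillation_loss(final_logits, aux_logits, target,
                           alpha=0.5, temperature=2.0):
    """Hard-label supervision on every output plus a soft term that pulls
    each shallow head toward the detached deepest prediction (teacher)."""
    # Binary cross-entropy against the ground-truth mask for all outputs.
    loss = F.binary_cross_entropy_with_logits(final_logits, target)
    for logits in aux_logits:
        loss = loss + F.binary_cross_entropy_with_logits(logits, target)

    # Self-distillation: the final output teaches the intermediate heads.
    teacher = torch.sigmoid(final_logits.detach() / temperature)
    for logits in aux_logits:
        student = torch.sigmoid(logits / temperature)
        loss = loss + alpha * F.mse_loss(student, teacher)
    return loss
```

In a training loop this would be used with any encoder-decoder model that also returns its intermediate decoder maps, e.g. final_logits, feats = model(x); aux = SelfDistillHeads([256, 128, 64])(feats, masks.shape[-2:]); loss = self_distillation_loss(final_logits, aux, masks). The auxiliary heads add only a few 1x1 convolutions and are needed only at training time, which is consistent with the abstract's claim of little extra computational cost.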
Pages: 934-940
Number of pages: 7