Class Similarity Weighted Knowledge Distillation for Continual Semantic Segmentation

Cited by: 26
Authors
Phan, Minh Hieu [1]
Ta, The-Anh [2]
Phung, Son Lam [1,3]
Tran-Thanh, Long [4]
Bouzerdoum, Abdesselam [1,5]
Affiliations
[1] University of Wollongong, Wollongong, NSW, Australia
[2] FPT Software, AIC, Hanoi, Vietnam
[3] VinAI Research, Hanoi, Vietnam
[4] University of Warwick, Coventry, West Midlands, England
[5] Hamad Bin Khalifa University, Ar Rayyan, Qatar
DOI
10.1109/CVPR52688.2022.01636
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep learning models are known to suffer from catastrophic forgetting when they incrementally learn new classes. Continual learning for semantic segmentation (CSS) is an emerging field in computer vision. We identify a problem in CSS: a model tends to confuse old and new classes that are visually similar, which causes it to forget the old ones. To address this problem, we propose REMINDER, a new CSS framework, together with a novel class-similarity weighted knowledge distillation (CSW-KD) method. Our CSW-KD method distills the knowledge of the previous model on old classes that are similar to the new ones. This provides two main benefits: (i) selectively revising old classes that are more likely to be forgotten, and (ii) better learning of new classes by relating them to previously seen classes. Extensive experiments on the Pascal-VOC 2012 and ADE20k datasets show that our approach outperforms state-of-the-art methods on standard CSS settings by up to 7.07% and 8.49%, respectively.
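The abstract describes the mechanism only at a high level. Purely for illustration, below is a minimal PyTorch sketch of one plausible form of such a loss, assuming class similarity is taken as the cosine similarity between classifier weight vectors and used to re-weight a standard temperature-scaled distillation term per old class. The function name `csw_kd_loss`, the tensor shapes, and the max-over-new-classes weighting are hypothetical, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def csw_kd_loss(old_logits, new_logits, w_old, w_new, T=2.0):
    """Class-similarity-weighted knowledge distillation (illustrative sketch).

    old_logits: (B, C_old, H, W) logits of the previous-step (teacher) model.
    new_logits: (B, C_old, H, W) current model's logits on the old classes.
    w_old:      (C_old, D) classifier weight vectors of the old classes.
    w_new:      (C_new, D) classifier weight vectors of the new classes.
    """
    # Assumed similarity measure: cosine similarity between the classifier
    # weight vectors of every (old class, new class) pair -> (C_old, C_new).
    sim = F.cosine_similarity(w_old.unsqueeze(1), w_new.unsqueeze(0), dim=-1)

    # One weight per old class: similarity to its closest new class, so old
    # classes that resemble a new class are distilled (revised) more strongly.
    alpha = sim.max(dim=1).values.clamp(min=0.0)          # (C_old,)

    # Standard temperature-scaled soft targets from the previous model.
    p_teacher = F.softmax(old_logits / T, dim=1)
    log_p_student = F.log_softmax(new_logits / T, dim=1)

    # Per-class KD cross-entropy, re-weighted by the class similarity.
    kd = -(p_teacher * log_p_student) * alpha.view(1, -1, 1, 1)
    return kd.sum(dim=1).mean() * (T * T)

# Toy usage: 15 old classes, 5 new classes, 256-dim classifier features.
B, C_old, C_new, D, H, W = 2, 15, 5, 256, 33, 33
loss = csw_kd_loss(torch.randn(B, C_old, H, W), torch.randn(B, C_old, H, W),
                   torch.randn(C_old, D), torch.randn(C_new, D))
```

The key design point in this sketch is the per-class weight alpha: it scales how strongly each old class's soft targets are preserved, so the old classes most confusable with the new ones receive the strongest distillation signal, matching the abstract's notion of selectively revising the classes most likely to be forgotten.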
Pages: 16845-16854 (10 pages)
Related Papers (50 in total; 10 shown)
  • [1] Song, Zichen; Zhang, Xiaoliang; Shi, Zhaofeng. Scale-Hybrid Group Distillation with Knowledge Disentangling for Continual Semantic Segmentation. Sensors, 2023, 23(18).
  • [2] Baek, Donghyeon; Oh, Youngmin; Lee, Sanghoon; Lee, Junghyup; Ham, Bumsub. Decomposed Knowledge Distillation for Class-Incremental Semantic Segmentation. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
  • [3] Sun, Wujie; Chen, Defang; Wang, Can; Ye, Deshi; Feng, Yan; Chen, Chun. Holistic Weighted Distillation for Semantic Segmentation. 2023 IEEE International Conference on Multimedia and Expo (ICME), 2023: 396-401.
  • [4] Akmel, Feidu; Meng, Fanman; Wu, Qingbo; Chen, Shuai; Zhang, Runtong; Assefa, Maregu. Class Similarity Weighted Knowledge Distillation for Few-Shot Incremental Learning. Neurocomputing, 2024, 584.
  • [5] Wang, Qilong; Wu, Yiwen; Yang, Liu; Zuo, Wangmeng; Hu, Qinghua. Layer-Specific Knowledge Distillation for Class Incremental Semantic Segmentation. IEEE Transactions on Image Processing, 2024, 33: 1977-1989.
  • [6] Feng, Yingchao; Sun, Xian; Diao, Wenhui; Li, Jihao; Gao, Xin. Double Similarity Distillation for Semantic Image Segmentation. IEEE Transactions on Image Processing, 2021, 30: 5363-5376.
  • [7] Liu, Yifan; Chen, Ke; Liu, Chris; Qin, Zengchang; Luo, Zhenbo; Wang, Jingdong. Structured Knowledge Distillation for Semantic Segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 2599-2608.
  • [8] Shang, Chao; Li, Hongliang; Meng, Fanman; Wu, Qingbo; Qiu, Heqian; Wang, Lanxiao. Incrementer: Transformer for Class-Incremental Semantic Segmentation with Knowledge Distillation Focusing on Old Class. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 7214-7224.
  • [9] Yan, Qingsen; Liu, Shengqiang; Zhang, Xing; Zhu, Yu; Sun, Jinqiu; Zhang, Yanning. An Internal-External Constrained Distillation Framework for Continual Semantic Segmentation. Pattern Recognition and Computer Vision (PRCV 2023), Part III, 2024, 14427: 325-336.
  • [10] Li, Huakun; Zhang, Yuhang; Tian, Shishun; Cheng, Pengfei; You, Rong; Zou, Wenbin. Channel Affinity Knowledge Distillation for Semantic Segmentation. 2023 IEEE 25th International Workshop on Multimedia Signal Processing (MMSP), 2023.