Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones

Cited by: 0
Authors
Zhang, Yongheng [1 ]
Affiliation
[1] Beijing Univ Posts & Telecommun, Sch Comp, 10 Xitucheng Rd, Beijing 100876, Peoples R China
Keywords
knowledge distillation; model compression; drone-view image restoration; quality assessment
DOI
10.3390/drones9030209
Chinese Library Classification
TP7 [Remote Sensing Technology]
Subject Classification Codes
081102; 0816; 081602; 083002; 1404
Abstract
Deploying high-performance image restoration models on drones is critical for applications like autonomous navigation, surveillance, and environmental monitoring. However, the computational and memory limitations of drones pose significant challenges to utilizing complex image restoration models in real-world scenarios. To address this issue, we propose the Simultaneous Learning Knowledge Distillation (SLKD) framework, specifically designed to compress image restoration models for resource-constrained drones. SLKD introduces a dual-teacher, single-student architecture that integrates two complementary learning strategies: Degradation Removal Learning (DRL) and Image Reconstruction Learning (IRL). In DRL, the student encoder learns to eliminate degradation factors by mimicking Teacher A, which processes degraded images using a BRISQUE-based extractor to capture degradation-sensitive natural scene statistics. Concurrently, in IRL, the student decoder reconstructs clean images by learning from Teacher B, which processes clean images, guided by a PIQE-based extractor that emphasizes the preservation of edge and texture features essential for high-quality reconstruction. This dual-teacher approach enables the student model to learn from both degraded and clean images simultaneously, achieving robust image restoration while significantly reducing computational complexity. Experimental evaluations across five benchmark datasets and three restoration tasks (deraining, deblurring, and dehazing) demonstrate that, compared to the teacher models, the SLKD student models achieve an average reduction of 85.4% in FLOPs and 85.8% in model parameters, with only a slight average decrease of 2.6% in PSNR and 0.9% in SSIM. These results highlight the practicality of integrating SLKD-compressed models into autonomous systems, offering efficient and real-time image restoration for aerial platforms operating in challenging environments.
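The dual-teacher training scheme described in the abstract can be illustrated with a short sketch. The following PyTorch-style example is not the paper's implementation: the module names (EncoderDecoder, slkd_step), the 1x1 projection layers, the plain L1 feature matching, and all loss weights are assumptions made for illustration, and the BRISQUE/PIQE-based extractors are simplified here to direct feature alignment against the two teachers.

# Minimal sketch of a dual-teacher, single-student distillation step (assumed PyTorch setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # 3x3 convolution followed by ReLU
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class EncoderDecoder(nn.Module):
    # Tiny encoder-decoder stand-in for the teacher/student restoration networks.
    def __init__(self, width):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, width), conv_block(width, width))
        self.decoder = nn.Sequential(conv_block(width, width), nn.Conv2d(width, 3, 3, padding=1))

    def forward(self, x):
        feats = self.encoder(x)                 # encoder features, matched in the DRL-style term
        return feats, self.decoder(feats)       # restored image, matched in the IRL-style term

def slkd_step(student, teacher_a, teacher_b, proj_a, proj_b, degraded, clean):
    # One training step combining degradation-removal and image-reconstruction learning.
    with torch.no_grad():
        feats_a, _ = teacher_a(degraded)        # Teacher A processes the degraded image
        feats_b, out_b = teacher_b(clean)       # Teacher B processes the clean image

    s_feats, s_out = student(degraded)

    # Degradation Removal Learning: align student encoder features with Teacher A's.
    loss_drl = F.l1_loss(proj_a(s_feats), feats_a)
    # Image Reconstruction Learning: align student features and output with Teacher B's.
    loss_irl = F.l1_loss(proj_b(s_feats), feats_b) + F.l1_loss(s_out, out_b)
    # Supervised restoration loss against the ground-truth clean image.
    loss_rec = F.l1_loss(s_out, clean)
    return loss_rec + 0.5 * loss_drl + 0.5 * loss_irl   # placeholder loss weights

if __name__ == "__main__":
    teacher_a, teacher_b = EncoderDecoder(64).eval(), EncoderDecoder(64).eval()
    student = EncoderDecoder(16)                                  # compressed student (fewer channels)
    proj_a, proj_b = nn.Conv2d(16, 64, 1), nn.Conv2d(16, 64, 1)   # match student/teacher channel widths
    degraded, clean = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    loss = slkd_step(student, teacher_a, teacher_b, proj_a, proj_b, degraded, clean)
    loss.backward()
    print(f"combined distillation loss: {loss.item():.4f}")

In practice the two teachers would be pretrained, high-capacity restoration networks kept frozen during distillation; the reported ~85% reductions in FLOPs and parameters come from the student's much smaller architecture, not from this illustrative loss formulation.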
Pages: 23