Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones

Times Cited: 0
Authors
Zhang, Yongheng [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Comp, 10 Xitucheng Rd, Beijing 100876, Peoples R China
Keywords
knowledge distillation; model compression; drone-view image restoration; quality assessment
DOI
10.3390/drones9030209
Chinese Library Classification (CLC)
TP7 [Remote Sensing Technology]
Subject Classification Codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
Deploying high-performance image restoration models on drones is critical for applications such as autonomous navigation, surveillance, and environmental monitoring. However, the computational and memory limitations of drones pose significant challenges to using complex image restoration models in real-world scenarios. To address this issue, we propose the Simultaneous Learning Knowledge Distillation (SLKD) framework, specifically designed to compress image restoration models for resource-constrained drones. SLKD introduces a dual-teacher, single-student architecture that integrates two complementary learning strategies: Degradation Removal Learning (DRL) and Image Reconstruction Learning (IRL). In DRL, the student encoder learns to eliminate degradation factors by mimicking Teacher A, which processes degraded images using a BRISQUE-based extractor to capture degradation-sensitive natural scene statistics. Concurrently, in IRL, the student decoder reconstructs clean images by learning from Teacher B, which processes clean images guided by a PIQE-based extractor that emphasizes the preservation of edge and texture features essential for high-quality reconstruction. This dual-teacher approach enables the student model to learn from both degraded and clean images simultaneously, achieving robust image restoration while significantly reducing computational complexity. Experimental evaluations across five benchmark datasets and three restoration tasks (deraining, deblurring, and dehazing) demonstrate that, compared to the teacher models, the SLKD student models achieve an average reduction of 85.4% in FLOPs and 85.8% in model parameters, with only a slight average decrease of 2.6% in PSNR and 0.9% in SSIM. These results highlight the practicality of integrating SLKD-compressed models into autonomous systems, offering efficient, real-time image restoration for aerial platforms operating in challenging environments.
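The abstract describes the two distillation pathways (DRL on the encoder, IRL on the decoder) but not the exact loss formulation. The PyTorch sketch below is a rough illustration only, showing one plausible wiring of a dual-teacher, single-student training step: all module names, the plain L1 losses, and the loss weights are assumptions made for this example, and the paper's BRISQUE- and PIQE-based extractors are stood in for by generic frozen teacher networks rather than re-implemented.

```python
# Hypothetical sketch of an SLKD-style dual-teacher training step (not the authors' code).
# Teacher A sees the degraded image, Teacher B sees the clean image; the student encoder
# is pulled toward Teacher A's features (Degradation Removal Learning) and the student
# decoder output toward Teacher B's reconstruction (Image Reconstruction Learning).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoder(nn.Module):
    """Stand-in for the compressed student encoder (and for the teachers' encoders)."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)


class TinyDecoder(nn.Module):
    """Stand-in for the student decoder."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, f):
        return self.net(f)


def slkd_step(student_enc, student_dec, teacher_a, teacher_b,
              degraded, clean, lam_drl=1.0, lam_irl=1.0, lam_rec=1.0):
    """One hypothetical SLKD step: DRL + IRL distillation terms plus a pixel loss."""
    with torch.no_grad():                     # both teachers are frozen
        feat_a = teacher_a(degraded)          # degradation-aware features (Teacher A)
        recon_b = teacher_b(clean)            # clean-image reconstruction (Teacher B)

    feat_s = student_enc(degraded)            # student encoder on the degraded input
    out_s = student_dec(feat_s)               # student restoration result

    loss_drl = F.l1_loss(feat_s, feat_a)      # Degradation Removal Learning
    loss_irl = F.l1_loss(out_s, recon_b)      # Image Reconstruction Learning
    loss_rec = F.l1_loss(out_s, clean)        # supervised restoration loss
    return lam_drl * loss_drl + lam_irl * loss_irl + lam_rec * loss_rec


if __name__ == "__main__":
    enc, dec = TinyEncoder(), TinyDecoder()
    teacher_a = TinyEncoder()                               # placeholder for Teacher A
    teacher_b = nn.Sequential(TinyEncoder(), TinyDecoder()) # placeholder for Teacher B
    degraded, clean = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    loss = slkd_step(enc, dec, teacher_a, teacher_b, degraded, clean)
    loss.backward()
    print(float(loss))
```

In this reading, the teachers only supply targets (hence the no-grad block), and the relative weights of the three terms would be tuned per restoration task; the actual objective used in the paper may differ.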
Pages: 23