50 records in total
- [21] Triplet Knowledge Distillation Networks for Model Compression. 2021 International Joint Conference on Neural Networks (IJCNN), 2021.
- [22] Analysis of Model Compression Using Knowledge Distillation. IEEE Access, 2022, 10: 85095-85105.
- [23] EPSD: Early Pruning with Self-Distillation for Efficient Model Compression. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 10, 2024: 11258-11266.
- [24] Iterative Transfer Knowledge Distillation and Channel Pruning for Unsupervised Cross-Domain Compression. Web Information Systems and Applications (WISA 2024), 2024, 14883: 3-15.
- [25] Structured Pruning and Quantization for Learned Image Compression. 2024 IEEE International Conference on Image Processing (ICIP), 2024: 3730-3736.
- [26] Compressed MoE ASR Model Based on Knowledge Distillation and Quantization. Interspeech 2023, 2023: 3337-3341.
- [27] Semantic Segmentation Optimization Algorithm Based on Knowledge Distillation and Model Pruning. 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD 2019), 2019: 261-265.
- [28] An Efficient Method for Model Pruning Using Knowledge Distillation with Few Samples. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 2515-2519.
- [29] Using Distillation to Improve Network Performance after Pruning and Quantization. Proceedings of the 2019 2nd International Conference on Machine Learning and Machine Intelligence (MLMI 2019), 2019: 3-6.