Knowledge Distillation: A Survey

Cited by: 2106
Authors
Gou, Jianping [1 ,2 ,3 ]
Yu, Baosheng [3 ]
Maybank, Stephen J. [4 ]
Tao, Dacheng [3 ]
Affiliations
[1] Jiangsu Univ, Sch Comp Sci & Commun Engn, Zhenjiang 212013, Jiangsu, Peoples R China
[2] Jiangsu Univ, Jiangsu Key Lab Secur Technol Ind Cyberspace, Zhenjiang 212013, Jiangsu, Peoples R China
[3] Univ Sydney, Fac Engn, Sch Comp Sci, Darlington, NSW 2008, Australia
[4] Univ London, Birkbeck Coll, Dept Comp Sci & Informat Syst, London, England
Funding
National Natural Science Foundation of China; Australian Research Council
Keywords
Deep neural networks; Model compression; Knowledge distillation; Knowledge transfer; Teacher–student architecture
DOI
10.1007/s11263-021-01453-z
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. The great success of deep learning is mainly due to its scalability: the capacity to encode large-scale data and to exploit billions of model parameters. However, it is challenging to deploy these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, not only because of the high computational complexity but also because of the large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed. As a representative type of model compression and acceleration, knowledge distillation effectively learns a small student model from a large teacher model, and it has received rapidly increasing attention from the community. This paper provides a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, teacher-student architectures, distillation algorithms, performance comparisons, and applications. Furthermore, challenges in knowledge distillation are briefly reviewed, and directions for future research are discussed.
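To make the teacher-student idea in the abstract concrete, below is a minimal Python/PyTorch sketch of the classical soft-target distillation loss (temperature-scaled KL divergence combined with hard-label cross-entropy), in the spirit of the vanilla formulation this survey covers. The function name distillation_loss and the defaults T = 4.0 and alpha = 0.9 are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Hypothetical helper: soften both output distributions with temperature T.
    # The T*T factor keeps the soft-target gradient magnitude comparable to
    # the hard-label term when T > 1.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Toy usage with random tensors standing in for model outputs
# (batch of 8 examples, 10 classes).
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)

In an actual training loop the teacher would be frozen (teacher.eval() and a torch.no_grad() context around its forward pass), so only the student's parameters receive gradient updates.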
Pages: 1789-1819 (31 pages)