Cost-effective Distillation of Large Language Models

Authors
Dasgupta, Sayantan [1 ]
Cohn, Trevor [1 ,2 ]
Baldwin, Timothy [1 ]
Affiliations
[1] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic, Australia
[2] Google DeepMind, Seattle, WA USA
Abstract
Knowledge distillation (KD) involves training a small "student" model to replicate the strong performance of a high-capacity "teacher" model, enabling efficient deployment in resource-constrained settings. Top-performing methods tend to be task- or architecture-specific and lack generalizability. Several existing approaches require pretraining of the teacher on task-specific datasets, which can be costly for large datasets and unstable for small ones. Here we propose an approach for improving KD through a novel distillation loss that is agnostic to the task and model architecture. We successfully apply our method to the distillation of BERT-base and achieve highly competitive results from the distilled student across a range of GLUE tasks, especially for tasks with smaller datasets.
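For context, the abstract describes the standard knowledge-distillation setup: a student is trained to match a teacher's output distribution alongside the usual supervised objective. The sketch below illustrates a generic soft-target distillation loss (Hinton-style KL plus cross-entropy); it is not the task- and architecture-agnostic loss proposed in the paper, and the function name, temperature, and mixing weight are illustrative assumptions.

```python
# Minimal sketch of a generic knowledge-distillation loss (soft-target KL
# plus hard-label cross-entropy). This is NOT the loss proposed in the paper,
# which the abstract does not specify; names and hyperparameters are assumed.
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            labels: torch.Tensor,
            temperature: float = 2.0,
            alpha: float = 0.5) -> torch.Tensor:
    """Blend a soft-target distillation term with the usual supervised loss."""
    # Soften both distributions with the temperature before comparing them.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence between teacher and student, scaled by T^2 so gradient
    # magnitudes stay comparable to the supervised term.
    distill = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    supervised = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * supervised

# Example: a batch of 8 examples, 3-way classification (e.g. a GLUE task).
student_logits = torch.randn(8, 3, requires_grad=True)
teacher_logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
loss = kd_loss(student_logits, teacher_logits, labels)
loss.backward()
```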
Pages: 7346-7354
Number of pages: 9