50 entries in total
- [2] Knowledge Distillation for BERT Unsupervised Domain Adaptation. Knowledge and Information Systems, 2022, 64: 3113-3128
- [3] BERT Learns to Teach: Knowledge Distillation with Meta Learning. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1 (Long Papers), 2022: 7037-7049
- [4] Harnessing Large Language Models for Data-Scarce Learning of Polymer Properties. Nature Computational Science, 2025: 245-254
- [6] Towards Catchment Classification in Data-Scarce Regions. Ecohydrology, 2016, 9(7): 1235-1247