50 records in total
- [1] Dynamic Knowledge Distillation for Pre-trained Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021, pp. 379-389.
- [2] KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation. 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2022), 2022, pp. 2116-2127.
- [3] AdaDS: Adaptive Data Selection for Accelerating Pre-trained Language Model Knowledge Distillation. AI Open, 2023, 4: 56-63.
- [4] Knowledge Base Grounded Pre-trained Language Models via Distillation. 39th Annual ACM Symposium on Applied Computing (SAC 2024), 2024, pp. 1617-1625.
- [5] Domain Knowledge Transferring for Pre-trained Language Model via Calibrated Activation Boundary Distillation. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1 (Long Papers), 2022, pp. 1658-1669.
- [6] ReAugKD: Retrieval-Augmented Knowledge Distillation for Pre-trained Language Models. 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 2, 2023, pp. 1128-1136.
- [7] Knowledge Enhanced Pre-trained Language Model for Product Summarization. Natural Language Processing and Chinese Computing (NLPCC 2022), Part II, 2022, Vol. 13552, pp. 263-273.
- [8] PANLP at MEDIQA 2019: Pre-trained Language Models, Transfer Learning and Knowledge Distillation. SIGBioMed Workshop on Biomedical Natural Language Processing (BioNLP 2019), 2019, pp. 380-388.
- [9] Syntax-guided Contrastive Learning for Pre-trained Language Model. Findings of the Association for Computational Linguistics (ACL 2022), 2022, pp. 2430-2440.
- [10] Knowledge Rumination for Pre-trained Language Models. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023, pp. 3387-3404.