共 50 条
- [41] Probing for Hyperbole in Pre-Trained Language Models PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-SRW 2023, VOL 4, 2023, : 200 - 211
- [43] ReAugKD: Retrieval-Augmented Knowledge Distillation For Pre-trained Language Models 61ST CONFERENCE OF THE THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023, : 1128 - 1136
- [44] A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 11239 - 11246
- [46] SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 4281 - 4294
- [47] Integrating Knowledge Graph Embeddings and Pre-trained Language Models in Hypercomplex Spaces SEMANTIC WEB, ISWC 2023, PART I, 2023, 14265 : 388 - 407
- [48] Assisted Process Knowledge Graph Building Using Pre-trained Language Models AIXIA 2022 - ADVANCES IN ARTIFICIAL INTELLIGENCE, 2023, 13796 : 60 - 74
- [49] Measuring the Knowledge Acquisition-Utilization Gap in Pre-trained Language Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 4305 - 4319