50 items in total
- [41] Are Pre-trained Convolutions Better than Pre-trained Transformers? Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol 1, 2021: 4349-4359
- [42] Probing for Hyperbole in Pre-Trained Language Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL-SRW 2023, Vol 4, 2023: 200-211
- [43] Weight Poisoning Attacks on Pre-trained Models. 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), 2020: 2793-2806
- [45] Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey. Machine Intelligence Research, 2023, 20: 447-482
- [46] Give Me the Facts! A Survey on Factual Knowledge Probing in Pre-trained Language Models. Findings of the Association for Computational Linguistics (EMNLP 2023), 2023: 15588-15605
- [47] Probing Pre-Trained Language Models for Disease Knowledge. Findings of the Association for Computational Linguistics, ACL-IJCNLP 2021, 2021: 3023-3033
- [48] Analyzing Individual Neurons in Pre-trained Language Models. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 4865-4880
- [49] Emotional Paraphrasing Using Pre-trained Language Models. 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2021