50 entries in total
- [1] UOR: Universal Backdoor Attacks on Pre-trained Language Models. Findings of the Association for Computational Linguistics: ACL 2024, 2024: 7865-7877.
- [3] Aliasing Backdoor Attacks on Pre-trained Models. Proceedings of the 32nd USENIX Security Symposium, 2023: 2707-2724.
- [4] Character-Level Syntax Infusion in Pre-Trained Models for Chinese Semantic Role Labeling. International Journal of Machine Learning and Cybernetics, 2021, 12: 3503-3515.
- [5] Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [7] Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 3023-3032.
- [9] Maximum Entropy Loss, the Silver Bullet Targeting Backdoor Attacks in Pre-trained Language Models. Findings of the Association for Computational Linguistics: ACL 2023, 2023: 3850-3868.
- [10] Multi-target Backdoor Attacks for Code Pre-trained Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol 1, 2023: 7236-7254.