50 items in total
- [1] UOR: Universal Backdoor Attacks on Pre-trained Language Models. Findings of the Association for Computational Linguistics: ACL 2024, 2024: 7865-7877.
- [2] Universal Adversarial Perturbations for Vision-Language Pre-trained Models. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2024, 2024: 862-871.
- [3] Detection of Speech Related Disorders by Pre-trained Embedding Models Extracted Biomarkers. Speech and Computer, SPECOM 2022, 2022, 13721: 279-289.
- [5] A Data Cartography based MixUp for Pre-trained Language Models. NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022: 4244-4250.
- [6] Pre-trained Language Models with Limited Data for Intent Classification. 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
- [7] Comparison of Pre-trained vs Custom-trained Word Embedding Models for Word Sense Disambiguation. ADCAIJ - Advances in Distributed Computing and Artificial Intelligence Journal, 2023, 12(01).
- [8] Refining Pre-Trained Motion Models. 2024 IEEE International Conference on Robotics and Automation, ICRA 2024, 2024: 4932-4938.
- [9] Efficiently Robustify Pre-Trained Models. 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023, 2023: 5482-5492.