50 records in total
- [1] A Pruning Method Combined with Resilient Training to Improve the Adversarial Robustness of Automatic Modulation Classification Models. Mobile Networks & Applications, 2024.
- [4] Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 4529-4538.
- [5] Diverse Knowledge Distillation (DKD): A Solution for Improving the Robustness of Ensemble Models Against Adversarial Attacks. Proceedings of the 2021 Twenty Second International Symposium on Quality Electronic Design (ISQED 2021), 2021: 319-324.
- [6] AdHierNet: Enhancing Adversarial Robustness and Interpretability in Text Classification. 2024 6th International Conference on Natural Language Processing (ICNLP 2024), 2024: 41-45.
- [10] Enhancing Model Robustness by Incorporating Adversarial Knowledge into Semantic Representation. 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021: 7708-7712.