50 items in total
- [22] Towards Interpreting Vulnerability of Object Detection Models via Adversarial Distillation. Applied Cryptography and Network Security Workshops, ACNS 2022, 2022, 13285: 53-65
- [24] Learning Differentially Private Diffusion Models via Stochastic Adversarial Distillation. Computer Vision - ECCV 2024, Part VII, 2025, 15065: 55-71
- [25] Diverse Knowledge Distillation (DKD): A Solution for Improving the Robustness of Ensemble Models Against Adversarial Attacks. Proceedings of the 22nd International Symposium on Quality Electronic Design (ISQED 2021), 2021: 319-324
- [27] An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 4466-4475
- [29] Boosting Adversarial Robustness using Feature Level Stochastic Smoothing. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021), 2021: 93-102
- [30] Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022