50 entries in total
- [32] Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE 2022), 2022, pp. 364-369.
- [33] Mitigating Adversarial Attacks for Deep Neural Networks by Input Deformation and Augmentation. 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC 2020), 2020, pp. 157-162.
- [37] Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks. 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) Digest of Technical Papers, 2018.
- [39] Simple Black-Box Adversarial Attacks on Deep Neural Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 1310-1318.
- [40] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN), 2020.