9 entries in total
- [1] Liu Hui; Zhao Bo; Huang Linquan; Guo Jiabao; Liu Yifan. FoolChecker: A platform to evaluate the robustness of images against adversarial attacks [J]. Neurocomputing, 2020.
- [2] Hyun Kwon; Yongchul Kim; Hyunsoo Yoon; Daeseon Choi. Selective audio adversarial example in evasion attack on speech recognition system [J]. IEEE Transactions on Information Forensics and Security, 2020.
- [3] Steve T. K. Jan; Joseph Messou; Yen-Chen Lin; Jia-Bin Huang; Gang Wang. Connecting the digital and physical world: Improving the robustness of adversarial attacks [C]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019.
- [4] Wang Zhibo; Song Mengkai; Zheng Siyan; Zhang Zhifei; Song Yang; Wang Qian. Invisible adversarial attack against deep neural networks: An adaptive penalization approach [J]. IEEE Transactions on Dependable and Secure Computing, 2019.
- [7] Nicholas Carlini; David A. Wagner. Defensive distillation is not robust to adversarial examples [J]. CoRR, 2016.
- [8] Yaroslav Ganin; Evgeniya Ustinova; Hana Ajakan; Pascal Germain; Hugo Larochelle; François Laviolette; Mario Marchand; Victor S. Lempitsky. Domain-adversarial training of neural networks [J]. Journal of Machine Learning Research, 2016.
- [9] Yann LeCun; Léon Bottou; Yoshua Bengio; Patrick Haffner. Gradient-based learning applied to document recognition [J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.