A Robust Approach for Securing Audio Classification Against Adversarial Attacks

Cited: 38
Authors
Esmaeilpour, Mohammad [1 ]
Cardinal, Patrick [1 ]
Koerich, Alessandro [1 ]
Affiliations
[1] Univ Quebec, Ecole Technol Super, Dept Software & IT Engn, Montreal, PQ H3C 1K3, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Support vector machines; Machine learning; Robustness; Perturbation methods; Predictive models; Optimization; Two-dimensional displays; Spectrograms; environmental sound classification; adversarial attack; K-Means++; support vector machines (SVM); convolutional denoising autoencoder;
DOI
10.1109/TIFS.2019.2956591
Chinese Library Classification
TP301 [Theory, Methods];
Subject Classification Code
081202 ;
Abstract
Adversarial audio attacks can be considered a small perturbation, imperceptible to human ears, that is intentionally added to an audio signal and causes a machine learning model to make mistakes. This poses a security concern about the safety of machine learning models, since such adversarial attacks can fool these models into wrong predictions. In this paper, we first review some strong adversarial attacks that may affect both audio signals and their 2D representations, and we evaluate the resiliency of deep learning models and support vector machines (SVM) trained on 2D audio representations such as the short-time Fourier transform, the discrete wavelet transform (DWT), and the cross recurrence plot against several state-of-the-art adversarial attacks. Next, we propose a novel approach based on a pre-processed DWT representation of audio signals and SVM to secure audio systems against adversarial attacks. The proposed architecture has several preprocessing modules for generating and enhancing spectrograms, including dimension reduction and smoothing. We extract features from small patches of the spectrograms using the speeded-up robust features (SURF) algorithm, and these descriptors are clustered with the K-Means++ algorithm to build a codebook. Finally, the SURF-generated vectors are encoded by this codebook, and the resulting codewords are used for training an SVM. These steps yield a novel approach to audio classification that provides a good tradeoff between accuracy and resilience. Experimental results on three environmental sound datasets show the competitive performance of the proposed approach compared to deep neural networks, both in terms of accuracy and robustness against strong adversarial attacks.
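The classification pipeline summarized in the abstract (spectrogram patches → local descriptors → K-Means++ codebook → bag-of-features encoding → SVM) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the SURF descriptor is replaced by a simple flattened-patch stand-in (SURF itself lives in OpenCV's contrib modules), the "spectrograms" are random toy data, and all sizes and parameter values are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def patch_descriptors(spectrogram, patch=8, stride=8):
    """Stand-in for SURF: flatten non-overlapping spectrogram patches."""
    h, w = spectrogram.shape
    descs = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            descs.append(spectrogram[i:i + patch, j:j + patch].ravel())
    return np.asarray(descs)

def encode(descs, kmeans):
    """Bag-of-features codeword: normalized histogram of cluster assignments."""
    labels = kmeans.predict(descs)
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

rng = np.random.default_rng(0)
# Toy "spectrograms" for two classes (random data, illustration only).
specs = [rng.normal(c, 1.0, size=(32, 32)) for c in (0.0, 3.0) for _ in range(10)]
labels = [0] * 10 + [1] * 10

# Build the codebook with K-Means++ initialization over all local descriptors.
all_descs = np.vstack([patch_descriptors(s) for s in specs])
kmeans = KMeans(n_clusters=16, init="k-means++", n_init=5, random_state=0).fit(all_descs)

# Encode each spectrogram as a codeword histogram and train the SVM.
X = np.stack([encode(patch_descriptors(s), kmeans) for s in specs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))  # training accuracy on this easily separable toy data
```

In the paper's actual setup the input would be an enhanced DWT spectrogram and the descriptors would come from SURF keypoints rather than a fixed patch grid; the codebook/SVM stages are structurally the same.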
Pages: 2147 - 2159
Page count: 13
Related Papers
50 records in total
  • [21] Fuzzy classification boundaries against adversarial network attacks
    Iglesias, Felix
    Milosevic, Jelena
    Zseby, Tanja
    FUZZY SETS AND SYSTEMS, 2019, 368 : 20 - 35
  • [22] GAN Against Adversarial Attacks in Radio Signal Classification
    Wang, Zhaowei
    Liu, Weicheng
    Wang, Hui-Ming
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (12) : 2851 - 2854
  • [23] Black-Box Adversarial Attacks against Audio Forensics Models
    Jiang, Yi
    Ye, Dengpan
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [24] Securing blockchain-based timed data release against adversarial attacks
    Wang, Jingzhe
    Palanisamy, Balaji
    JOURNAL OF COMPUTER SECURITY, 2023, 31 (06) : 649 - 677
  • [25] SENTINEL: Securing Indoor Localization Against Adversarial Attacks With Capsule Neural Networks
    Gufran, Danish
    Anandathirtha, Pooja
    Pasricha, Sudeep
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43 (11) : 4021 - 4032
  • [26] Securing Voice-driven Interfaces against Fake (Cloned) Audio Attacks
    Malik, Hafiz
    2019 2ND IEEE CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2019), 2019, : 512 - 517
  • [27] On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification
    Park, Sanglee
    So, Jungmin
    APPLIED SCIENCES-BASEL, 2020, 10 (22): : 1 - 16
  • [28] Robust Collective Classification against Structural Attacks
    Zhou, Kai
    Vorobeychik, Yevgeniy
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124 : 250 - 259
  • [29] A Framework for Robust Deep Learning Models Against Adversarial Attacks Based on a Protection Layer Approach
    Al-Andoli, Mohammed Nasser
    Tan, Shing Chiang
    Sim, Kok Swee
    Goh, Pey Yun
    Lim, Chee Peng
    IEEE ACCESS, 2024, 12 : 17522 - 17540
  • [30] Robust Heterogeneous Graph Neural Networks against Adversarial Attacks
    Zhang, Mengmei
    Wang, Xiao
    Zhu, Meiqi
    Shi, Chuan
    Zhang, Zhiqiang
    Zhou, Jun
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 4363 - 4370