ON THE GENERATION AND REMOVAL OF SPEAKER ADVERSARIAL PERTURBATION FOR VOICE-PRIVACY PROTECTION

Cited by: 0
Authors
Guo, Chenyang [1 ]
Chen, Liping [1 ]
Li, Zhuhai [1 ]
Lee, Kong Aik [2 ]
Ling, Zhen-Hua [1 ]
Guo, Wu [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Hong Kong Polytech Univ, Hong Kong, Peoples R China
Keywords
speaker recognition; voice-privacy protection; speaker adversarial perturbation; perturbation removal
DOI
10.1109/SLT61566.2024.10832243
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Neural networks are commonly known to be vulnerable to adversarial attacks mounted through subtle perturbations of the input data. Recent developments in voice-privacy protection have shown a positive use case of the same technique: concealing a speaker's voice attributes with an additive perturbation signal generated by an adversarial network. This paper examines the reversibility property, whereby an entity that generates the adversarial perturbations is authorized to remove them and restore the original speech (e.g., the speaker himself/herself). A similar technique could also be used by an investigator to de-anonymize voice-protected speech and restore criminals' identities in security and forensic analysis. In this setting, the perturbation generation module is assumed to be known during the removal process. To this end, a joint training of the perturbation generation and removal modules is proposed. Experimental results on the LibriSpeech dataset demonstrate that the subtle perturbations added to the original speech can be predicted from the anonymized speech while the goal of privacy protection is still achieved. By removing these perturbations from the anonymized sample, the original speech can be restored. Audio samples can be found at https://voiceprivacy.github.io/Perturbation-Generation-Removal/.
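To make the described workflow concrete, below is a minimal, hypothetical PyTorch sketch of a joint generation/removal training loop of the kind the abstract outlines. The module names (PerturbationGenerator, PerturbationRemover, joint_step), the small convolutional architectures, the loss terms (a cosine-similarity privacy loss plus an L1 restoration loss), the hyperparameters, and the placeholder speaker encoder are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbationGenerator(nn.Module):
    # Maps a waveform (B, 1, T) to a small additive perturbation of the same shape.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=9, padding=4), nn.Tanh(),
        )

    def forward(self, wav, eps=0.01):
        return eps * self.net(wav)  # bound the perturbation so it stays subtle


class PerturbationRemover(nn.Module):
    # Predicts the added perturbation from the anonymized waveform alone.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=9, padding=4),
        )

    def forward(self, anon_wav):
        return self.net(anon_wav)


def joint_step(gen, rem, spk_encoder, wav, optimizer, alpha=1.0, beta=1.0):
    # One joint training step over both modules:
    #   1) anonymize: anon = wav + delta, pushing the speaker embedding away
    #   2) remove:    predict delta from anon so that anon - delta_hat ~= wav
    optimizer.zero_grad()
    delta = gen(wav)                # additive adversarial perturbation
    anon = wav + delta              # anonymized (voice-protected) speech

    # Privacy loss: cosine similarity between the original and anonymized
    # speaker embeddings (spk_encoder stands in for a frozen speaker encoder).
    emb_orig = spk_encoder(wav).detach()
    emb_anon = spk_encoder(anon)
    loss_priv = F.cosine_similarity(emb_orig, emb_anon, dim=-1).mean()

    # Removal loss: the remover predicts the perturbation from the anonymized
    # speech, so that subtracting it restores the original waveform.
    delta_hat = rem(anon)
    loss_rest = F.l1_loss(anon - delta_hat, wav)

    loss = alpha * loss_priv + beta * loss_rest
    loss.backward()
    optimizer.step()
    return loss.item()


# Example wiring (shapes only); any waveform-level speaker encoder could stand in.
gen, rem = PerturbationGenerator(), PerturbationRemover()
spk_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(192))  # placeholder encoder
opt = torch.optim.Adam(list(gen.parameters()) + list(rem.parameters()), lr=1e-4)
wav = torch.randn(4, 1, 16000)  # four one-second utterances at 16 kHz
print(joint_step(gen, rem, spk_encoder, wav, opt))
```

The point the sketch illustrates is that the remover only ever sees the anonymized waveform, so an authorized party holding the trained remover can subtract the predicted perturbation to approximately recover the original speech.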
Pages: 1179-1184
Number of pages: 6
Related papers
50 records in total
  • [1] Voice Guard: Protecting Voice Privacy with Strong and Imperceptible Adversarial Perturbation in the Time Domain
    Li, Jingyang
    Ye, Dengpan
    Tang, Long
    Chen, Chuanxi
    Hu, Shengshan
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 4812 - 4820
  • [2] Adversarial Image Perturbation for Privacy Protection - A Game Theory Perspective
    Oh, Seong Joon
    Fritz, Mario
    Schiele, Bernt
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 1491 - 1500
  • [3] Adversarial Perturbation Prediction for Real-Time Protection of Speech Privacy
    Zhang, Zhaoyang
    Wang, Shen
    Zhu, Guopu
    Zhan, Dechen
    Huang, Jiwu
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 8701 - 8716
  • [4] An Image Privacy Protection Algorithm Based on Adversarial Perturbation Generative Networks
    Tong, Chao
    Zhang, Mengze
    Lang, Chao
    Zheng, Zhigao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (02)
  • [5] Investigation into the Impact of Speaker Adversarial Perturbation on Speech Recognition
    Guo, Chenyang
    Chen, Liping
    Lee, Kong Aik
    Ling, Zhen-Hua
    Guo, Wu
    MAN-MACHINE SPEECH COMMUNICATION, NCMMSC 2024, 2025, 2312 : 191 - 199
  • [6] Protecting image privacy through adversarial perturbation
    Liang, Baoyu
    Tong, Chao
    Lang, Chao
    Wang, Qinglong
    Rodrigues, Joel J. P. C.
    Kozlov, Sergei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (24) : 34759 - 34774
  • [7] Hashing Fake: Producing Adversarial Perturbation for Online Privacy Protection Against Automatic Retrieval Models
    Zhang, Xingwei
    Zheng, Xiaolong
    Mao, Wenji
    Zeng, Daniel Dajun
    Wang, Fei-Yue
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2023, 10 (06) : 3241 - 3251
  • [8] 3D-Aware Adversarial Makeup Generation for Facial Privacy Protection
    Lyu, Yueming
    Jiang, Yue
    He, Ziwen
    Peng, Bo
    Liu, Yunfan
    Dong, Jing
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (11) : 13438 - 13453
  • [9] Towards A Guided Perturbation for Privacy Protection through Detecting Adversarial Examples with Provable Accuracy and Precision
    Lin, Ying
    Qu, Yanzhen
    Zhang, Zhiyuan
    Su, Haorong
    2019 6TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE (CSCI 2019), 2019, : 107 - 112