Survey on Privacy Attacks and Defenses in Machine Learning

Cited by: 0
Authors
Liu R.-X. [1 ,2 ]
Chen H. [1 ,2 ]
Guo R.-Y. [1 ,2 ]
Zhao D. [1 ,2 ]
Liang W.-J. [1 ,2 ]
Li C.-P. [1 ,2 ]
Affiliations
[1] Key Laboratory of Data Engineering and Knowledge Engineering of the Ministry of Education, Renmin University of China, Beijing
[2] School of Information, Renmin University of China, Beijing
Source
Corresponding author: Chen, Hong (chong@ruc.edu.cn) | Chinese Academy of Sciences | Vol. 31
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China
Keywords
Data management; Machine learning; Privacy attack; Privacy preserving;
DOI
10.13328/j.cnki.jos.005904
Abstract
In the era of big data, abundant data has driven the development of machine learning technology. However, the risk that models leak their training data during the data collection and training stages poses an essential challenge to data management in the age of artificial intelligence. Traditional privacy-preserving methods for data management and analysis cannot address the complex privacy problems arising in the various stages and scenarios of machine learning. This study surveys the state of the art in privacy attacks and defenses in machine learning. On the one hand, the scenarios of privacy leakage and the adversarial models of privacy attacks are illustrated, and specific attack works are classified by adversarial strategy. On the other hand, three main technologies commonly applied to privacy preservation in machine learning are introduced, and the key problems in their application are pointed out; in addition, five defense strategies and their corresponding concrete mechanisms are elaborated. Finally, open challenges and future directions for privacy preservation in machine learning are summarized. © Copyright 2020, Institute of Software, the Chinese Academy of Sciences. All rights reserved.
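The abstract does not enumerate the three defense technologies here, but differential privacy is a standard example in this literature (the RAPPOR reference below is a well-known deployment of a related idea, randomized response). As a minimal, illustrative sketch only (not drawn from the survey itself), the Laplace mechanism releases a query answer with noise calibrated to the query's sensitivity and a privacy budget epsilon:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon means stronger privacy and noisier output.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: privately release a counting query over a dataset.
# A counting query has sensitivity 1, since adding or removing
# one individual's record changes the count by at most 1.
count = 120
noisy_count = laplace_mechanism(count, sensitivity=1.0, epsilon=0.5)
```

The key design point is that the noise scale depends only on the query's sensitivity and epsilon, never on the data itself, which is what makes the privacy guarantee hold for every possible dataset.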
Pages: 866-892
Page count: 26
References
95 items in total
  • [1] Shokri R., Stronati M., Song C., Shmatikov V., Membership inference attacks against machine learning models, Proc. of the IEEE Symp. on Security and Privacy, (2017)
  • [2] Wang Z., Song M., Zhang Z., Song Y., Wang Q., Qi H., Beyond inferring class representatives: User-level privacy leakage from federated learning, Proc. of the IEEE INFOCOM 2019-IEEE Conf. on Computer Communications, pp. 2512-2520, (2019)
  • [3] Nasr M., Shokri R., Houmansadr A., Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning, Proc. of the IEEE Symp. on Security and Privacy, (2019)
  • [4] Hitaj B., Ateniese G., Perez-Cruz F., Deep models under the GAN: Information leakage from collaborative deep learning, Proc. of the 2017 ACM SIGSAC Conf. on Computer and Communications Security, (2017)
  • [5] Erlingsson U., Pihur V., Korolova A., Rappor: Randomized aggregatable privacy-preserving ordinal response, Proc. of the 2014 ACM SIGSAC Conf. on Computer and Communications Security, (2014)
  • [6] Song C., Ristenpart T., Shmatikov V., Machine learning models that remember too much, Proc. of the 2017 ACM SIGSAC Conf. on Computer and Communications Security, pp. 587-601, (2017)
  • [7] Barreno M., Nelson B., Sears R., Joseph A.D., Tygar J.D., Can machine learning be secure?, Proc. of the 2006 ACM Symp. on Information, Computer and Communications Security, pp. 16-25, (2006)
  • [8] Hayes J., Melis L., Danezis G., De Cristofaro E., LOGAN: Evaluating privacy leakage of generative models using generative adversarial networks, (2017)
  • [9] Fredrikson M., Lantz E., Jha S., Lin S., Page D., Ristenpart T., Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing, Proc. of the USENIX Security Symp., (2014)
  • [10] Fredrikson M., Jha S., Ristenpart T., Model inversion attacks that exploit confidence information and basic countermeasures, Proc. of the 22nd ACM SIGSAC Conf. on Computer and Communications Security, pp. 1322-1333, (2015)