Survey on Privacy Attacks and Defenses in Machine Learning

Cited by: 0
Authors
Liu R.-X. [1 ,2 ]
Chen H. [1 ,2 ]
Guo R.-Y. [1 ,2 ]
Zhao D. [1 ,2 ]
Liang W.-J. [1 ,2 ]
Li C.-P. [1 ,2 ]
Affiliations
[1] Key Laboratory of Data Engineering and Knowledge Engineering of the Ministry of Education, Renmin University of China, Beijing
[2] School of Information, Renmin University of China, Beijing
Source
Chen, Hong (chong@ruc.edu.cn) | Chinese Academy of Sciences, Vol. 31
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China
Keywords
Data management; Machine learning; Privacy attack; Privacy preserving;
DOI
10.13328/j.cnki.jos.005904
Abstract
In the era of big data, abundant data sources have driven the development of machine learning technology. However, the risk that models leak their training data during the data-collection and training stages poses essential challenges to data management in the age of artificial intelligence. Traditional privacy-preserving methods for data management and analysis cannot address the complex privacy problems that arise in the various stages and scenarios of machine learning. This study surveys the state of the art in privacy attacks and defenses in machine learning. On the one hand, the scenarios of privacy leakage and the adversarial models behind privacy attacks are illustrated, and specific attacks are classified according to their adversarial strategies. On the other hand, three main technologies commonly applied to privacy protection in machine learning are introduced, and the key problems in applying them are pointed out. In addition, five defense strategies and their corresponding mechanisms are elaborated. Finally, open problems and challenges of privacy protection in machine learning are summarized. © Copyright 2020, Institute of Software, the Chinese Academy of Sciences. All rights reserved.
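One of the privacy-preserving technologies this survey covers is differential privacy. As a minimal illustrative sketch (the function name and parameter choices below are our own, not from the paper), the classic Laplace mechanism releases a statistic after adding noise calibrated to the query's sensitivity and the privacy budget epsilon:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    giving epsilon-differential privacy for a query with that sensitivity."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: privately release the mean of values known to lie in [0, 1].
values = np.array([0.2, 0.9, 0.4, 0.7])
sensitivity = 1.0 / len(values)  # changing one record shifts the mean by at most 1/n
private_mean = laplace_mechanism(values.mean(), sensitivity, epsilon=1.0)
```

Smaller epsilon means stronger privacy but noisier output; surveys such as this one discuss how this trade-off plays out when the mechanism is applied per-iteration during model training rather than to a single aggregate query.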
Pages: 866-892
Number of pages: 26