95 references in total
- [1] Shokri R., Stronati M., Song C., Shmatikov V., Membership inference attacks against machine learning models, Proc. of the IEEE Symp. on Security and Privacy, (2017)
- [2] Wang Z., Song M., Zhang Z., Song Y., Wang Q., Qi H., Beyond inferring class representatives: User-level privacy leakage from federated learning, Proc. of the IEEE INFOCOM 2019-IEEE Conf. on Computer Communications, pp. 2512-2520, (2019)
- [3] Nasr M., Shokri R., Houmansadr A., Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning, Proc. of the IEEE Symp. on Security and Privacy, (2019)
- [4] Hitaj B., Ateniese G., Perez-Cruz F., Deep models under the GAN: Information leakage from collaborative deep learning, Proc. of the 2017 ACM SIGSAC Conf. on Computer and Communications Security, (2017)
- [5] Erlingsson U., Pihur V., Korolova A., RAPPOR: Randomized aggregatable privacy-preserving ordinal response, Proc. of the 2014 ACM SIGSAC Conf. on Computer and Communications Security, (2014)
- [6] Song C., Ristenpart T., Shmatikov V., Machine learning models that remember too much, Proc. of the 2017 ACM SIGSAC Conf. on Computer and Communications Security, pp. 587-601, (2017)
- [7] Barreno M., Nelson B., Sears R., Joseph A.D., Tygar J.D., Can machine learning be secure?, Proc. of the 2006 ACM Symp. on Information, Computer and Communications Security, pp. 16-25, (2006)
- [8] Hayes J., Melis L., Danezis G., De Cristofaro E., LOGAN: Evaluating privacy leakage of generative models using generative adversarial networks, (2017)
- [9] Fredrikson M., Lantz E., Jha S., Lin S., Page D., Ristenpart T., Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing, Proc. of the USENIX Security Symp., (2014)
- [10] Fredrikson M., Jha S., Ristenpart T., Model inversion attacks that exploit confidence information and basic countermeasures, Proc. of the 22nd ACM SIGSAC Conf. on Computer and Communications Security, pp. 1322-1333, (2015)