Privacy Enhancing Machine Learning via Removal of Unwanted Dependencies

Cited: 0
Authors
Al, Mert [1 ]
Yagli, Semih [1 ]
Kung, Sun-Yuan [1 ]
Institution
[1] Princeton Univ, Dept Elect & Comp Engn, Princeton, NJ 08544 USA
Keywords
Data privacy; Data models; Privacy; Predictive models; Kernel; Correlation; Training; Adversarial learning; Dimension reduction; Kernel methods; Representation learning; Compressive privacy; Information
DOI
10.1109/TNNLS.2021.3110831
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The rapid rise of the IoT and Big Data has enabled a wealth of data-driven applications that enhance our quality of life. However, the omnipresent and all-encompassing nature of this data collection raises privacy concerns. Hence, there is a strong need for techniques that ensure the data serve only their intended purposes, giving users control over the information they share. To this end, this article studies new variants of supervised and adversarial learning methods that remove sensitive information from the data before they are sent out for a particular application. The explored methods optimize privacy-preserving feature mappings and predictive models simultaneously in an end-to-end fashion. In addition, the models are designed to place little computational burden on the user side, so that the data can be desensitized on-device at low cost. Experimental results on mobile sensing and face datasets demonstrate that our models maintain the utility performance of predictive models while causing sensitive predictions to perform poorly.
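The abstract's core idea (a cheap, user-side feature mapping that strips information about a sensitive attribute while preserving the rest of the signal) can be illustrated with a minimal linear sketch. This is a hypothetical toy example, not the paper's actual method: it removes only the single class-mean direction that a binary sensitive attribute `s` induces in the features, via an orthogonal projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 5 features; a hypothetical binary sensitive
# attribute s shifts the feature mean along one direction.
n, d = 200, 5
s = rng.integers(0, 2, size=n)                 # sensitive attribute (binary)
X = rng.normal(size=(n, d))
X += np.outer(s, np.array([2.0, 0, 0, 0, 0]))  # leak s into feature 0

def remove_sensitive_direction(X, s):
    """Project features onto the complement of the sensitive
    class-mean difference direction -- a linear, low-cost
    desensitization step of the kind the abstract alludes to."""
    mu_diff = X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0)
    v = mu_diff / np.linalg.norm(mu_diff)      # unit direction carrying s
    return X - np.outer(X @ v, v)              # orthogonal projection

Z = remove_sensitive_direction(X, s)

# After projection the sensitive class means coincide, so a linear
# adversary relying on the mean shift gains nothing.
gap = np.linalg.norm(Z[s == 1].mean(axis=0) - Z[s == 0].mean(axis=0))
```

The actual paper learns such mappings jointly with utility and adversary predictors (and with kernel methods); this sketch only shows the "remove one unwanted dependency" step in its simplest linear form.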
Pages: 3019 - 3033 (15 pages)
Related Papers (50 total)
  • [11] Data release for machine learning via correlated differential privacy
    Shen, Hua
    Li, Jiqiang
    Wu, Ge
    Zhang, Mingwu
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (03)
  • [12] Machine learning in precision medicine to preserve privacy via encryption
    Briguglio, William
    Moghaddam, Parisa
    Yousef, Waleed A.
    Traore, Issa
    Mamun, Mohammad
    PATTERN RECOGNITION LETTERS, 2021, 151 : 148 - 154
  • [13] Privacy-enhancing machine learning framework with private aggregation of teacher ensembles
    Zhao, Shengnan
    Zhao, Qi
    Zhao, Chuan
    Jiang, Han
    Xu, Qiuliang
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (11) : 9904 - 9920
  • [14] Machine Learning in the Tasks of Identifying Unwanted Content
    Gorodnichev, M. G.
    Vanushina, A. V.
    Moseva, M. S.
    Trubnikova, N. V.
    2019 WAVE ELECTRONICS AND ITS APPLICATION IN INFORMATION AND TELECOMMUNICATION SYSTEMS (WECONF), 2019,
  • [15] Robust Machine Learning via Privacy/Rate-Distortion Theory
    Wang, Ye
    Aeron, Shuchin
    Rakin, Adnan Siraj
    Koike-Akino, Toshiaki
    Moulin, Pierre
    2021 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2021, : 1320 - 1325
  • [16] Enhancing Robustness of Machine Learning Systems via Data Transformations
    Bhagoji, Arjun Nitin
    Cullina, Daniel
    Sitawarin, Chawin
    Mittal, Prateek
    2018 52ND ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2018,
  • [17] Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?
    Zheng, Huadi
    Hu, Haibo
    Han, Ziyang
    IEEE INTELLIGENT SYSTEMS, 2020, 35 (04) : 5 - 14
  • [18] Communication-efficient Federated Learning with Privacy Enhancing via Probabilistic Scheduling
    Zhou, Ziao
    Huang, Shaoming
    Wu, Youlong
    Wen, Dingzhu
    Wang, Ting
    Cai, Haibin
    Shi, Yuanming
    2024 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA, ICCC, 2024,
  • [19] Privacy-friendly machine learning - Part 2: Privacy attacks and privacy-preserving machine learning
    Stock, J.
    Petersen, T.
    Behrendt, C.-A.
    Federrath, H.
    Kreutzburg, T.
    INFORMATIK SPEKTRUM, 2022, 45 (3) : 137 - 145
  • [20] Security and Privacy in Machine Learning
    Chandran, Nishanth
    INFORMATION SYSTEMS SECURITY, ICISS 2023, 2023, 14424 : 229 - 248