Differential Privacy in Distributed Optimization With Gradient Tracking

Cited: 6
Authors
Huang, Lingying [1]
Wu, Junfeng [2]
Shi, Dawei [3]
Dey, Subhrakanti [4]
Shi, Ling [1]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[3] Beijing Inst Technol, Sch Automat, State Key Lab Intelligent Control & Decis Complex, Beijing 100081, Peoples R China
[4] Uppsala Univ, Dept Elect Engn, SE-75121 Uppsala, Sweden
Funding
National Natural Science Foundation of China;
Keywords
Differential privacy (DP); directed graph; distributed optimization; gradient tracking; ALGORITHMS;
DOI
10.1109/TAC.2024.3352328
CLC Classification
TP [Automation technology; computer technology];
Discipline Code
0812;
Abstract
Optimization with gradient tracking is particularly notable for its superior convergence results among the various distributed algorithms, especially in the context of directed graphs. However, privacy concerns arise when gradient information is transmitted directly, since doing so induces additional information leakage. Surprisingly, the literature has not adequately addressed the associated privacy issues. In response to this gap, our article proposes a privacy-preserving distributed optimization algorithm with gradient tracking that adds noise to the transmitted messages, namely, the decision variables and the estimate of the aggregated gradient. We prove two dilemmas for this kind of algorithm. First, we reveal that a distributed optimization algorithm with gradient tracking cannot achieve epsilon-differential privacy (DP) and exact convergence simultaneously. Building on this, we further show that the algorithm fails to achieve epsilon-DP when employing nonsummable stepsizes in the presence of Laplace noise. It is crucial to emphasize that these findings hold regardless of the size of the privacy metric epsilon. We then rigorously analyze the convergence performance and privacy level for summable stepsize sequences under the Laplace distribution, since only summable stepsizes are meaningful to study. We derive sufficient conditions that allow for stochastically bounded accuracy and epsilon-DP simultaneously. Recognizing that several choices can meet these conditions, we further derive an upper bound on the variance of the mean error and specify the mathematical expression of epsilon under such conditions. Numerical simulations are provided to demonstrate the effectiveness of our proposed algorithm.
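The abstract describes a gradient-tracking update in which each agent perturbs its transmitted decision variable and aggregated-gradient estimate with Laplace noise and uses a summable stepsize sequence. The following Python sketch illustrates that general idea only; it is not the authors' algorithm or analysis. All specifics are simplifying assumptions not taken from the paper: quadratic local costs, an undirected ring with a doubly stochastic weight matrix (the paper treats directed graphs), and illustratively chosen noise scales and stepsizes.

```python
# Minimal sketch: gradient tracking with Laplace noise on transmitted messages
# and a summable stepsize sequence. Assumptions (not from the paper):
# quadratic local costs f_i(x) = 0.5*(x - b_i)^2, an undirected ring with a
# doubly stochastic weight matrix, and the noise/stepsize schedules below.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                  # number of agents
b = rng.normal(size=n)                 # local minimizers; global optimum is b.mean()

def grad(i, x):
    """Gradient of the assumed local cost f_i(x) = 0.5 * (x - b_i)^2."""
    return x - b[i]

# Doubly stochastic weights for a ring graph (for illustration only).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                                     # decision variables
y = np.array([grad(i, x[i]) for i in range(n)])     # gradient-tracking variables

for k in range(200):
    alpha = 1.0 / (k + 1) ** 2          # summable stepsize sequence
    noise_scale = 0.1 * 0.9 ** k        # illustrative Laplace scale for sent messages

    # Each agent perturbs the messages it broadcasts with Laplace noise.
    x_sent = x + rng.laplace(scale=noise_scale, size=n)
    y_sent = y + rng.laplace(scale=noise_scale, size=n)

    # Gradient-tracking update using only the noisy received messages.
    x_new = W @ x_sent - alpha * y
    y = W @ y_sent + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

print("consensus estimate:", x.mean(), " true optimum:", b.mean())
```

With a summable stepsize and decaying noise, the iterates in this sketch settle near, but not exactly at, the optimum, which is consistent with the bounded-accuracy (rather than exact-convergence) behavior the abstract describes.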
Pages: 5727-5742
Page count: 16
Related papers
50 records in total
  • [31] Convergence of Distributed Gradient-Tracking-Based Optimization Algorithms with Random Graphs
    Jiexiang Wang
    Keli Fu
    Yu Gu
    Tao Li
    Journal of Systems Science and Complexity, 2021, 34 : 1438 - 1453
  • [32] An enhanced gradient-tracking bound for distributed online stochastic convex optimization
    Alghunaim, Sulaiman A.
    Yuan, Kun
    SIGNAL PROCESSING, 2024, 217
  • [33] Distributed Event-Triggered Stochastic Gradient-Tracking for Nonconvex Optimization
    Ishikawa, Daichi
    Hayashi, Naoki
    Takai, Shigemasa
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2024, E107A (05) : 762 - 769
  • [34] Distributed Data Mining with Differential Privacy
    Zhang, Ning
    Li, Ming
    Lou, Wenjing
    2011 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2011,
  • [35] Distributed Differential Privacy via Shuffling
    Cheu, Albert
    Smith, Adam
    Ullman, Jonathan
    Zeber, David
    Zhilyaev, Maxim
    ADVANCES IN CRYPTOLOGY - EUROCRYPT 2019, PT I, 2019, 11476 : 375 - 403
  • [36] Distributed Linear Bandits With Differential Privacy
    Li, Fengjiao
    Zhou, Xingyu
    Ji, Bo
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11 (03): : 3161 - 3173
  • [37] Differential privacy distributed optimization algorithm against adversarial attacks for efficiency optimization of complex industrial processes
    Yue, Changyang
    Du, Wenli
    Li, Zhongmei
    Liu, Bing
    Nie, Rong
    Qian, Feng
    ADVANCED ENGINEERING INFORMATICS, 2024, 62
  • [38] A Distributed Stochastic Gradient Tracking Method
    Pu, Shi
    Nedic, Angelia
    2018 IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2018, : 963 - 968
  • [39] The Gradient Tracking Is a Distributed Integral Action
    Notarnicola, Ivano
    Bin, Michelangelo
    Marconi, Lorenzo
    Notarstefano, Giuseppe
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (12) : 7911 - 7918
  • [40] Distributed stochastic gradient tracking methods
    Shi Pu
    Angelia Nedić
    Mathematical Programming, 2021, 187 : 409 - 457