Differential Privacy in Distributed Optimization With Gradient Tracking

Cited by: 6
Authors
Huang, Lingying [1]
Wu, Junfeng [2]
Shi, Dawei [3]
Dey, Subhrakanti [4]
Shi, Ling [1]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[3] Beijing Inst Technol, Sch Automat, State Key Lab Intelligent Control & Decis Complex, Beijing 100081, Peoples R China
[4] Uppsala Univ, Dept Elect Engn, SE-75121 Uppsala, Sweden
Funding
National Natural Science Foundation of China
Keywords
Differential privacy (DP); directed graph; distributed optimization; gradient tracking; algorithms
DOI
10.1109/TAC.2024.3352328
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Among the various distributed optimization algorithms, those based on gradient tracking are particularly notable for their superior convergence results, especially over directed graphs. However, privacy concerns arise when gradient information is transmitted directly, since doing so leaks additional information about the agents' local objectives. Surprisingly, the literature has not adequately addressed these privacy issues. To fill this gap, our article proposes a privacy-preserving distributed optimization algorithm with gradient tracking that adds noise to both transmitted messages, namely, the decision variables and the estimate of the aggregated gradient. We prove two dilemmas for this class of algorithms. First, we reveal that a distributed optimization algorithm with gradient tracking cannot achieve epsilon-differential privacy (DP) and exact convergence simultaneously. Building on this, we then show that the algorithm fails to achieve epsilon-DP when employing nonsummable stepsizes in the presence of Laplace noise. These findings hold regardless of the size of the privacy budget epsilon. We then rigorously analyze the convergence performance and privacy level under summable stepsize sequences and Laplace noise, since only summable stepsizes are meaningful to study in this setting. We derive sufficient conditions under which stochastically bounded accuracy and epsilon-DP hold simultaneously. Recognizing that several choices can meet these conditions, we further derive an upper bound on the variance of the mean error and give an explicit expression for epsilon under such conditions. Numerical simulations demonstrate the effectiveness of the proposed algorithm.
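The following Python sketch illustrates the kind of update rule the abstract describes: a standard gradient-tracking iteration in which Laplace noise is added to both transmitted messages (the decision variable and the gradient-tracking variable), with a summable stepsize schedule. It is a minimal illustration under simplifying assumptions, not the paper's algorithm: scalar decision variables, quadratic local objectives, and a doubly stochastic mixing matrix (the paper treats directed graphs); the function names, stepsize schedule (0.9^k), and noise-scale schedule (0.95^k) are all hypothetical.

```python
import numpy as np

# Minimal sketch of privacy-perturbed gradient tracking, assuming local
# quadratic objectives f_i(x) = 0.5 * (x - a_i)^2, whose global optimum
# is mean(a). All parameters below are illustrative, not the paper's.

rng = np.random.default_rng(0)

n = 5                                 # number of agents
a = rng.normal(size=n)                # private local data
W = np.full((n, n), 1.0 / n)          # doubly stochastic mixing matrix
                                      # (simplification vs. directed graphs)

def grad(i, x):
    """Gradient of the local objective f_i(x) = 0.5 * (x - a[i])**2."""
    return x - a[i]

x = np.zeros(n)                                   # decision variables
y = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers, y_0 = grad(x_0)

for k in range(200):
    gamma = 0.9 ** k                  # summable (geometric) stepsizes
    b = 0.95 ** k                     # decaying Laplace noise scale
    # Perturb BOTH transmitted messages, as the abstract describes:
    x_msg = x + rng.laplace(scale=b, size=n)
    y_msg = y + rng.laplace(scale=b, size=n)
    g_old = np.array([grad(i, x[i]) for i in range(n)])
    x = W @ x_msg - gamma * y         # consensus step along tracked gradient
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    y = W @ y_msg + g_new - g_old     # gradient-tracking update

print("agents:", np.round(x, 3), " target mean:", round(a.mean(), 3))
```

Consistent with the abstract's first dilemma, this sketch does not converge exactly: with summable stepsizes and decaying noise, the iterates settle in a bounded neighborhood of the optimum, trading exact accuracy for a finite privacy loss.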
Pages: 5727-5742
Page count: 16
Related Papers
(50 records in total)
  • [1] Triggered Gradient Tracking for asynchronous distributed optimization
    Carnevale, Guido
    Notarnicola, Ivano
    Marconi, Lorenzo
    Notarstefano, Giuseppe
    AUTOMATICA, 2023, 147
  • [2] Distributed Adaptive Gradient Algorithm With Gradient Tracking for Stochastic Nonconvex Optimization
    Han, Dongyu
    Liu, Kun
    Lin, Yeming
    Xia, Yuanqing
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69 (09) : 6333 - 6340
  • [3] A Snapshot Gradient Tracking for Distributed Optimization over Digraphs
    Che, Keqin
    Yang, Shaofu
    ARTIFICIAL INTELLIGENCE, CICAI 2022, PT III, 2022, 13606 : 348 - 360
  • [4] Compressed Gradient Tracking Algorithm for Distributed Aggregative Optimization
    Chen, Liyuan
    Wen, Guanghui
    Liu, Hongzhe
    Yu, Wenwu
    Cao, Jinde
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69 (10) : 6576 - 6591
  • [5] Differential Privacy of Online Distributed Optimization under Adversarial Nodes
    Hou, Ming
    Li, Dequan
    Wu, Xiongjun
    Shen, Xiuyu
    PROCEEDINGS OF THE 38TH CHINESE CONTROL CONFERENCE (CCC), 2019, : 2172 - 2177
  • [6] Event-Triggered Based Differential Privacy Distributed Optimization
    Wang, Pinlin
    Wang, Zhenqian
    Lu, Jinhu
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2024
  • [7] An Accelerated Gradient Tracking Algorithm with Projection Error for Distributed Optimization
    Meng, Xiwang
    Liu, Qingshan
    Xiong, Jiang
2023 15TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE, ICACI, 2023
  • [8] Distributed Gradient Tracking for Unbalanced Optimization With Different Constraint Sets
    Cheng, Songsong
    Liang, Shu
    Fan, Yuan
    Hong, Yiguang
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (06) : 3633 - 3640
  • [9] GTAdam: Gradient Tracking With Adaptive Momentum for Distributed Online Optimization
    Carnevale, Guido
    Farina, Francesco
    Notarnicola, Ivano
    Notarstefano, Giuseppe
    IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2023, 10 (03): : 1436 - 1448
  • [10] A stochastic gradient tracking algorithm with adaptive momentum for distributed optimization
    Li, Yantao
    Hu, Hanqing
    Zhang, Keke
    Lu, Qingguo
    Deng, Shaojiang
    Li, Huaqing
    NEUROCOMPUTING, 2025, 637