On Linear Convergence of ADMM for Decentralized Quantile Regression
Cited by: 3
Authors: Wang, Yue [1]; Lian, Heng [1]
Affiliations: [1] City Univ Hong Kong, Dept Math, Hong Kong, Peoples R China
Keywords: ADMM; linear convergence; proximal operator
DOI: 10.1109/TSP.2023.3325622
Chinese Library Classification: TM [electrical technology]; TN [electronic technology, communication technology]
Discipline codes: 0808; 0809
Abstract: The alternating direction method of multipliers (ADMM) is a natural method of choice for distributed parameter learning. For smooth and strongly convex consensus optimization problems, it has been shown that ADMM and some of its variants enjoy linear convergence in the distributed setting, much as in the traditional non-distributed setting. The optimization problem associated with parameter estimation in quantile regression is neither smooth nor strongly convex (although it is convex), so it would seem that at best sublinear convergence is attainable. Although this suggests slow convergence, we show that, if the local sample size is sufficiently large compared to the parameter dimension and network size, distributed estimation in quantile regression actually exhibits linear convergence up to the statistical precision, the precise meaning of which is explained in the text.
Pages: 3945-3955 (11 pages)
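
To make the abstract's ingredients concrete, the following is a minimal single-machine ADMM sketch for the quantile regression objective in Python/NumPy. It only illustrates the proximal operator of the check loss and the standard ADMM splitting X beta + r = y; it is not the paper's decentralized algorithm, and the function names and parameters (rho, n_iter, the synthetic data) are illustrative choices.

import numpy as np

def prox_check_loss(v, tau, lam):
    # Elementwise proximal operator of the quantile check loss
    # rho_tau(u) = u * (tau - 1{u < 0}), with step size lam:
    #   prox(v) = v - lam*tau        if v >  lam*tau
    #           = v + lam*(1 - tau)  if v < -lam*(1 - tau)
    #           = 0                  otherwise (a shifted soft-threshold).
    return np.maximum(v - lam * tau, 0.0) - np.maximum(-v - lam * (1.0 - tau), 0.0)

def admm_quantile(X, y, tau=0.5, rho=1.0, n_iter=500):
    # Minimize sum_i rho_tau(y_i - x_i' beta) via ADMM with the splitting
    # X beta + r = y and scaled dual variable u.
    n, p = X.shape
    beta, r, u = np.zeros(p), y.copy(), np.zeros(n)
    XtX = X.T @ X  # cached: the beta-step is the same least-squares solve each round
    for _ in range(n_iter):
        beta = np.linalg.solve(XtX, X.T @ (y - r - u))         # beta-step
        r = prox_check_loss(y - X @ beta - u, tau, 1.0 / rho)  # prox-step
        u = u + X @ beta + r - y                               # dual ascent
    return beta

# Illustrative usage: median regression (tau = 0.5) on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = X @ np.arange(1.0, 6.0) + rng.standard_normal(500)
print(np.round(admm_quantile(X, y, tau=0.5), 2))  # close to [1, 2, 3, 4, 5]

In the paper's decentralized setting, the same prox-step is applied locally on each node, while consensus on beta is enforced across the network; the linear-convergence result concerns that consensus formulation.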