Novel Power Grid Reduction Method based on L1 Regularization

Cited by: 0
Authors
Wang, Ye [1 ]
Li, Meng [1 ]
Yi, Xinyang [1 ]
Song, Zhao [2 ]
Orshansky, Michael [1 ]
Caramanis, Constantine [1 ]
Affiliations
[1] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[2] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
Funding
U.S. National Science Foundation;
DOI
10.1145/2744769.2744877
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Model order reduction that exploits the spectral properties of the admittance matrix, known as the graph Laplacian, to control approximation accuracy is a promising new class of approaches to power grid analysis. In this paper, we introduce a method that allows a dramatic increase in the resulting graph sparsity and can handle large dense input graphs. The method is based on the observation that information about the realistic ranges of port currents can be used to significantly improve the resulting graph sparsity. In practice, port currents cannot vary unboundedly, and estimates of peak currents are often available early in the design cycle. However, existing methods, including the sampling-based spectral sparsification approach [11], cannot utilize this information. We propose a novel framework of graph Sparsification by L1 regularization on Laplacians (SparseLL) to exploit the available range information to achieve a higher degree of sparsity and better approximation quality. By formulating power grid reduction as a sparsity-inducing optimization problem, we leverage recent progress in stochastic approximation and develop a stochastic gradient descent algorithm as an efficient solution. Using established benchmarks, we demonstrate that SparseLL achieves up to a 10X improvement in edge sparsity over the spectral sparsification approach assuming the full range of currents, together with up to a 10X improvement in accuracy. The running time of our algorithm also scales favorably due to its low complexity and fast convergence, which leads us to believe that it is highly suitable for large-scale dense problems.
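The general idea described in the abstract can be sketched as a small proximal stochastic gradient descent: parameterize the Laplacian by nonnegative edge weights, sample node vectors from a bounded range (standing in for realistic port-current ranges), penalize the mismatch of quadratic forms, and soft-threshold the weights so that unimportant edges drop to exactly zero. The graph, weights, and hyperparameters below are invented for illustration; this is not the paper's SparseLL implementation, its formulation, or its benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense input: a complete graph on n nodes with a strong "backbone" ring
# plus many weak couplings. We try to approximate its Laplacian sparsely.
n = 8
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
w0 = np.full(len(edges), 0.02)                 # many weak edges ...
ring = {(i, (i + 1) % n) for i in range(n)}
for k, (i, j) in enumerate(edges):
    if (i, j) in ring or (j, i) in ring:
        w0[k] = 1.0                            # ... plus a strong backbone

def quad_form(w, x):
    """x^T L(w) x, since x^T L(w) x = sum_e w_e (x_i - x_j)^2."""
    return sum(we * (x[i] - x[j]) ** 2 for we, (i, j) in zip(w, edges))

# Proximal SGD on  E_x[(x^T L(w) x - x^T L(w0) x)^2] + lam * ||w||_1,  w >= 0,
# with x drawn from a bounded range (the stand-in for port-current limits).
w = w0.copy()
lam, step = 1.5, 0.01
for _ in range(4000):
    x = rng.uniform(-1.0, 1.0, size=n)                    # bounded sample
    d = np.array([(x[i] - x[j]) ** 2 for i, j in edges])
    r = (w - w0) @ d                                      # quadratic-form mismatch
    # Gradient step on the smooth term, then the prox of lam*||w||_1 on w >= 0
    # (soft-threshold with clamping), which zeroes out weak edges.
    w = np.maximum(w - step * 2.0 * r * d - step * lam, 0.0)

print("edges kept:", int(np.count_nonzero(w)), "of", len(edges))
```

The soft-threshold in the last line is what makes the L1 penalty produce exact zeros, i.e. removed edges, rather than merely small weights; a larger `lam` trades approximation accuracy for a sparser graph.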
Pages: 6
Related Papers
50 records in total
  • [1] Orbital minimization method with l1 regularization
    Lu, Jianfeng
    Thicke, Kyle
    JOURNAL OF COMPUTATIONAL PHYSICS, 2017, 336 : 87 - 103
  • [2] A novel method for financial distress prediction based on sparse neural networks with L1/2 regularization
    Chen, Ying
    Guo, Jifeng
    Huang, Junqin
    Lin, Bin
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (07) : 2089 - 2103
  • [3] A novel variational model for pan-sharpening based on L1 regularization
    Chen, Chaoqian
    Meng, Yong
    Luo, Qixiang
    Zhou, Zeming
    REMOTE SENSING LETTERS, 2018, 9 (02) : 170 - 179
  • [4] L1/2 regularization
    ZongBen Xu
    Hai Zhang
    Yao Wang
    XiangYu Chang
    Yong Liang
    Science China Information Sciences, 2010, 53 : 1159 - 1169
  • [7] Impedance inversion based on L1 norm regularization
    Liu, Cai
    Song, Chao
    Lu, Qi
    Liu, Yang
    Feng, Xuan
    Gao, Yue
    JOURNAL OF APPLIED GEOPHYSICS, 2015, 120 : 7 - 13
  • [8] AN l1 - lp DC REGULARIZATION METHOD FOR COMPRESSED SENSING
    Cao, Wenhe
    Ku, Hong-Kun
    JOURNAL OF NONLINEAR AND CONVEX ANALYSIS, 2020, 21 (09) : 1889 - 1901
  • [9] Sorted L1 regularization method for damage detection based on electrical impedance tomography
    Fan, Wenru
    Cheng, Yu
    REVIEW OF SCIENTIFIC INSTRUMENTS, 2021, 92 (12):
  • [10] Low oversampling Staggered SAR imaging method based on L1 & TV regularization
    Liu M.
    Xu Z.
    Chen T.
    Zhang B.
    Wu Y.
    Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2023, 45 (09): : 2718 - 2726