Topological vulnerability of power grids to disasters: Bounds, adversarial attacks and reinforcement

Cited by: 1
Authors
Deka, Deepjyoti [1 ]
Vishwanath, Sriram [2 ]
Baldick, Ross [2 ]
Affiliations
[1] Los Alamos Natl Lab, Ctr Nonlinear Studies, Los Alamos, NM 87545 USA
[2] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
Source
PLOS ONE | 2018, Vol. 13, Issue 10
Keywords
FAILURES; ROBUSTNESS; MITIGATION; STRATEGIES; NETWORKS; CASCADE; MODEL;
DOI
10.1371/journal.pone.0204815
Chinese Library Classification codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline classification codes
07; 0710; 09;
Abstract
Natural disasters like hurricanes, floods, or earthquakes can damage power grid devices, creating cascading blackouts and islands. The nature of failure propagation and the extent of damage depend, among other factors, on the structural features of the grid, which are distinct from those of random networks. This paper analyzes the structural vulnerability of real power grids to impending disasters and presents intuitive graphical metrics to quantify the extent of topological damage. We develop two improved graph-eigenvalue-based bounds on grid vulnerability. Further, we study adversarial attacks aimed at weakening the grid's structural robustness and present three combinatorial algorithms to determine the optimal topological attack. Simulations on power grid networks and comparisons with existing work show the improvements of the proposed measures and attack schemes.
Pages: 18
Related Papers
50 records
  • [31] A survey on the vulnerability of deep neural networks against adversarial attacks
    Andy Michel
    Sumit Kumar Jha
    Rickard Ewetz
    Progress in Artificial Intelligence, 2022, 11 : 131 - 141
  • [32] On the Vulnerability of Hyperdimensional Computing-Based Classifiers to Adversarial Attacks
    Yang, Fangfang
    Ren, Shaolei
    NETWORK AND SYSTEM SECURITY, NSS 2020, 2020, 12570 : 371 - 387
  • [34] On the vulnerability of deep learning to adversarial attacks for camera model identification
    Marra, F.
    Gragnaniello, D.
    Verdoliva, L.
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2018, 65 : 240 - 247
  • [35] Adversarial Attacks in a Deep Reinforcement Learning based Cluster Scheduler
    Zhang, Shaojun
    Wang, Chen
    Zomaya, Albert Y.
    2020 IEEE 28TH INTERNATIONAL SYMPOSIUM ON MODELING, ANALYSIS, AND SIMULATION OF COMPUTER AND TELECOMMUNICATION SYSTEMS (MASCOTS 2020), 2020, : 1 - 8
  • [36] Line failure probability bounds for power grids
    Nesti, Tommaso
    Zocca, Alessandro
    Zwart, Bert
    2017 IEEE POWER & ENERGY SOCIETY GENERAL MEETING, 2017,
  • [37] Critical State Detection for Adversarial Attacks in Deep Reinforcement Learning
    Kumar, Praveen R.
    Kumar, Niranjan I.
    Sivasankaran, Sujith
    Vamsi, Mohan A.
    Vijayaraghavan, Vineeth
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 1761 - 1766
  • [38] XSS adversarial example attacks based on deep reinforcement learning
    Chen, Li
    Tang, Cong
    He, Junjiang
    Zhao, Hui
    Lan, Xiaolong
    Li, Tao
    COMPUTERS & SECURITY, 2022, 120
  • [39] Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks
    Wu, Junlin
    Sibai, Hussein
    Vorobeychik, Yevgeniy
    PROCEEDINGS 45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS, SPW 2024, 2024, : 57 - 67
  • [40] Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning
    Sun, Jianwen
    Zhang, Tianwei
    Xie, Xiaofei
    Ma, Lei
    Zheng, Yan
    Chen, Kangjie
    Liu, Yang
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 5883 - 5891