Adversarial Training for a Continuous Robustness Control Problem in Power Systems

Cited by: 5
Authors
Omnes, Loic [1]
Marot, Antoine [1]
Donnot, Benjamin [1]
Affiliations
[1] RTE, AI Lab, Paris, France
Source
2021 IEEE MADRID POWERTECH, 2021
Keywords
adversarial; robustness; control; power system;
DOI
10.1109/PowerTech46648.2021.9494982
CLC Classification Number
X [Environmental Science; Safety Science];
Subject Classification Codes
08; 0830;
Abstract
We propose a new adversarial training approach for injecting robustness when designing controllers for upcoming cyber-physical power systems. Previous approaches, which rely heavily on simulations, cannot cope with the rising complexity and are too costly in computation budget when used online. In comparison, our method proves computationally efficient online while displaying useful robustness properties. To this end, we model an adversarial framework, propose the implementation of a fixed opponent policy, and test it on an L2RPN (Learning to Run a Power Network) environment. This environment is a synthetic but realistic model of a cyber-physical system representing one third of the IEEE 118 grid. Using adversarial testing, we analyze the results of trained agents submitted to the robustness track of the L2RPN competition. We then further assess the performance of these agents with respect to the continuous N-1 problem through tailored evaluation metrics. We find that some agents trained in an adversarial way demonstrate interesting preventive behaviors in that regard, which we discuss.
Pages: 6
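The abstract describes an adversarial setting in which a fixed (non-learning) opponent policy attacks transmission lines while a controller tries to keep the grid operating. The sketch below only illustrates the shape of such an interaction loop; it is not the authors' implementation. The environment stand-in (ToyGridEnv), the opponent parameters (lines_attacked, attack_cooldown, init_budget), and the naive controller are all assumptions for illustration, whereas the actual paper uses the Grid2Op-based L2RPN robustness-track environment.

```python
"""Illustrative sketch (assumed names, not the paper's code): a fixed opponent
that periodically disconnects one of a predefined set of lines, rolled out
against a toy gym-style grid environment."""
import random


class FixedLineOpponent:
    """Fixed adversary: every `attack_cooldown` steps, if budget remains,
    it disconnects one line drawn from `lines_attacked`."""

    def __init__(self, lines_attacked, attack_cooldown=12, init_budget=5):
        self.lines_attacked = list(lines_attacked)
        self.attack_cooldown = attack_cooldown
        self.budget = init_budget
        self._steps_since_attack = attack_cooldown  # allow an early first attack

    def attack(self, observation):
        """Return the id of the line to disconnect, or None for no attack."""
        self._steps_since_attack += 1
        if self.budget <= 0 or self._steps_since_attack < self.attack_cooldown:
            return None
        self.budget -= 1
        self._steps_since_attack = 0
        return random.choice(self.lines_attacked)


class ToyGridEnv:
    """Minimal stand-in (assumption) for an L2RPN-style environment: the state
    is just the set of connected lines; the episode ends after `horizon` steps
    or once too many lines are disconnected."""

    def __init__(self, n_lines=10, horizon=100):
        self.n_lines, self.horizon = n_lines, horizon

    def reset(self):
        self.connected = [True] * self.n_lines
        self.t = 0
        return list(self.connected)

    def step(self, agent_action, attacked_line=None):
        # agent_action: id of a line to reconnect, or None to do nothing.
        if agent_action is not None:
            self.connected[agent_action] = True
        if attacked_line is not None:
            self.connected[attacked_line] = False
        self.t += 1
        reward = sum(self.connected) / self.n_lines
        done = self.t >= self.horizon or sum(self.connected) < self.n_lines - 3
        return list(self.connected), reward, done


def run_episode(env, opponent, agent_policy):
    """Adversarial roll-out: controller and fixed opponent act each time step."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        attack = opponent.attack(obs)
        obs, reward, done = env.step(agent_policy(obs), attacked_line=attack)
        total += reward
    return total


if __name__ == "__main__":
    env = ToyGridEnv()
    opponent = FixedLineOpponent(lines_attacked=[0, 3, 7])
    # Naive controller: reconnect the first disconnected line it observes.
    naive = lambda obs: next((i for i, c in enumerate(obs) if not c), None)
    print("episode return:", run_episode(env, opponent, naive))
```

The same loop structure carries over to training: an agent optimized against such roll-outs is exposed to line-loss contingencies during learning, which is the robustness-injection idea the abstract refers to.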