On-Line Building Energy Optimization Using Deep Reinforcement Learning

Cited by: 392
Authors
Mocanu, Elena [1 ,2 ]
Mocanu, Decebal Constantin [3 ]
Nguyen, Phuong H. [1 ]
Liotta, Antonio [4 ]
Webber, Michael E. [5 ]
Gibescu, Madeleine [1 ]
Slootweg, J. G. [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Elect Engn, NL-5600 MB Eindhoven, Netherlands
[2] Eindhoven Univ Technol, Dept Mech Engn, NL-5600 MB Eindhoven, Netherlands
[3] Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5600 MB Eindhoven, Netherlands
[4] Univ Derby, Data Sci Ctr, Derby DE1 3HD, England
[5] Univ Texas Austin, Dept Mech Engn, Austin, TX 78712 USA
Funding
EU Horizon 2020;
Keywords
Deep reinforcement learning; demand response; deep neural networks; smart grid; strategic optimization; PREDICTION;
DOI
10.1109/TSG.2018.2834219
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification
0808; 0809;
Abstract
Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of future power systems and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using deep reinforcement learning, a hybrid class of methods that combines reinforcement learning with deep learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, deep Q-learning and deep policy gradient, both of which were extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.
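The abstract mentions extending Q-learning to perform multiple actions simultaneously, i.e., switching several devices at once in each time slot. Purely as an illustration of that multi-action idea, and not the authors' implementation, the sketch below runs tabular Q-learning (standing in for the deep network) over the joint on/off action space of two hypothetical devices; the price states, device names, and reward shape are all invented for the example.

```python
import itertools
import random

# Hypothetical setup (invented for illustration): two controllable
# devices, each either off (0) or on (1) in a given time slot.
DEVICES = ["ev_charger", "heat_pump"]

# Joint action space: all on/off combinations -- the "multiple actions
# simultaneously" extension mentioned in the abstract.
ACTIONS = list(itertools.product([0, 1], repeat=len(DEVICES)))

PRICES = [0.1, 0.3]      # invented price states: cheap, expensive
GAMMA, EPSILON = 0.9, 0.2

# A tabular Q-function stands in for the paper's deep network.
Q = {(s, a): 0.0 for s in range(len(PRICES)) for a in ACTIONS}
visits = dict.fromkeys(Q, 0)

def reward(state, action):
    """Invented reward: a comfort bonus per running device minus energy cost."""
    load = sum(action)
    return 0.2 * load - PRICES[state] * load

random.seed(0)
state = 0
for _ in range(5000):
    # epsilon-greedy selection over the *joint* action space
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.randrange(len(PRICES))  # prices arrive exogenously
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    visits[(state, action)] += 1
    alpha = 1.0 / visits[(state, action)]       # decaying step size
    Q[(state, action)] += alpha * (r + GAMMA * best_next - Q[(state, action)])
    state = next_state

# Learned greedy policy: run everything when cheap, nothing when expensive.
greedy_cheap = max(ACTIONS, key=lambda a: Q[(0, a)])
greedy_expensive = max(ACTIONS, key=lambda a: Q[(1, a)])
print(greedy_cheap, greedy_expensive)
```

With these invented numbers, the comfort bonus exceeds the cheap price and falls short of the expensive one, so the greedy policy learns to schedule both devices in the cheap-price state and neither in the expensive one. The paper replaces the table with a deep network precisely because the real state space (weather, appliance histories, PV output) is far too large to enumerate.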
Pages: 3698-3708
Page count: 11
Related Papers
50 records
  • [31] Deep reinforcement learning optimization scheduling algorithm for continuous production line
    Zhu, Guang-He
    Zhu, Zhi-Qiang
    Yuan, Yi-Ping
    Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2024, 54 (07): 2086-2092
  • [32] Study on deep reinforcement learning techniques for building energy consumption forecasting
    Liu, Tao
    Tan, Zehan
    Xu, Chengliang
    Chen, Huanxin
    Li, Zhengfei
    ENERGY AND BUILDINGS, 2020, 208
  • [33] Energy Management System by Deep Reinforcement Learning Approach in a Building Microgrid
    Dini, Mohsen
    Ossart, Florence
    ELECTRIMACS 2022, VOL 2, 2024, 1164: 257-269
  • [34] Building interfaces for on-line collaborative learning
    Kalas, Ivan
    Winczer, Michal
    EDUCATION AND INFORMATION TECHNOLOGIES, 2006, 11 (3-4): 371-384
  • [36] A model-based reinforcement learning approach using on-line clustering
    Tziortziotis, Nikolaos
    Blekas, Konstantinos
    2012 IEEE 24TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2012), VOL 1, 2012: 712-718
  • [37] Energy consumption prediction method of energy saving building based on deep reinforcement learning
    He, Chuan
    Xiong, Ying
    Lin, Yeda
    Yu, Lie
    Xiong, Hui-Hua
    INTERNATIONAL JOURNAL OF GLOBAL ENERGY ISSUES, 2022, 44 (5-6): 524-536
  • [38] Adapting Sampling Interval of Sensor Networks Using On-Line Reinforcement Learning
    Martins Dias, Gabriel
    Nurchis, Maddalena
    Bellalta, Boris
    2016 IEEE 3RD WORLD FORUM ON INTERNET OF THINGS (WF-IOT), 2016: 460-465
  • [39] Building Safe and Stable DNN Controllers using Deep Reinforcement Learning and Deep Imitation Learning
    He, Xudong
    2022 IEEE 22ND INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY, QRS, 2022: 775-784
  • [40] Tree-Based On-Line Reinforcement Learning
    Salles Barreto, Andre da Motta
    PROCEEDINGS OF THE TWENTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2014: 2417-2423