Deep reinforcement learning control for co-optimizing energy consumption, thermal comfort, and indoor air quality in an office building

Cited by: 1
Authors
Guo, Fangzhou [1 ]
Ham, Sang woo [1 ]
Kim, Donghun [1 ]
Moon, Hyeun Jun [2 ]
Affiliations
[1] Lawrence Berkeley Natl Lab, Berkeley, CA 94720 USA
[2] Dankook Univ, Dept Architectural Engn, Yongin, South Korea
Keywords
Deep reinforcement learning; Deep deterministic policy gradient; Smart building; Air conditioning; Energy efficiency; Thermal comfort; Indoor air quality; MODELS;
DOI
10.1016/j.apenergy.2024.124467
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Subject classification codes
0807; 0820
Abstract
With the growing demand for decarbonization and energy efficiency, advanced HVAC control using Deep Reinforcement Learning (DRL) has become a promising solution. Owing to its flexible structure, DRL has successfully reduced energy use in many HVAC systems. However, only a few studies have applied DRL agents to manage an entire central HVAC system and control multiple components in both the water loop and the air loop, because of the system's complex structure. Moreover, those studies have not extended their scope to indoor air quality, in particular both CO2 and PM2.5 concentrations, on top of energy saving and thermal comfort, since pursuing these objectives simultaneously can create multiple control conflicts. Furthermore, DRL agents are usually trained in a simulation environment before deployment, so another challenge is to develop an accurate yet relatively simple simulator. We therefore propose a DRL algorithm for a central HVAC system that co-optimizes energy consumption, thermal comfort, indoor CO2 level, and indoor PM2.5 level in an office building. To train the controller, we also developed a hybrid simulator that decouples the complex system into multiple simulation models, each calibrated separately with laboratory test data. The hybrid simulator combines the dynamics of the HVAC system and the building envelope as well as moisture, CO2, and particulate matter transfer. Three control algorithms (rule-based, MPC, and DRL) were developed, and their performance was evaluated in the hybrid simulator environment under a realistic scenario (i.e., with stochastic noise). The test results showed that the DRL controller saved 21.4% of energy compared with a rule-based controller while also improving thermal comfort and reducing indoor CO2 concentration. The MPC controller showed an 18.6% energy saving compared with the DRL controller, with the additional savings coming mainly from comfort and indoor air quality boundary violations caused by unmeasured disturbances, and it also highlighted the computational challenges of real-time control due to non-linear optimization. Finally, we provide practical considerations for designing and implementing the DRL and MPC controllers based on their respective pros and cons.
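To make the co-optimization objective concrete, the sketch below shows one way a DDPG-style agent could score a control step with a weighted reward that trades off energy against thermal comfort, CO2, and PM2.5 penalties. This is a minimal, hypothetical illustration only: the function name, comfort band, air-quality limits, and weights are assumptions and do not reproduce the paper's actual reward formulation.

```python
# Illustrative multi-objective reward for an HVAC DRL agent (assumed form,
# not the paper's actual reward). The agent maximizes the reward by cutting
# energy while keeping temperature, CO2, and PM2.5 within assumed bounds.

def hvac_reward(energy_kwh: float,
                zone_temp_c: float,
                co2_ppm: float,
                pm25_ugm3: float,
                w_energy: float = 1.0,
                w_comfort: float = 10.0,
                w_co2: float = 5.0,
                w_pm25: float = 5.0) -> float:
    """Return a scalar reward penalizing energy use and comfort/IAQ violations."""
    # Comfort band and indoor air quality limits (illustrative values).
    temp_lo, temp_hi = 22.0, 26.0   # zone temperature band, deg C
    co2_limit = 1000.0              # CO2 limit, ppm
    pm25_limit = 15.0               # PM2.5 limit, ug/m3

    # Penalize only the amount by which each bound is exceeded.
    comfort_violation = max(0.0, temp_lo - zone_temp_c) + max(0.0, zone_temp_c - temp_hi)
    co2_violation = max(0.0, co2_ppm - co2_limit)
    pm25_violation = max(0.0, pm25_ugm3 - pm25_limit)

    return -(w_energy * energy_kwh
             + w_comfort * comfort_violation
             + w_co2 * co2_violation
             + w_pm25 * pm25_violation)
```

In such a formulation, the relative weights determine how aggressively the agent trades energy savings against comfort and air-quality boundary violations, which is exactly the kind of control conflict the abstract describes.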
Pages: 23