Data Driven Q-Learning for Commercial HVAC Control

Cited by: 6
Authors:
Faddel, Samy [1]
Tian, Guanyu [1]
Zhou, Qun [1]
Aburub, Haneen [2]
Affiliations:
[1] Univ Cent Florida, Dept Elect & Comp Engn, Orlando, FL 32816 USA
[2] Florida Int Univ, Dept Elect & Comp Engn, Miami, FL 33199 USA
Keywords:
HVAC; Demand Response; Comfort Level; Data; Reinforcement Learning
DOI:
10.1109/southeastcon44009.2020.9249737
Chinese Library Classification:
TP301 [Theory and Methods]
Subject classification code:
081202
Abstract:
Commercial HVAC systems account for a large share of the energy consumption of commercial buildings. There is therefore a need for a safe and cost-effective HVAC control algorithm, one that can learn from previous experience and reduce the associated energy cost. In this paper, a data-driven reinforcement learning approach for the optimal control of the HVAC system of a commercial building is proposed. The random forests technique is used to build a data-driven model of the HVAC system. Q-learning, a type of reinforcement learning, is then applied to minimize the building's energy consumption cost while maintaining the comfort level. The results show that the proposed algorithm maintains the required building temperature at a lower energy cost than the base-case schedule.
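The combination the abstract describes, a learned building model queried by a tabular Q-learning controller, can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: the temperature bins, setpoint actions, reward weights, and the random-drift stand-in for the data-driven (e.g. random-forest) building model are all hypothetical choices for the sketch.

```python
import random

# Illustrative tabular Q-learning loop for a single-zone thermostat.
# States: discretized indoor temperatures; actions: setpoint offsets.
# The reward trades off energy use against deviation from a comfort band.
TEMP_BINS = list(range(18, 28))          # indoor temperature states (deg C)
ACTIONS = [-1, 0, 1]                     # lower / hold / raise setpoint
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration
COMFORT_LOW, COMFORT_HIGH = 21, 24       # comfort band (deg C)

Q = {(s, a): 0.0 for s in TEMP_BINS for a in ACTIONS}

def reward(temp, action):
    """Penalize energy use (moving the setpoint) and comfort violations."""
    energy_cost = abs(action)
    discomfort = max(0, COMFORT_LOW - temp) + max(0, temp - COMFORT_HIGH)
    return -(energy_cost + 10 * discomfort)

def step(temp, action):
    """Stand-in for the learned building model (a random forest in the paper)."""
    drift = random.choice([-1, 0, 1])    # unmodeled thermal disturbance
    return min(max(temp + action + drift, TEMP_BINS[0]), TEMP_BINS[-1])

def choose(temp):
    """Epsilon-greedy action selection over the current Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(temp, a)])

random.seed(0)
temp = 26
for _ in range(5000):
    a = choose(temp)
    nxt = step(temp, a)
    best_next = max(Q[(nxt, b)] for b in ACTIONS)
    Q[(temp, a)] += ALPHA * (reward(nxt, a) + GAMMA * best_next - Q[(temp, a)])
    temp = nxt
```

In the paper's setting the `step` function would instead query the random-forest model trained on historical building data, which is what makes the scheme safe to train offline before deployment.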
Pages: 6