Multimodal Data Fusion and Deep Learning for Occupant-Centric Indoor Environmental Quality Classification

Times cited: 0
Authors
Lee, Min Jae [1 ]
Zhang, Ruichuan [1 ]
Affiliations
[1] Virginia Polytech Inst & State Univ, Myers Lawson Sch Construction, Blacksburg, VA 24061 USA
Keywords
Indoor environmental quality (IEQ); Multimodal data fusion; Deep learning; Occupant comfort and health; Transformer; IMPROVED EFFICIENCY; FEEDBACK; HEALTH; IMPACT; IEQ;
DOI
10.1061/JCCEE5.CPENG-6249
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline classification code
081203; 0835
Abstract
Amid growing recognition of the impact of indoor environmental conditions on buildings and on occupant comfort, health, and well-being, there has been an increasing focus on the assessment and modeling of indoor environmental quality (IEQ). Despite considerable advances, existing IEQ modeling methodologies often prioritize, and are limited to, single comfort metrics, potentially neglecting the broader range of factors associated with occupant comfort and health. More inclusive, occupant-centric IEQ assessment models are needed that cover a wider spectrum of environmental parameters and occupant needs. Such models require integrating diverse environmental and occupant data, which poses challenges in leveraging data across modalities and time scales and in understanding temporal patterns, relationships, and trends. To address these challenges, this paper proposes a novel framework for classifying IEQ conditions based on occupants' self-reported comfort and health levels. The framework leverages a multimodal data-fusion approach with Transformer-based models to predict indoor comfort and health levels by integrating diverse data sources, including multidimensional IEQ data and multimodal occupant feedback. Evaluated on classifying the IEQ conditions of selected public indoor spaces, the framework achieved 97% and 96% accuracy in comfort-based and health-based classification, respectively, outperforming several baselines.
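The abstract does not specify the fusion architecture in detail; the following is a minimal, illustrative sketch (in PyTorch) of one way a Transformer-based multimodal fusion classifier could combine an IEQ sensor time series with an occupant-feedback feature vector to predict a comfort (or health) class. All module names, input shapes, and layer sizes are assumptions made for illustration, not values reported in the paper.

# Minimal sketch, not the authors' implementation: a Transformer encoder over
# an IEQ sensor time series, fused with an occupant-feedback vector, then a
# linear classifier over comfort (or health) classes. Shapes and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class IEQFusionClassifier(nn.Module):
    def __init__(self, n_ieq_features=8, n_feedback_features=16,
                 d_model=64, n_heads=4, n_layers=2, n_classes=3):
        super().__init__()
        # Project each IEQ time step (e.g., temperature, humidity, CO2, ...)
        # into the Transformer embedding space.
        self.ieq_proj = nn.Linear(n_ieq_features, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Encode occupant feedback (e.g., survey responses) separately.
        self.feedback_proj = nn.Sequential(
            nn.Linear(n_feedback_features, d_model), nn.ReLU())
        # Late fusion: concatenate pooled IEQ encoding with feedback encoding.
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, ieq_seq, feedback):
        # ieq_seq: (batch, time, n_ieq_features); feedback: (batch, n_feedback_features)
        x = self.encoder(self.ieq_proj(ieq_seq))            # (batch, time, d_model)
        x = x.mean(dim=1)                                   # temporal mean pooling
        f = self.feedback_proj(feedback)                    # (batch, d_model)
        return self.classifier(torch.cat([x, f], dim=-1))   # class logits

# Example forward pass on random data with the assumed shapes.
model = IEQFusionClassifier()
logits = model(torch.randn(4, 96, 8), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 3])

A simple late-fusion design is shown here; intermediate or cross-attention fusion would be equally plausible readings of the paper's description.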
Pages: 11
Related articles
50 records in total
  • [41] Multiview Multimodal Feature Fusion for Breast Cancer Classification Using Deep Learning
    Hussain, Sadam
    Teevno, Mansoor Ali
    Naseem, Usman
    Avalos, Daly Betzabeth Avendano
    Cardona-Huerta, Servando
    Tamez-Pena, Jose Gerardo
    IEEE ACCESS, 2025, 13 : 9265 - 9275
  • [42] An enhanced multimodal fusion deep learning neural network for lung cancer classification
    Sangeetha, S. K. B.
    Mathivanan, Sandeep Kumar
    Karthikeyan, P.
    Rajadurai, Hariharan
    Shivahare, Basu Dev
    Mallik, Saurav
    Qin, Hong
    SYSTEMS AND SOFT COMPUTING, 2024, 6
  • [43] Emotion Recognition and Classification of Film Reviews Based on Deep Learning and Multimodal Fusion
    Na, Risu
    Sun, Ning
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [44] A comprehensive investigation of multimodal deep learning fusion strategies for breast cancer classification
    Nakach, Fatima-Zahrae
    Idri, Ali
    Goceri, Evgin
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (12)
  • [45] Perceived Mental Workload Classification Using Intermediate Fusion Multimodal Deep Learning
    Dolmans, Tenzing C.
    Poel, Mannes
    van't Klooster, Jan-Willem J. R.
    Veldkamp, Bernard P.
    FRONTIERS IN HUMAN NEUROSCIENCE, 2021, 14
  • [46] Tomato Disease Classification and Identification Method Based on Multimodal Fusion Deep Learning
    Zhang, Ning
    Wu, Huarui
    Zhu, Huaji
    Deng, Ying
    Han, Xiao
    AGRICULTURE-BASEL, 2022, 12 (12)
  • [47] Estimation of indoor occupancy level based on machine learning and multimodal environmental data
    Siecinski, Szymon
    Mohammadi, Esfandiar
    Grzegorzek, Marcin
    2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024, 2024, : 868 - 872
  • [48] Enhancing fetal electrocardiogram classification: A hybrid approach incorporating multimodal data fusion and advanced deep learning models
    Ziani S.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (18): 55011 - 55051
  • [49] A Deep Reinforcement Learning Method For Multimodal Data Fusion in Action Recognition
    Guo, Jiale
    Liu, Qiang
    Chen, Enqing
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 120 - 124
  • [50] Deep learning in multimodal remote sensing data fusion: A comprehensive review
    Li, Jiaxin
    Hong, Danfeng
    Gao, Lianru
    Yao, Jing
    Zheng, Ke
    Zhang, Bing
    Chanussot, Jocelyn
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2022, 112