Respiratory Rate Estimation from Thermal Video Data Using Spatio-Temporal Deep Learning

Cited by: 0
Authors
Mozafari, Mohsen [1]
Law, Andrew J. [1,2]
Goubran, Rafik A. [1]
Green, James R. [1]
Affiliations
[1] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
[2] Natl Res Council Canada NRC, Flight Res Lab, Ottawa, ON K1A 0R6, Canada
Keywords
respiration rate estimation; thermal video; deep learning; face detection
DOI
10.3390/s24196386
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Thermal videos provide a privacy-preserving yet information-rich data source for remote health monitoring, especially for respiration rate (RR) estimation. This paper introduces an end-to-end deep learning approach to RR measurement using thermal video data. A detection transformer (DeTr) first locates the subject's facial region of interest in each thermal frame. A respiratory signal is then estimated from the dynamically cropped thermal video using 3D convolutional neural network and bi-directional long short-term memory stages. To account for the expected phase shift between the respiration signal measured by a respiratory effort belt and that observed in facial video, a novel loss function based on negative maximum cross-correlation and absolute frequency peak difference was introduced. Thermal recordings with simultaneous gold standard respiratory effort measurements were collected from 22 subjects while they sat or stood, both with and without a face mask. The RR estimation results showed that our proposed method outperformed existing models, achieving an error of only 1.6 breaths per minute across the four conditions. The proposed method sets a new state of the art for RR estimation accuracy while still permitting real-time RR estimation.
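As a rough illustration of the loss described in the abstract, the PyTorch sketch below combines a negative maximum cross-correlation term (which tolerates the phase shift between the effort belt and the facial thermal signal) with an absolute difference between the dominant respiratory frequencies of the predicted and reference waveforms. The function name rr_loss, the sampling rate fs, the term weight alpha, the temperature tau, and the softmax-based spectral peak estimate (used so the frequency term stays differentiable) are all assumptions for illustration, not details taken from the paper.

```python
import torch


def rr_loss(pred, target, fs=30.0, alpha=1.0, tau=10.0):
    """Hedged sketch: negative maximum cross-correlation plus absolute
    respiratory-frequency difference. Shapes: pred, target -> (batch, n)."""
    eps = 1e-8

    # Z-normalize so the cross-correlation peak is roughly bounded in [-1, 1].
    pred = (pred - pred.mean(-1, keepdim=True)) / (pred.std(-1, keepdim=True) + eps)
    target = (target - target.mean(-1, keepdim=True)) / (target.std(-1, keepdim=True) + eps)

    n = pred.shape[-1]

    # Cross-correlation over all lags via FFT (zero-padded to 2n); taking the
    # maximum over lags tolerates the belt-vs-face phase shift.
    fft_len = 2 * n
    xcorr = torch.fft.irfft(
        torch.fft.rfft(pred, n=fft_len) * torch.conj(torch.fft.rfft(target, n=fft_len)),
        n=fft_len,
    )
    max_xcorr = xcorr.max(dim=-1).values / n

    # Dominant respiratory frequency of each signal, estimated as a
    # softmax-weighted average over the magnitude spectrum. This is a
    # differentiable stand-in (an assumption) for a hard spectral peak.
    freqs = torch.fft.rfftfreq(n, d=1.0 / fs, device=pred.device)

    def soft_peak(x):
        power = torch.abs(torch.fft.rfft(x))
        # Scale to [0, tau] before softmax so the sharpness is controlled by tau.
        weights = torch.softmax(
            tau * power / (power.max(dim=-1, keepdim=True).values + eps), dim=-1
        )
        return (weights * freqs).sum(dim=-1)

    freq_diff = torch.abs(soft_peak(pred) - soft_peak(target))

    # Minimizing drives the correlation peak up and the frequency gap down.
    return (-max_xcorr + alpha * freq_diff).mean()
```

In a pipeline like the one described, such a loss would supervise the waveform produced by the 3D CNN and BiLSTM stages against the respiratory effort belt signal; the RR itself could then be read from the spectral peak of the estimated waveform.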
Pages: 16