Contactless blood oxygen estimation from face videos: A multi-model fusion method based on deep learning

Cited by: 9
Authors
Hu, Min [1 ]
Wu, Xia [1 ]
Wang, Xiaohua [1 ]
Xing, Yan [2 ]
An, Ning [1 ,3 ]
Shi, Piao [1 ]
Affiliations
[1] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Anhui Prov Key Lab Affect Comp & Adv Intelligent M, Minist Educ, Hefei 230601, Anhui, Peoples R China
[2] Hefei Univ Technol, Sch Math, Hefei 230601, Anhui, Peoples R China
[3] Hefei Univ Technol, Natl Smart Eldercare Int S&T Cooperat Base, Hefei 230601, Anhui, Peoples R China
Keywords
Estimation; Remote photo-plethysmography; Deep learning; Residual network; Coordinate attention; Multi-model fusion; PULSE; NONCONTACT; SIGNAL;
DOI
10.1016/j.bspc.2022.104487
CLC classification
R318 [Biomedical Engineering]
Discipline code
0831
Abstract
Blood oxygen saturation (SpO2), a key indicator of respiratory function, has received increasing attention during the COVID-19 pandemic. Clinical results show that patients with COVID-19 are likely to have distinctly lower SpO2 before the onset of significant symptoms. To address the shortcomings of current methods for monitoring SpO2 from face videos, this paper proposes a novel multi-model fusion method based on deep learning for SpO2 estimation. The method includes a feature extraction network named Residuals and Coordinate Attention (RCA) and a multi-model fusion SpO2 estimation module. The RCA network uses a residual block cascade and a coordinate attention mechanism to focus on the correlation between feature channels and the location information of the feature space. The multi-model fusion module includes the Color Channel Model (CCM) and the Network-Based Model (NBM). To fully use the color feature information in face videos, an image generator is constructed in the CCM to calculate SpO2 by reconstructing the red and blue channel signals. In addition, to reduce the disturbance from other physiological signals, a novel two-part loss function is designed in the NBM. Given the complementarity of the features and models that the CCM and NBM focus on, a Multi-Model Fusion Model (MMFM) is constructed. The experimental results on the PURE and VIPL-HR datasets show that all three models meet the clinical requirement (mean absolute error <= 2%) and demonstrate that multi-model fusion can fully exploit the SpO2 features of face videos and improve SpO2 estimation performance. Our research achievements will facilitate applications in remote medicine and home health.
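The abstract states that the CCM computes SpO2 from reconstructed red and blue channel signals. The paper's exact formulation is not given here, but color-channel SpO2 methods of this kind typically build on the classical ratio-of-ratios relation, which can be sketched as follows (the calibration coefficients `a` and `b` below are hypothetical placeholders; real systems fit them against a reference pulse oximeter):

```python
import numpy as np

def estimate_spo2(red, blue, a=100.0, b=-5.0):
    """Ratio-of-ratios SpO2 estimate from per-frame mean red/blue signals.

    red, blue: 1-D arrays of mean face-region pixel intensities per frame.
    a, b: linear calibration coefficients (hypothetical values here).
    """
    # AC component: standard deviation of the pulsatile signal over the window;
    # DC component: mean intensity over the window.
    ac_r, dc_r = np.std(red), np.mean(red)
    ac_b, dc_b = np.std(blue), np.mean(blue)
    rr = (ac_r / dc_r) / (ac_b / dc_b)  # the "ratio of ratios"
    return a + b * rr

# Synthetic demo: a weak cardiac pulse riding on each channel's baseline.
t = np.linspace(0.0, 10.0, 300)
red = 120.0 + 1.0 * np.sin(2 * np.pi * 1.2 * t)
blue = 80.0 + 0.8 * np.sin(2 * np.pi * 1.2 * t)
spo2 = estimate_spo2(red, blue)
```

With physiologically plausible signals the estimate falls in the 90-100% range; the paper's contribution is to replace such hand-tuned calibration with learned models (RCA features, CCM/NBM fusion) and to validate against the 2% mean-absolute-error clinical threshold.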
Pages: 14
Related papers
50 records
  • [31] Multi-model train state estimation based on multi-sensor parallel fusion filtering
    Jin, Yongze
    Xie, Guo
    Li, Yankai
    Shang, Linyu
    Hei, Xinhong
    Ji, Wenjiang
    Han, Ning
    Wang, Bo
    ACCIDENT ANALYSIS AND PREVENTION, 2022, 165
  • [32] Multi-state joint estimation of series battery pack based on multi-model fusion
    Liu, Fang
    Yu, Dan
    Su, Weixing
    Bu, Fantao
    ELECTROCHIMICA ACTA, 2023, 443
  • [33] A Survey on Deep Learning Face Age Estimation Model: Method and Ethnicity
    Dahlan, Hadi A.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2021, 12 (11) : 86 - 101
  • [34] Deep multi-model fusion network based real object tactile understanding from haptic data
    Joolee, Joolekha Bibi
    Uddin, Md Azher
    Jeon, Seokhee
    APPLIED INTELLIGENCE, 2022, 52 (14) : 16605 - 16620
  • [36] A feature reuse based multi-model fusion method for state of health estimation of lithium-ion batteries
    Bai, Junqi
    Huang, Jiayin
    Luo, Kai
    Yang, Fan
    Xian, Yanhua
    JOURNAL OF ENERGY STORAGE, 2023, 70
  • [37] A Self-Learning Fault Diagnosis Strategy Based on Multi-Model Fusion
    Wang, Tianzhen
    Dong, Jingjing
    Xie, Tao
    Diallo, Demba
    Benbouzid, Mohamed
    INFORMATION, 2019, 10 (03):
  • [38] Multi-Model Fusion Framework Using Deep Learning for Visual-Textual Sentiment Classification
    Al-Tameemi, Israa K. Salman
    Feizi-Derakhshi, Mohammad-Reza
    Pashazadeh, Saeed
    Asadpour, Mohammad
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76 (02): : 2145 - 2177
  • [40] Financial Forecasting Method for Generative Adversarial Networks Based on Multi-model Fusion
    Lin, Pei-Guang
    Li, Qing-Tao
    Zhou, Jia-Qian
    Wang, Ji-Hou
    Jian, Mu-Wei
    Zhang, Chen
    Journal of Computers (Taiwan), 2023, 34 (01): : 131 - 144