A General Multistage Deep Learning Framework for Sensor-Based Human Activity Recognition Under Bounded Computational Budget

Cited by: 0
Authors
Wang, Xing [1 ]
Zhang, Lei [1 ]
Cheng, Dongzhou [1 ]
Tang, Yin [2 ]
Wang, Shuoyuan [3 ]
Wu, Hao [4 ]
Song, Aiguo [5 ]
Affiliations
[1] Nanjing Normal Univ, Sch Elect & Automat Engn, Nanjing 210023, Peoples R China
[2] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
[3] Southern Univ Sci & Technol, Dept Stat & Data Sci, Shenzhen 518055, Peoples R China
[4] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650500, Yunnan, Peoples R China
[5] Southeast Univ, Sch Instrument Sci & Engn, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Human activity recognition; Computational efficiency; Accuracy; Proposals; Reinforcement learning; Predictive models; Power demand; Heuristic algorithms; Data models; early exit; human activity recognition (HAR); reinforcement learning; sensors; sequential decision;
DOI
10.1109/TIM.2024.3481549
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808 ; 0809 ;
Abstract
In recent years, sliding windows have been widely employed for sensor-based human activity recognition (HAR) due to their implementational simplicity. In this article, inspired by the fact that not all time intervals in a window are activity-relevant, we propose a novel multistage HAR framework named MS-HAR that implements a sequential decision procedure to progressively process a sequence of relatively small intervals, i.e., a reduced input, which is automatically cropped from the original window with reinforcement learning. Such a design naturally facilitates dynamic inference at runtime, which can be terminated at any time once the network reaches sufficiently high confidence in its current prediction. Compared with most existing works that directly process the whole window, our method allows the computational budget to be precisely controlled online by setting confidence thresholds, which forces the network to spend more computation on a "difficult" activity and less on an "easy" activity under a finite computational budget. Extensive experiments on four benchmark HAR datasets, namely WISDM, PAMAP2, USC-HAD, and one weakly labeled dataset, demonstrate that our method is considerably more flexible and efficient than competitive baselines. Moreover, the proposed framework is general, as it is compatible with most mainstream backbone networks.
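As a rough illustration of the inference procedure described in the abstract (not the authors' implementation), the PyTorch-style sketch below crops a small interval at each stage, accumulates features across stages, and exits as soon as the softmax confidence of the current prediction reaches a user-chosen threshold. In the paper the crop locations are learned with reinforcement learning; here the location head is only a placeholder, and all names (Stage, crop_interval, multistage_infer, locator) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


def crop_interval(window, center, length):
    """Crop `length` timesteps from `window` (B, C, T), centered at `center` in [0, 1]."""
    T = window.shape[-1]
    start = int(center * max(T - length, 0))
    return window[..., start:start + length]


class Stage(nn.Module):
    """One stage: encode a cropped interval, update the running state,
    emit class logits and the crop location proposed for the next stage."""
    def __init__(self, channels, hidden, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(hidden, num_classes)
        self.locator = nn.Linear(hidden, 1)   # placeholder for the learned crop policy

    def forward(self, interval, state):
        feat = self.encoder(interval).squeeze(-1)       # (B, hidden)
        state = state + feat                             # accumulate evidence across stages
        logits = self.classifier(state)
        next_center = torch.sigmoid(self.locator(state)).mean().item()
        return logits, next_center, state


@torch.no_grad()
def multistage_infer(window, stages, hidden, threshold=0.9, interval_len=32):
    """Run stages sequentially; stop early once the softmax confidence of the
    current prediction reaches `threshold`."""
    state = torch.zeros(window.shape[0], hidden)
    center = 0.5                                        # first crop: middle of the window
    for stage in stages:
        interval = crop_interval(window, center, interval_len)
        logits, center, state = stage(interval, state)
        probs = F.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.min().item() >= threshold:              # confident enough: early exit
            break
    return pred, conf


# Example: a 3-axis accelerometer window of 128 timesteps, three stages.
stages = [Stage(channels=3, hidden=64, num_classes=6) for _ in range(3)]
window = torch.randn(1, 3, 128)
pred, conf = multistage_infer(window, stages, hidden=64, threshold=0.9)

Lowering the threshold trades accuracy for compute, while raising it forces more stages to run on "difficult" windows; this is the mechanism the abstract describes for controlling the computational budget online.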
Pages: 15