HarMI: Human Activity Recognition Via Multi-Modality Incremental Learning

Cited by: 9
|
Authors
Zhang, Xiao [1 ,2 ]
Yu, Hongzheng [1 ]
Yang, Yang [4 ]
Gu, Jingjing [5 ]
Li, Yujun [3 ]
Zhuang, Fuzhen [6 ,7 ]
Yu, Dongxiao [1 ]
Ren, Zhaochun [1 ]
Affiliations
[1] Shandong Univ, Sch Comp Sci & Technol, Qingdao 266237, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
[3] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266237, Peoples R China
[4] Nanjing Univ Sci & Technol, Nanjing 210014, Peoples R China
[5] Nanjing Univ Aeronaut & Astronaut, Nanjing 210016, Peoples R China
[6] Beihang Univ, Inst Artificial Intelligence, Beijing 100191, Peoples R China
[7] Chinese Acad Sci, Xiamen Data Intelligence Acad ICT, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China (NSFC);
Keywords
Sensors; Training; Data models; Activity recognition; Correlation; Intelligent sensors; Training data; Catastrophic forgetting; incremental learning; human activity recognition; mobile device; multi-modality;
DOI
10.1109/JBHI.2021.3085602
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Code
0812 ;
Abstract
Nowadays, with the development of various kinds of sensors in smartphones and wearable devices, human activity recognition (HAR) has been widely researched and has numerous applications in healthcare, smart cities, etc. Many techniques based on hand-crafted feature engineering or deep neural networks have been proposed for sensor-based HAR. However, these existing methods usually recognize activities offline, which means all the data must be collected before training, occupying large-capacity storage space. Moreover, once the offline model training is finished, the trained model cannot recognize new activities unless it is retrained from scratch, incurring a high cost in both time and space. In this paper, we propose a multi-modality incremental learning model, called HarMI, with continuous learning ability. The proposed HarMI model can start training quickly with little storage space and easily learn new activities without storing previous training data. In detail, we first adopt an attention mechanism to align heterogeneous sensor data with different frequencies. In addition, to overcome catastrophic forgetting in incremental learning, HarMI utilizes elastic weight consolidation and canonical correlation analysis from a multi-modality perspective. Extensive experiments on two public datasets demonstrate that HarMI achieves superior performance compared with several state-of-the-art methods.
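The abstract mentions elastic weight consolidation (EWC) as one component for mitigating catastrophic forgetting. As a minimal illustrative sketch (not the paper's implementation, and ignoring HarMI's multi-modality extension), the standard EWC regularizer penalizes changes to parameters in proportion to their diagonal Fisher information estimated on the previous task:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Standard EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2,
    where theta_star are the parameters learned on the previous task and
    F_i is the (diagonal) Fisher information, i.e. how important weight i
    was for that task. This term is added to the new task's loss."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy example (all values hypothetical): a high-Fisher weight that drifts
# from the old-task optimum is penalized far more than a low-Fisher one.
theta_star = np.array([1.0, -2.0, 0.5])   # optimum on the previous task
fisher     = np.array([10.0, 0.1, 1.0])   # per-weight importance
theta      = np.array([1.2, -1.0, 0.5])   # current parameters

penalty = ewc_penalty(theta, theta_star, fisher, lam=1.0)
```

In practice the Fisher diagonal is approximated from squared gradients of the log-likelihood on the previous task's data, so no old training samples need to be stored, which matches the storage-free incremental setting the abstract describes.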
Pages: 939-951 (13 pages)
Related Papers (50 total)
  • [21] Multi-Modality Spatio-Temporal Forecasting via Self-Supervised Learning
    Deng, Jiewen
    Jiang, Renhe
    Zhang, Jiaqi
    Song, Xuan
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 2018 - 2026
  • [22] Multi-modality deep forest for hand motion recognition via fusing sEMG and acceleration signals
    Fang, Yinfeng
    Lu, Huiqiao
    Liu, Han
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (04) : 1119 - 1131
  • [24] Measuring multi-modality similarities via subspace learning for cross-media retrieval
    Zhang, Hong
    Weng, Jianguang
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2006, PROCEEDINGS, 2006, 4261 : 979 - +
  • [25] Learning based Multi-modality Image and Video Compression
    Lu, Guo
    Zhong, Tianxiong
    Geng, Jing
    Hu, Qiang
    Xu, Dong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 6073 - 6082
  • [26] Object Tracking Based on Multi-modality Dictionary Learning
    Wang, Jing
    Zhu, Hong
    Xue, Shan
    Shi, Jing
    IMAGE AND GRAPHICS (ICIG 2017), PT II, 2017, 10667 : 129 - 138
  • [27] Few-shot Learning for Multi-Modality Tasks
    Chen, Jie
    Ye, Qixiang
    Yang, Xiaoshan
    Zhou, S. Kevin
    Hong, Xiaopeng
    Zhang, Li
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 5673 - 5674
  • [28] Focal Channel Knowledge Distillation for Multi-Modality Action Recognition
    Gan, Lipeng
    Cao, Runze
    Li, Ning
    Yang, Man
    Li, Xiaochao
    IEEE ACCESS, 2023, 11 : 78285 - 78298
  • [29] Learning Latent Factors in Linked Multi-modality Data
    He, Tiantian
    Chan, Keith C. C.
    FOUNDATIONS OF INTELLIGENT SYSTEMS (ISMIS 2018), 2018, 11177 : 214 - 224
  • [30] Convolutional non-local spatial-temporal learning for multi-modality action recognition
    Ren, Ziliang
    Yuan, Huaqiang
    Wei, Wenhong
    Zhao, Tiezhu
    Zhang, Qieshi
    ELECTRONICS LETTERS, 2022, 58 (20) : 765 - 767