HarMI: Human Activity Recognition Via Multi-Modality Incremental Learning

Cited by: 9
Authors
Zhang, Xiao [1 ,2 ]
Yu, Hongzheng [1 ]
Yang, Yang [4 ]
Gu, Jingjing [5 ]
Li, Yujun [3 ]
Zhuang, Fuzhen [6 ,7 ]
Yu, Dongxiao [1 ]
Ren, Zhaochun [1 ]
Affiliations
[1] Shandong Univ, Sch Comp Sci & Technol, Qingdao 266237, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
[3] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266237, Peoples R China
[4] Nanjing Univ Sci & Technol, Nanjing 210014, Peoples R China
[5] Nanjing Univ Aeronaut & Astronaut, Nanjing 210016, Peoples R China
[6] Beihang Univ, Inst Artificial Intelligence, Beijing 100191, Peoples R China
[7] Chinese Acad Sci, Xiamen Data Intelligence Acad ICT, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sensors; Training; Data models; Activity recognition; Correlation; Intelligent sensors; Training data; Catastrophic forgetting; incremental learning; human activity recognition; mobile device; multi-modality;
DOI
10.1109/JBHI.2021.3085602
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Nowadays, with the development of various sensors in smartphones and wearable devices, human activity recognition (HAR) has been widely researched and has numerous applications in healthcare, smart cities, etc. Many techniques based on hand-crafted feature engineering or deep neural networks have been proposed for sensor-based HAR. However, these existing methods usually recognize activities offline, which means all data must be collected before training, occupying large-capacity storage space. Moreover, once offline model training has finished, the trained model cannot recognize new activities unless it is retrained from scratch, at a high cost in time and space. In this paper, we propose a multi-modality incremental learning model, called HarMI, with continuous learning ability. The proposed HarMI model can start training quickly with little storage space and easily learn new activities without storing previous training data. In detail, we first adopt an attention mechanism to align heterogeneous sensor data with different sampling frequencies. In addition, to overcome catastrophic forgetting in incremental learning, HarMI utilizes elastic weight consolidation and canonical correlation analysis from a multi-modality perspective. Extensive experiments on two public datasets demonstrate that HarMI achieves superior performance compared with several state-of-the-art methods.
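The record shows only the abstract, not the paper's code, but the elastic weight consolidation (EWC) regularizer that HarMI builds on to counter catastrophic forgetting can be sketched in a few lines. This is a minimal illustration under the common diagonal-Fisher approximation; the function names and the form of `task_loss` are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularizer: quadratically anchors the current parameters
    `theta` to the old-task optimum `theta_star`, weighting each
    coordinate by its (diagonal) Fisher information `fisher`.
    Large Fisher values mark weights important to old tasks, so
    moving them is penalized more heavily."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def total_loss(task_loss, theta, theta_star, fisher, lam=1.0):
    """Objective for a new task: its own loss plus the EWC anchor,
    trading plasticity (fit the new activity) against stability
    (retain previously learned activities)."""
    return task_loss + ewc_penalty(theta, theta_star, fisher, lam)
```

For example, with `theta = [1.0, 2.0]`, `theta_star = [0.0, 2.0]`, and `fisher = [2.0, 4.0]`, only the first coordinate has drifted, so the penalty is `0.5 * 2.0 * 1.0**2 = 1.0`. HarMI applies this idea per modality and couples it with canonical correlation analysis to preserve cross-modality correlations, a detail this sketch omits.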
Pages: 939-951 (13 pages)
Related papers
50 in total
  • [41] Multi-modality fusion learning for the automatic diagnosis of optic neuropathy
    Cao, Zheng
    Sun, Chuanbin
    Wang, Wenzhe
    Zheng, Xiangshang
    Wu, Jian
    Gao, Honghao
    PATTERN RECOGNITION LETTERS, 2021, 142 : 58 - 64
  • [42] Multi-modality Network with Visual and Geometrical Information for Micro Emotion Recognition
    Guo, Jianzhu
    Zhou, Shuai
    Wu, Jinlin
    Wan, Jun
    Zhu, Xiangyu
    Lei, Zhen
    Li, Stan Z.
    2017 12TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2017), 2017, : 814 - 819
  • [43] Cross-Modal Federated Human Activity Recognition via Modality-Agnostic and Modality-Specific Representation Learning
    Yang, Xiaoshan
    Xiong, Baochen
    Huang, Yi
    Xu, Changsheng
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3063 - 3071
  • [44] Pedestrian recognition by using a kernel-based multi-modality approach
    Sirbu, Adela-Maria
    Rogozan, Alexandrina
    Diosan, Laura
    Bensrhair, Abdelaziz
    16TH INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING (SYNASC 2014), 2014, : 258 - 263
  • [45] Multi-Modality Mobile Image Recognition Based on Thermal and Visual Cameras
    Lai, Jui-Hsin
    Lin, Chung-Ching
    Chen, Chun-Fu
    Lin, Ching-Yung
    2015 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM), 2015, : 477 - 482
  • [46] Multi-modality, in vivo imaging of signal transduction pathway activity
    Blasberg, RG
    Doubrovin, MM
    Serganova, IS
    Ponomarev, VB
    Gelovani, JG
    FASEB JOURNAL, 2004, 18 (04): : A2 - A2
  • [47] A Novel Two-Stream Transformer-Based Framework for Multi-Modality Human Action Recognition
    Shi, Jing
    Zhang, Yuanyuan
    Wang, Weihang
    Xing, Bin
    Hu, Dasha
    Chen, Liangyin
    APPLIED SCIENCES-BASEL, 2023, 13 (04):
  • [48] Explainable multi-task learning for multi-modality biological data analysis
    Tang, Xin
    Zhang, Jiawei
    He, Yichun
    Zhang, Xinhe
    Lin, Zuwan
    Partarrieu, Sebastian
    Hanna, Emma Bou
    Ren, Zhaolin
    Shen, Hao
    Yang, Yuhong
    Wang, Xiao
    Li, Na
    Ding, Jie
    Liu, Jia
    NATURE COMMUNICATIONS, 2023, 14 (01)
  • [49] Multi-concept multi-modality active learning for interactive video annotation
    Wang, Meng
    Hua, Xian-Sheng
    Song, Yan
    Tang, Jinhui
    Dai, Li-Rong
    ICSC 2007: INTERNATIONAL CONFERENCE ON SEMANTIC COMPUTING, PROCEEDINGS, 2007, : 321 - +
  • [50] Incremental Cross-Modality Deep Learning for Pedestrian Recognition
    Pop, Danut Ovidiu
    Rogozan, Alexandrina
    Nashashibi, Fawzi
    Bensrhair, Abdelaziz
    2017 28TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV 2017), 2017, : 523 - 528