HarMI: Human Activity Recognition Via Multi-Modality Incremental Learning

Cited by: 9
Authors
Zhang, Xiao [1 ,2 ]
Yu, Hongzheng [1 ]
Yang, Yang [4 ]
Gu, Jingjing [5 ]
Li, Yujun [3 ]
Zhuang, Fuzhen [6 ,7 ]
Yu, Dongxiao [1 ]
Ren, Zhaochun [1 ]
Affiliations
[1] Shandong Univ, Sch Comp Sci & Technol, Qingdao 266237, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
[3] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266237, Peoples R China
[4] Nanjing Univ Sci & Technol, Nanjing 210014, Peoples R China
[5] Nanjing Univ Aeronaut & Astronaut, Nanjing 210016, Peoples R China
[6] Beihang Univ, Inst Artificial Intelligence, Beijing 100191, Peoples R China
[7] Chinese Acad Sci, Xiamen Data Intelligence Acad ICT, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sensors; Training; Data models; Activity recognition; Correlation; Intelligent sensors; Training data; Catastrophic forgetting; incremental learning; human activity recognition; mobile device; multi-modality;
DOI
10.1109/JBHI.2021.3085602
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Nowadays, with the development of various kinds of sensors in smartphones and wearable devices, human activity recognition (HAR) has been widely researched and has numerous applications in healthcare, smart cities, etc. Many techniques based on hand-crafted feature engineering or deep neural networks have been proposed for sensor-based HAR. However, these existing methods usually recognize activities offline, which means all the data must be collected before training, occupying large-capacity storage space. Moreover, once offline model training has finished, the trained model cannot recognize new activities unless it is retrained from scratch, incurring a high cost in time and storage. In this paper, we propose a multi-modality incremental learning model, called HarMI, with continual learning ability. The proposed HarMI model can start training quickly with little storage space and easily learn new activities without storing previous training data. In detail, we first adopt an attention mechanism to align heterogeneous sensor data with different sampling frequencies. In addition, to overcome catastrophic forgetting in incremental learning, HarMI utilizes elastic weight consolidation and canonical correlation analysis from a multi-modality perspective. Extensive experiments on two public datasets demonstrate that HarMI achieves superior performance compared with several state-of-the-art methods.
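The abstract names elastic weight consolidation (EWC) as one of the mechanisms HarMI uses against catastrophic forgetting during incremental learning. Below is a minimal, generic sketch of an EWC-style penalty in PyTorch, not the authors' HarMI implementation: the names ewc_penalty, old_params, fisher, and ewc_lambda are hypothetical, and HarMI's multi-modality and canonical-correlation components are not reflected here.

```python
import torch

def ewc_penalty(model, old_params, fisher, ewc_lambda=100.0):
    # Quadratic penalty that keeps parameters close to the values learned on
    # previously seen activities, weighted by their diagonal Fisher information.
    # old_params[name] and fisher[name] are tensors saved after the previous task.
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * ewc_lambda * penalty

# Hypothetical usage while training on a newly added activity class:
#   total_loss = task_loss + ewc_penalty(model, old_params, fisher)
# so weights important for earlier activities are preserved while the new
# activity is being learned, without storing the previous training data.
```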
Pages: 939 - 951
Number of pages: 13
Related Articles
50 in total
  • [31] An Encoder Generative Adversarial Network for Multi-modality Image Recognition
    Chen, Yu
    Yang, Chunling
    Zhu, Min
    Yang, ShiYan
    IECON 2018 - 44TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2018, : 2689 - 2694
  • [32] Gait Activity Classification Using Multi-Modality Sensor Fusion: A Deep Learning Approach
    Yunas, Syed U.
    Ozanyan, Krikor B.
    IEEE SENSORS JOURNAL, 2021, 21 (15) : 16870 - 16879
  • [33] The incremental value of advanced cardiovascular multi-modality imaging in the investigation of cardiac masses
    Lech, P.
    Ma, G.
    Lee, A. F.
    Ripley, D. P.
    INTERNATIONAL JOURNAL OF CARDIOLOGY, 2016, 222 : 714 - 716
  • [34] Modality aware contrastive learning for multimodal human activity recognition
    Dixon, Sam
    Yao, Lina
    Davidson, Robert
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36 (16):
  • [35] Fake news detection in the Hindi language using multi-modality via transfer and ensemble learning
    Garg, Sonal
    Sharma, Dilip Kumar
    INTERNET TECHNOLOGY LETTERS, 2025, 8 (01)
  • [36] A NOVEL MULTI-MODALITY FRAMEWORK FOR EXPLORING BRAIN CONNECTIVITY HUBS VIA REINFORCEMENT LEARNING APPROACH
    Zhang, Shu
    Zhang, Haiyang
    Wang, Ruoyang
    Kang, Yanqing
    Yu, Sigang
    Wu, Jinru
    2023 IEEE 20TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI, 2023,
  • [38] Multi-modality machine learning predicting Parkinson's disease
    Makarious, Mary B.
    Leonard, Hampton L.
    Vitale, Dan
    Iwaki, Hirotaka
    Sargent, Lana
    Dadu, Anant
    Violich, Ivo
    Hutchins, Elizabeth
    Saffo, David
    Bandres-Ciga, Sara
    Kim, Jonggeol Jeff
    Song, Yeajin
    Maleknia, Melina
    Bookman, Matt
    Nojopranoto, Willy
    Campbell, Roy H.
    Hashemi, Sayed Hadi
    Botia, Juan A.
    Carter, John F.
    Craig, David W.
    Van Keuren-Jensen, Kendall
    Morris, Huw R.
    Hardy, John A.
    Blauwendraat, Cornelis
    Singleton, Andrew B.
    Faghri, Faraz
    Nalls, Mike A.
    NPJ PARKINSONS DISEASE, 2022, 8 (01)
  • [39] GCN-Based Multi-Modality Fusion Network for Action Recognition
    Liu, Shaocan
    Wang, Xingtao
    Xiong, Ruiqin
    Fan, Xiaopeng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 1242 - 1253
  • [40] Deep Adversarial Learning for Multi-Modality Missing Data Completion
    Cai, Lei
    Wang, Zhengyang
    Gao, Hongyang
    Shen, Dinggang
    Ji, Shuiwang
    KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 1158 - 1166