On the Benefit of FMG and EMG Sensor Fusion for Gesture Recognition Using Cross-Subject Validation

Citations: 0
Authors
Rohr, Maurice [1 ]
Haidamous, Jad [1 ]
Schaefer, Niklas [2 ]
Schaumann, Stephan [2 ]
Latsch, Bastian [2 ]
Kupnik, Mario [2 ]
Antink, Christoph Hoog [1 ]
Affiliations
[1] Tech Univ Darmstadt, AI Syst Med Lab, D-64283 Darmstadt, Germany
[2] Tech Univ Darmstadt, Measurement & Sensor Technol Grp, D-64283 Darmstadt, Germany
Keywords
Electromyography; Muscles; Gesture recognition; Force; Hands; Accuracy; Sensor fusion; Data acquisition; Cameras; Feature extraction; Electromyography (EMG); ferroelectrets; force myography (FMG); gesture recognition; sensor fusion; FORCE MYOGRAPHY; SURFACE ELECTROMYOGRAPHY; CLASSIFICATION; PIEZOELECTRETS;
DOI
10.1109/TNSRE.2025.3543649
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Hand gestures are a natural form of human communication, making gesture recognition a sensible approach for intuitive human-computer interaction. Wearable sensors on the forearm can detect the muscle contractions that generate these gestures, but classification approaches relying on a single measured modality lack accuracy and robustness. In this work, we analyze sensor fusion of force myography (FMG) and electromyography (EMG) for gesture recognition. We employ piezoelectric FMG sensors based on ferroelectrets and a commercial EMG system in a user study with 13 participants to measure 66 distinct hand movements with 10 ms labelling precision. Three classification tasks, namely flexion and extension, single finger, and all finger movement classification, are performed using common handcrafted features as input to machine learning classifiers. Subsequently, the evaluation covers the effectiveness of the sensor fusion using correlation analysis, classification performance based on leave-one-subject-out cross-validation and 5x2cv t-tests, and the effects of involuntary movements on classification. We find that sensor fusion leads to a significant improvement (42% higher average recognition accuracy) on all three tasks and that both sensor modalities contain complementary information. Furthermore, we confirm this finding using reduced FMG and EMG sensor sets. This study reinforces the results of prior research on the effectiveness of sensor fusion by performing meticulous statistical analyses, thereby paving the way for multi-sensor gesture recognition in assistance systems.
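The abstract describes a pipeline of common handcrafted features, feature-level fusion of FMG and EMG channels, and leave-one-subject-out cross-validation. A minimal sketch of that pipeline on synthetic data is shown below; the specific features (mean absolute value, root mean square, waveform length) and the random-forest classifier are illustrative assumptions, not the paper's confirmed feature set or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def handcrafted_features(window):
    """Illustrative time-domain features for one window of shape (n_samples, n_channels)."""
    mav = np.mean(np.abs(window), axis=0)                  # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))            # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
    return np.concatenate([mav, rms, wl])

# Synthetic stand-in data: 120 windows, 8 EMG channels, 4 FMG channels, 6 subjects
rng = np.random.default_rng(0)
n_windows = 120
emg = rng.standard_normal((n_windows, 200, 8))
fmg = rng.standard_normal((n_windows, 200, 4))
labels = rng.integers(0, 3, n_windows)     # 3 gesture classes
subjects = rng.integers(0, 6, n_windows)   # subject ID per window

# Feature-level fusion: concatenate per-modality feature vectors
X = np.array([np.concatenate([handcrafted_features(e), handcrafted_features(f)])
              for e, f in zip(emg, fmg)])

# Leave-one-subject-out cross-validation: each subject is held out once
scores = cross_val_score(RandomForestClassifier(random_state=0), X, labels,
                         groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())
```

On real data, comparing the fused feature matrix against EMG-only and FMG-only feature matrices under the same subject-wise splits reproduces the kind of fusion-versus-single-modality comparison the study reports.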
Pages
935-944 (10 pages)