On the use of Multi-Modal Sensing in Sign Language Classification

Cited: 0
Authors
Sharma, Sneha [1 ]
Gupta, Rinki [1 ]
Kumar, Arun [2 ]
Affiliations
[1] Amity Univ, Elect & Commun Engn Dept, Noida, Uttar Pradesh, India
[2] Indian Inst Technol Delhi, Ctr Appl Res Elect, New Delhi, India
Keywords
Electromyography; accelerometer; sign language; support-vector machine; ANOVA;
DOI
10.1109/spin.2019.8711702
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
In the literature, sign language recognition (SLR) has been proposed using multi-channel data acquisition devices with various sensing modalities. When wearable sensors are used, multimodal data acquisition has been shown to be particularly useful for improving classification accuracy compared to single-modality acquisition. In this work, a statistical analysis is presented to quantify the performance of different combinations of wearable sensors, namely surface electromyogram (sEMG), accelerometers and gyroscopes, in the classification of isolated signs. Twelve signs from the Indian sign language are considered, such that the signs consist of static hand postures as well as simple wrist motions and complex motions of the forearm. The following four combinations of sensor modalities are compared for classification accuracy using statistical tests: 1) accelerometer and gyroscope, 2) sEMG and accelerometer, 3) sEMG and gyroscope, and 4) sEMG, accelerometer and gyroscope. Results obtained on actual data indicate that the combination of all three modalities, namely sEMG, accelerometer and gyroscope, yields the best classification accuracy of 88.25% as compared to the remaining sensor combinations. However, statistical analysis of the classification accuracies using analysis of variance (ANOVA) indicates that sEMG sensors are particularly useful in the classification of static hand postures. Moreover, signs involving dynamic hand motion, whether simple wrist motion or motion of the hand along a complex trajectory, are classified comparatively better with any sensing modality than static hand postures.
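The statistical comparison described in the abstract can be sketched as a one-way ANOVA over per-fold classification accuracies, one group per sensor combination. The accuracy values below are hypothetical placeholders for illustration only, not the paper's data (the paper reports a best accuracy of 88.25% for sEMG + accelerometer + gyroscope).

```python
# One-way ANOVA comparing classification accuracies across four
# sensor-modality combinations, as in the paper's methodology.
from scipy.stats import f_oneway

# Hypothetical per-fold accuracies for each sensor combination.
acc_acc_gyro = [0.78, 0.81, 0.76, 0.80, 0.79]  # accelerometer + gyroscope
acc_emg_acc  = [0.83, 0.85, 0.82, 0.84, 0.86]  # sEMG + accelerometer
acc_emg_gyro = [0.82, 0.84, 0.81, 0.83, 0.85]  # sEMG + gyroscope
acc_all      = [0.87, 0.89, 0.88, 0.90, 0.88]  # sEMG + accelerometer + gyroscope

# f_oneway tests the null hypothesis that all groups share the same mean;
# a small p-value indicates the sensor combinations differ significantly.
f_stat, p_value = f_oneway(acc_acc_gyro, acc_emg_acc, acc_emg_gyro, acc_all)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

In practice each group would hold accuracies from repeated cross-validation folds or subjects; a significant ANOVA result would then be followed by post-hoc pairwise tests to identify which combinations differ.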
Pages: 495 - 500
Page count: 6