On the use of Multi-Modal Sensing in Sign Language Classification

Cited by: 0
Authors
Sharma, Sneha [1 ]
Gupta, Rinki [1 ]
Kumar, Arun [2 ]
Institutions
[1] Amity Univ, Elect & Commun Engn Dept, Noida, Uttar Pradesh, India
[2] Indian Inst Technol Delhi, Ctr Appl Res Elect, New Delhi, India
Keywords
Electromyography; accelerometer; sign language; support-vector machine; ANOVA;
DOI
10.1109/spin.2019.8711702
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
In the literature, sign language recognition (SLR) has been proposed using multi-channel data acquisition devices with various sensing modalities. When using wearable sensors, multi-modal data acquisition has been shown to be particularly useful for improving classification accuracy as compared to single-modality data acquisition. In this work, a statistical analysis is presented to quantify the performance of different combinations of wearable sensors, namely surface electromyogram (sEMG), accelerometers, and gyroscopes, in the classification of isolated signs. Twelve signs from Indian sign language are considered such that the signs consist of static hand postures, simple wrist motions, and complex forearm motions. The following four combinations of sensor modalities are compared for classification accuracy using statistical tests: 1) accelerometer and gyroscope, 2) sEMG and accelerometer, 3) sEMG and gyroscope, and 4) sEMG, accelerometer, and gyroscope. Results obtained on actual data indicate that the combination of all three modalities, namely sEMG, accelerometer, and gyroscope, yields the best classification accuracy of 88.25% as compared to the remaining sensor combinations. However, the statistical analysis of the classification accuracies using analysis of variance (ANOVA) indicates that the use of sEMG sensors is particularly useful in the classification of static hand postures. Moreover, the classification of signs involving dynamic motion of the hands, whether simple wrist motion or motion along a complex trajectory, is comparatively better with any sensing modality than the classification of static hand postures.
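The abstract's core statistical step, comparing per-modality classification accuracies with one-way ANOVA, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the accuracy values below are hypothetical placeholders, and the F-statistic is computed from scratch in pure Python rather than with a statistics library.

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F-statistic for a list of groups,
    where each group is a list of observations (e.g. per-fold
    classification accuracies for one sensor combination)."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares: spread of group means around the grand mean.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean.
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    msb = ssb / (k - 1)   # between-group mean square
    msw = ssw / (n - k)   # within-group mean square
    return msb / msw


# Hypothetical per-fold accuracies for the four sensor combinations
# from the abstract (ACC+GYR, sEMG+ACC, sEMG+GYR, sEMG+ACC+GYR).
accuracies = [
    [0.80, 0.82, 0.79],
    [0.84, 0.83, 0.85],
    [0.83, 0.86, 0.84],
    [0.88, 0.89, 0.87],
]
f_stat = one_way_anova_f(accuracies)
# A large F relative to the F(k-1, n-k) critical value suggests the
# sensor combinations differ significantly in mean accuracy.
```

In practice one would compare `f_stat` against the critical value of the F-distribution (or use `scipy.stats.f_oneway`, which returns the p-value directly); the pure-Python version above is kept dependency-free for clarity.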
Pages: 495 - 500
Number of pages: 6