On the use of Multi-Modal Sensing in Sign Language Classification

Times Cited: 0
Authors
Sharma, Sneha [1 ]
Gupta, Rinki [1 ]
Kumar, Arun [2 ]
Affiliations
[1] Amity Univ, Elect & Commun Engn Dept, Noida, Uttar Pradesh, India
[2] Indian Inst Technol Delhi, Ctr Appl Res Elect, New Delhi, India
Keywords
Electromyography; accelerometer; sign language; support-vector machine; ANOVA;
DOI
10.1109/spin.2019.8711702
CLC Classification Codes
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
In the literature, sign language recognition (SLR) has been proposed using multi-channel data acquisition devices with various sensing modalities. When using wearable sensors, multimodal data acquisition has been shown to be particularly useful for improving classification accuracy as compared to single-modality data acquisition. In this work, a statistical analysis is presented to quantify the performance of different combinations of wearable sensors, namely surface electromyogram (sEMG), accelerometers and gyroscopes, in the classification of isolated signs. Twelve signs from the Indian sign language are considered, such that the signs include static hand postures, simple wrist motions and complex motion of the forearm. The following four combinations of sensor modalities are compared for classification accuracy using statistical tests: 1) accelerometer and gyroscope, 2) sEMG and accelerometer, 3) sEMG and gyroscope, and 4) sEMG, accelerometer and gyroscope. Results obtained on actual data indicate that the combination of all three modalities, namely sEMG, accelerometer and gyroscope, yields the best classification accuracy of 88.25% as compared to the remaining sensor combinations. However, the statistical analysis of the classification accuracies using analysis of variance (ANOVA) indicates that the use of sEMG sensors is particularly useful in the classification of static hand postures. Moreover, the classification of signs involving dynamic motion of the hands, either with simple wrist motion or with motion along a complex trajectory, is better with any of the sensing modalities than the classification of static hand postures.
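As a rough illustration of the kind of pipeline the abstract describes, the sketch below compares the four sensor combinations with an SVM classifier and then applies a one-way ANOVA to the resulting accuracies. It is only a minimal sketch, not the authors' code: the synthetic feature arrays, feature-level fusion by concatenation, 5-fold cross-validation and the scikit-learn/SciPy APIs are all assumptions; only the use of an SVM and ANOVA is stated in the paper.

```python
# Illustrative sketch (not the authors' implementation): compare sensor-modality
# combinations for isolated sign classification with an SVM, then test whether
# the per-fold accuracies differ significantly via one-way ANOVA.
# Feature arrays, dimensions and trial counts below are synthetic placeholders.
import numpy as np
from scipy.stats import f_oneway
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_classes = 240, 12          # 12 isolated signs, as in the paper
y = rng.integers(0, n_classes, n_trials)

# Placeholder feature blocks per modality (e.g., time-domain sEMG features and
# accelerometer/gyroscope statistics extracted per trial).
features = {
    "emg": rng.normal(size=(n_trials, 32)),
    "acc": rng.normal(size=(n_trials, 18)),
    "gyro": rng.normal(size=(n_trials, 18)),
}

combos = {
    "acc+gyro": ["acc", "gyro"],
    "emg+acc": ["emg", "acc"],
    "emg+gyro": ["emg", "gyro"],
    "emg+acc+gyro": ["emg", "acc", "gyro"],
}

accs = {}
for name, mods in combos.items():
    X = np.hstack([features[m] for m in mods])      # feature-level fusion
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    accs[name] = cross_val_score(clf, X, y, cv=5)   # 5-fold accuracies
    print(f"{name:14s} mean accuracy = {accs[name].mean():.3f}")

# One-way ANOVA across the four combinations' fold accuracies:
# a small p-value suggests the sensor combination affects accuracy.
F, p = f_oneway(*accs.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.3f}")
```

Concatenation-based fusion is only one possible strategy; the paper's actual feature set, fusion scheme and cross-validation protocol may differ.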
Pages: 495-500
Number of pages: 6