Deep learning-based sign language recognition system using both manual and non-manual components fusion

Cited by: 1
Authors
Jebali, Maher [1 ]
Dakhli, Abdesselem [1 ]
Bakari, Wided [1 ]
Affiliations
[1] Univ Hail, Comp Sci Dept, POB 2440, Hail 100190, Saudi Arabia
Source
AIMS MATHEMATICS | 2024, Vol. 9, No. 1
Keywords
CNN; CTC; recurrent neural network; sign language recognition; head pose
DOI
10.3934/math.2024105
Chinese Library Classification
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
Sign language is widely used by deaf and speech-impaired individuals to convey information; however, it requires substantial effort to learn fully. Sign language recognition (SLR) aims to close the gap between users and non-users of sign language by identifying signs from signing videos. This is a fundamental but arduous task, as sign language is carried out with complex and often fast hand gestures and motions, facial expressions, and expressive body postures. Non-manual features are increasingly being examined, since numerous signs share identical manual components but differ in their non-manual components. To this end, we propose a novel manual and non-manual SLR system (MNM-SLR) based on a convolutional neural network (CNN) that exploits multi-cue information to achieve a high recognition rate. Specifically, we propose a deep convolutional long short-term memory network that simultaneously exploits non-manual features, summarized here by the head pose, and models the embedded dynamics of the manual features. Unlike many prior works that rely on depth cameras, multi-camera setups, or electronic gloves, we use plain RGB video, which allows individuals to communicate with a deaf person through their personal devices. As a result, our framework achieves an accuracy of 90.12% on the SIGNUM dataset and 94.87% on the RWTH-PHOENIX-Weather 2014 dataset.
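Since the abstract describes a CNN plus long short-term memory architecture that fuses manual (hand) and non-manual (head-pose) cues and the keywords mention CTC, a minimal PyTorch sketch of such a two-stream fusion model is given below. This is not the authors' implementation: the class name MNMSLRSketch, the pose_dim and hidden parameters, and all layer sizes are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): two-stream CNN + BiLSTM fusing
# manual (RGB frames) and non-manual (head-pose) cues for CTC-style SLR.
import torch
import torch.nn as nn

class MNMSLRSketch(nn.Module):
    def __init__(self, num_classes, pose_dim=3, hidden=256):
        super().__init__()
        # Manual stream: small per-frame CNN over the RGB frames.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Non-manual stream: head-pose angles (e.g., yaw, pitch, roll) per frame.
        self.pose_fc = nn.Linear(pose_dim, 32)
        # Temporal model over the fused per-frame features.
        self.lstm = nn.LSTM(64 + 32, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes + 1)  # +1 for CTC blank

    def forward(self, frames, poses):
        # frames: (B, T, 3, H, W) RGB clip; poses: (B, T, pose_dim) head-pose angles.
        b, t = frames.shape[:2]
        manual = self.cnn(frames.flatten(0, 1)).view(b, t, -1)   # (B, T, 64)
        non_manual = torch.relu(self.pose_fc(poses))             # (B, T, 32)
        fused, _ = self.lstm(torch.cat([manual, non_manual], dim=-1))
        # Per-frame log-probabilities over gloss classes plus blank.
        return self.classifier(fused).log_softmax(dim=-1)        # (B, T, C)
```

Under these assumptions, training would transpose the output to (T, B, C) and pair it with gloss label sequences via nn.CTCLoss, matching the CTC keyword listed in the record.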
Pages: 2105-2122 (18 pages)