Deep learning-based sign language recognition system using both manual and non-manual components fusion

Times Cited: 1
Authors
Jebali, Maher [1 ]
Dakhli, Abdesselem [1 ]
Bakari, Wided [1 ]
Affiliations
[1] Univ Hail, Comp Sci Dept, POB 2440, Hail 100190, Saudi Arabia
Source
AIMS MATHEMATICS, 2024, Vol. 9, No. 1
Keywords
CNN; CTC; recurrent neural network; sign language recognition; head pose;
DOI
10.3934/math.2024105
Chinese Library Classification
O29 (Applied Mathematics)
Discipline Code
070104
Abstract
Sign language is regularly used by deaf or speech-impaired individuals to convey information; however, acquiring full knowledge of it, or proficiency in it, demands substantial effort. Sign language recognition (SLR) aims to close the gap between users and non-users of sign language by identifying signs from video. This is a fundamental yet arduous task, as sign language is performed with complex, often fast hand gestures and motions, facial expressions, and expressive body postures. Non-manual features are now receiving particular attention, since numerous signs share identical manual components but differ in their non-manual components. To this end, we propose a novel manual and non-manual SLR system (MNM-SLR) based on a convolutional neural network (CNN) that exploits multi-cue information to achieve a high recognition rate. Specifically, we propose a deep convolutional long short-term memory network that simultaneously models non-manual features, summarized here by head pose, together with the embedded dynamics of manual features. In contrast to many previous works that relied on depth cameras, multi-camera setups, or electronic gloves, we use only RGB video, which allows individuals to communicate with a deaf person through their personal devices. As a result, our framework achieves high recognition rates, with an accuracy of 90.12% on the SIGNUM dataset and 94.87% on the RWTH-PHOENIX-Weather 2014 dataset.
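The abstract describes per-frame CNN features for the manual component fused with a head-pose (non-manual) cue and fed to an LSTM, with CTC used for sequence training. The following is a minimal PyTorch sketch of that fusion idea, assuming illustrative layer sizes and a 3-dimensional head-pose vector per frame; the class name and all hyperparameters are assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class MNMSLRSketch(nn.Module):
    """Hypothetical sketch of a CNN-LSTM multi-cue fusion model:
    per-frame CNN features (manual cues) are concatenated with
    head-pose features (non-manual cues), fed to an LSTM, and the
    output is shaped for CTC training. Sizes are illustrative."""

    def __init__(self, vocab_size=100, pose_dim=3, hidden=64):
        super().__init__()
        # Small per-frame CNN over RGB frames (manual component).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LSTM over fused per-frame features (manual + head pose).
        self.lstm = nn.LSTM(32 + pose_dim, hidden, batch_first=True)
        # One extra output class for the CTC blank symbol.
        self.fc = nn.Linear(hidden, vocab_size + 1)

    def forward(self, frames, pose):
        # frames: (B, T, 3, H, W); pose: (B, T, pose_dim)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        fused = torch.cat([feats, pose], dim=-1)  # cue fusion per frame
        out, _ = self.lstm(fused)
        # Log-probabilities per frame, as expected by nn.CTCLoss
        # (after permuting to (T, B, C)).
        return self.fc(out).log_softmax(-1)
```

For training, the output would be permuted to `(T, B, C)` and passed to `nn.CTCLoss` together with the gloss label sequences, so that no frame-level alignment is required.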
Pages: 2105-2122
Page Count: 18
Related Papers (50 total)
  • [21] Automatic Facial Expression Recognition in an Image Sequence of Non-manual Indian Sign Language Using Support Vector Machine
    Saraswat, Mukesh
    Arya, K. V.
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON SOFT COMPUTING FOR PROBLEM SOLVING (SOCPROS 2011), VOL 2, 2012, 131 : 267 - 275
  • [22] Deep Learning-Based Sign Language Recognition System for Cognitive Development
    Jebali, Maher
    Dakhli, Abdesselem
    Bakari, Wided
    COGNITIVE COMPUTATION, 2023, 15 (06) : 2189 - 2201
  • [23] Deep learning-based sign language recognition system for static signs
    Wadhawan, Ankita
    Kumar, Parteek
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (12): 7957 - 7968
  • [26] RELATIVE CLAUSES IN BRAZILIAN SIGN LANGUAGE: NON-MANUAL MARKERS AS A COMBINING STRATEGY
    Ludwig, Carlos Roberto
    HUMANIDADES & INOVACAO, 2023, 10 (09): 130 - 140
  • [27] Sequential Belief-Based Fusion of Manual and Non-manual Information for Recognizing Isolated Signs
    Aran, Oya
    Burger, Thomas
    Caplier, Alice
    Akarun, Lale
    GESTURE-BASED HUMAN-COMPUTER INTERACTION AND SIMULATION, 2009, 5085 : 134 - +
  • [28] A belief-based sequential fusion approach for fusing manual signs and non-manual signals
    Aran, Oya
    Burger, Thomas
    Caplier, Alice
    Akarun, Lale
    PATTERN RECOGNITION, 2009, 42 (05) : 812 - 822
  • [29] From Seed to System: The Emergence of Non-Manual Markers for Wh-Questions in Nicaraguan Sign Language
    Kocab, Annemarie
    Senghas, Ann
    Pyers, Jennie
    LANGUAGES, 2022, 7 (02)
  • [30] A Study on How to Express Non-manual Markers in the Electronic Dictionary of Japanese Sign Language
    Terauchi, Mina
    Nagashima, Yuji
    HUMAN-COMPUTER INTERACTION - INTERACT 2015, PT IV, 2015, 9299 : 502 - 505