KUNet-An Optimized AI Based Bengali Sign Language Translator for Hearing Impaired and Non Verbal People

Cited: 0
Authors
Jim, Abdullah Al Jaid [1 ,2 ]
Rafi, Ibrahim [2 ]
Tiang, Jun Jiat [3 ]
Biswas, Uzzal [2 ]
Abdullah-Al Nahid [2 ]
Affiliations
[1] Trust Univ, Dept Elect & Elect Engn, Barishal 8200, Bangladesh
[2] Khulna Univ, Elect & Commun Engn Discipline, Khulna 9208, Bangladesh
[3] Multimedia Univ, Fac Engn, Ctr Wireless Technol CWT, Cyberjaya 63100, Malaysia
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Sign language; Accuracy; Computational modeling; Support vector machines; Genetic algorithms; Image recognition; Feature extraction; Databases; Assistive technologies; Zero shot learning; Deep learning; Machine learning; Bengali sign language (BdSL); classification; computer vision; sign language recognition
DOI
10.1109/ACCESS.2024.3474011
Chinese Library Classification (CLC): TP [Automation Technology; Computer Technology]
Subject Classification Code: 0812
Abstract
Sign language is the most prevalent form of communication among people with speech and hearing disabilities. The most widely used sign languages rely on static or dynamic hand gestures. Among them, Bengali Sign Language (BdSL) is one of the most difficult sign languages to learn and comprehend because of its large alphabet, extensive vocabulary, and variation in expression techniques. Existing options are limited to learning BdSL or hiring an interpreter, yet interpreter support is scarce and expensive unless offered voluntarily. People with speech or hearing disabilities might find it more comfortable to converse with the general public through machine translation of sign language. Deep learning, a subset of machine learning that loosely mimics the human brain, appears to be a viable route, and computer vision in particular may hold the key for the hearing-impaired and non-verbal community. Therefore, we propose KUNet ("Khulna University Network"), a novel CNN-based classification framework optimized by a genetic algorithm (GA), to classify BdSL. The model and the accompanying dataset contribute toward a BdSL machine translator. GA-optimized KUNet achieved an accuracy of 99.11% on KU-BdSL. After training the model on KU-BdSL, we compared it with state-of-the-art studies and interpreted its black-box behavior using explainable AI (XAI). We also found that KUNet outperformed several well-known models trained on the KU-BdSL dataset. This study will benefit the hearing-impaired and non-verbal community by allowing them to communicate with less effort and hardship.
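The abstract describes a CNN classifier whose configuration is tuned with a genetic algorithm. As a minimal sketch of that general idea only, assuming a small hyperparameter genome (filter count, kernel size, dropout rate), illustrative class and image sizes, and a placeholder fitness score, none of which are taken from the paper, the following Python snippet (PyTorch chosen purely for illustration) shows how such a GA search loop can be organized:

```python
# Illustrative sketch only: a genetic-algorithm search over CNN hyperparameters,
# in the spirit of the GA-optimized KUNet described in the abstract. The genome
# fields, search ranges, and fitness proxy are assumptions, not the authors' design.
import random
import torch
import torch.nn as nn

NUM_CLASSES = 30          # assumption: number of BdSL sign classes in KU-BdSL
IMAGE_SIZE = 64           # assumption: input resolution of the sign images

def build_cnn(genome):
    """Build a small CNN whose capacity is controlled by the genome."""
    filters, kernel, dropout = genome
    return nn.Sequential(
        nn.Conv2d(3, filters, kernel, padding=kernel // 2),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(filters, filters * 2, kernel, padding=kernel // 2),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Dropout(dropout),
        nn.Linear(filters * 2 * (IMAGE_SIZE // 4) ** 2, NUM_CLASSES),
    )

def fitness(genome):
    """Placeholder fitness: in a real pipeline this would be validation accuracy
    after training the candidate CNN on KU-BdSL. Here we only verify that the
    model builds and runs a forward pass, then return a random score so the GA
    loop is demonstrable without the dataset."""
    model = build_cnn(genome)
    with torch.no_grad():
        model(torch.randn(1, 3, IMAGE_SIZE, IMAGE_SIZE))
    return random.random()

def mutate(genome):
    """Randomly perturb each gene with a small probability."""
    filters, kernel, dropout = genome
    if random.random() < 0.3:
        filters = random.choice([16, 32, 64])
    if random.random() < 0.3:
        kernel = random.choice([3, 5])
    if random.random() < 0.3:
        dropout = random.choice([0.2, 0.3, 0.5])
    return (filters, kernel, dropout)

def crossover(a, b):
    """Uniform crossover: pick each gene from one of the two parents."""
    return tuple(random.choice(pair) for pair in zip(a, b))

# Simple generational GA: evaluate, keep the fittest half, refill by crossover + mutation.
population = [(random.choice([16, 32, 64]),
               random.choice([3, 5]),
               random.choice([0.2, 0.3, 0.5])) for _ in range(8)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: len(scored) // 2]
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in parents]
    population = parents + children
print("best genome:", max(population, key=fitness))
```

In an actual training setup, the fitness function would train each candidate network on KU-BdSL and return its validation accuracy; the random score above only keeps the example self-contained.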
Pages: 155052 - 155063
Number of pages: 12