Developing Convolutional Neural Networks for Recognition of American Sign Language
DOI: https://doi.org/10.52113/2/12.01.2025/1-12

Keywords: hand gesture, sign recognition, ASL letters

Abstract
People who are deaf or mute communicate through a system of signs known as sign language rather than through speech. However, interacting with this community is very difficult for those who do not know sign language. To facilitate communication between deaf people and the general public, applications that can recognize sign language gestures must be developed. Given its importance, techniques already exist for recognizing American Sign Language (ASL), one of the most widely used sign languages in the world, with varying degrees of accuracy. This study aims to improve on the accuracy of existing ASL recognition techniques by proposing a deep-learning model. A convolutional neural network (CNN) was developed and trained to recognize the hand gestures that represent the ASL letters (A–Z). The proposed model performs exceptionally well, attaining near-perfect accuracy on the dataset, with a test accuracy of 99.97%. The results show that the model can identify and distinguish between distinct ASL hand signs, making it a promising tool for practical assistive-technology applications for the hearing impaired.
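The abstract describes a CNN trained to classify hand-gesture images into the 26 ASL letter classes, but does not publish the exact architecture. The following is a minimal illustrative sketch in PyTorch, assuming 28×28 grayscale inputs; the layer sizes and class name `ASLLetterCNN` are hypothetical, not the paper's model.

```python
# Hypothetical sketch of a small CNN for 26-class ASL letter
# classification. The paper does not disclose its exact layers;
# this architecture is an illustrative assumption only.
import torch
import torch.nn as nn

class ASLLetterCNN(nn.Module):
    def __init__(self, num_classes: int = 26):
        super().__init__()
        # Two conv blocks, each halving spatial resolution via max pooling
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 1-channel (grayscale) input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128),  # assumes 28x28 input images
            nn.ReLU(),
            nn.Linear(128, num_classes),  # one logit per ASL letter A-Z
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ASLLetterCNN()
logits = model(torch.randn(4, 1, 28, 28))  # batch of 4 dummy images
print(logits.shape)  # torch.Size([4, 26])
```

In practice such a model would be trained with cross-entropy loss on labeled ASL letter images; the near-perfect test accuracy reported above would depend on the dataset and training setup, which the abstract does not detail.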
Downloads
Published
Versions
- 2025-12-18 (2)
- 2025-12-18 (1)
License
Copyright (c) 2025 Salwa Baawi

This work is licensed under a Creative Commons Attribution 4.0 International License.
