Developing Convolutional Neural Networks for Recognition of American Sign Language

Authors

  • Salwa Baawi, University of Al-Qadisiyah, College of Computer Science and Information Technology
  • Farah Jawad Al-Ghanim
  • Nisreen Ryadh Hamza

DOI:

https://doi.org/10.52113/2/12.01.2025/1-12

Keywords:

Hand gesture, Sign Recognition, ASL letters

Abstract

Instead of speech, deaf and mute people communicate with one another using a set of signs known as sign language. However, interacting with this community is extremely difficult for people who do not know sign language. To facilitate communication between the deaf and the general public, an application that can identify sign language gestures must be developed. Given its significance, techniques already exist for recognizing American Sign Language (ASL), one of the most widely used sign languages in the world, with varying degrees of accuracy. This study aims to improve the accuracy of current ASL recognition techniques by putting forward a deep-learning model. A convolutional neural network (CNN) was developed and trained to correctly recognize hand gestures representing the ASL letters (A-Z). The proposed model performs exceptionally well, attaining nearly perfect accuracy on the dataset, with a test accuracy of 99.97%. The results show that the model can identify and distinguish between distinct ASL hand signs, making it a promising tool for practical applications in assistive technology for the hearing impaired.
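The abstract does not detail the network's layers, so as a minimal illustration of the convolution operation at the core of any CNN, the sketch below applies a single filter to a grayscale image in NumPy. The 28x28 input size and the 3x3 vertical-edge kernel are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation), the core CNN operation:
    slide the kernel over the image and take an elementwise sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical 28x28 grayscale hand-sign image and a 3x3 vertical-edge kernel.
img = np.arange(28 * 28, dtype=float).reshape(28, 28)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
feature_map = conv2d(img, kernel)
print(feature_map.shape)  # (26, 26) -- each valid kernel position yields one output
```

A full classifier for the 26 ASL letters would stack several such convolutional layers (with learned kernels), pooling, and a final 26-way softmax layer; this sketch only shows the sliding-window computation itself.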

Published

2025-12-18

How to Cite

Developing Convolutional Neural Networks for Recognition of American Sign Language. (2025). Muthanna Journal of Pure Science, 12(1). https://doi.org/10.52113/2/12.01.2025/1-12