Dynamic sign language translating system using deep learning and natural language processing
Abstract
People around the world with speech and hearing impairments use a medium of communication known as sign language.
Although sign language is widely used, people who do not know it face a barrier when communicating with those who can communicate only through it. This gap can be bridged by using modern gesture-recognition technology to design intuitive systems with deep learning. The aim of this paper is to recognise American Sign Language gestures dynamically and to build an intuitive system that translates sign language into text and into speech in various languages. The system uses a convolutional neural network together with natural language processing, language translation, and text-to-speech algorithms. It recognizes hand gestures dynamically and predicts the corresponding letters to form the desired sentence accurately.
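The letter-by-letter pipeline the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the label set, the control gestures (SPACE, DEL), and the stub classifier standing in for the CNN forward pass are all assumptions.

```python
# Hypothetical sketch of a dynamic letter-by-letter recognition pipeline.
# A real system would run a trained CNN on camera frames; here a stub
# simply picks the highest-scoring label from a per-frame score vector.

# Gesture labels the classifier could emit: A-Z plus two assumed
# control gestures for inserting a space and deleting the last letter.
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["SPACE", "DEL"]

def predict_letter(frame_scores):
    """Stand-in for a CNN forward pass: return the highest-scoring label."""
    best = max(range(len(frame_scores)), key=frame_scores.__getitem__)
    return LABELS[best]

def assemble_sentence(per_frame_scores):
    """Fold a stream of per-frame predictions into text, honouring the
    assumed SPACE and DEL control gestures."""
    chars = []
    for scores in per_frame_scores:
        label = predict_letter(scores)
        if label == "SPACE":
            chars.append(" ")
        elif label == "DEL":
            if chars:
                chars.pop()
        else:
            chars.append(label)
    return "".join(chars)
```

The assembled text would then be passed to the translation and text-to-speech stages, which are outside the scope of this sketch.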
Article Details
Licensing
TURCOMAT publishes articles under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This licensing allows for any use of the work, provided the original author(s) and source are credited, thereby facilitating the free exchange and use of research for the advancement of knowledge.
Detailed Licensing Terms
Attribution (BY): Users must give appropriate credit, provide a link to the license, and indicate if changes were made. Users may do so in any reasonable manner, but not in any way that suggests the licensor endorses them or their use.
No Additional Restrictions: Users may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.