Sign Language Recognition
Abstract
Sign language is mainly used by deaf, hard-of-hearing, and mute people to exchange
information within their own community and with other people. It is a language in which
people communicate through hand gestures because they cannot speak or hear. Sign Language
Recognition (SLR) covers the full pipeline from acquiring hand-gesture images to generating
the corresponding text or speech. Hand gestures in sign language can be classified as static
or dynamic. Although static hand gesture recognition is simpler than dynamic hand gesture
recognition, both kinds of recognition are important to the community. Deep learning and
computer vision can be used to recognize hand gestures by building deep neural network
architectures (convolutional neural network architectures), where the model learns to
recognize hand-gesture images over successive epochs. Once the model successfully recognizes
a gesture, the corresponding English text is generated, and that text can then be converted
to speech. Such a model makes communication easier for deaf, hard-of-hearing, and mute
people. In this paper, we discuss how sign language recognition is done using deep learning.
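As a rough sketch of the CNN-based recognition pipeline the abstract describes, the forward pass of a minimal convolutional classifier can be written in plain NumPy. This is an illustration only: the gesture classes, image size, and weights are hypothetical stand-ins (random values rather than trained parameters), and a real system would train the filters and dense weights on labeled gesture images.

```python
import numpy as np

# Hypothetical static-gesture classes (illustrative only, not from the paper)
CLASSES = ["A", "B", "C"]

def conv2d(image, kernel):
    """Valid 2D convolution of a single-channel image: the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling to downsample the feature map."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    """Convert logits to class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(image, kernel, weights, bias):
    """One conv layer -> ReLU -> max-pool -> flatten -> dense -> softmax."""
    features = max_pool(relu(conv2d(image, kernel)))
    logits = features.ravel() @ weights + bias
    return softmax(logits)

rng = np.random.default_rng(0)
image = rng.random((8, 8))              # stand-in for a grayscale gesture image
kernel = rng.standard_normal((3, 3))    # one filter (random here, learned in practice)
feat_dim = ((8 - 3 + 1) // 2) ** 2      # 6x6 conv output pooled to 3x3 -> 9 features
weights = rng.standard_normal((feat_dim, len(CLASSES)))
bias = np.zeros(len(CLASSES))

probs = forward(image, kernel, weights, bias)
print("predicted gesture:", CLASSES[int(np.argmax(probs))])
```

In a full system, the predicted class label would then be mapped to English text, which a text-to-speech component converts to audio, as outlined in the abstract.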
Article Details
Licensing
TURCOMAT publishes articles under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This licensing allows for any use of the work, provided the original author(s) and source are credited, thereby facilitating the free exchange and use of research for the advancement of knowledge.
Detailed Licensing Terms
Attribution (BY): Users must give appropriate credit, provide a link to the license, and indicate if changes were made. Users may do so in any reasonable manner, but not in any way that suggests the licensor endorses them or their use.
No Additional Restrictions: Users may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.