Priyanjali Gupta Builds AI Model For Real-Time Sign Language Translation

Priyanjali Gupta, a computer science student at the Vellore Institute of Technology, Tamil Nadu, has emerged as a trailblazer in the field of inclusive technology. Inspired by her mother’s challenge to utilize her engineering education, Gupta embarked on a journey that led to the development of an innovative AI model capable of instantaneously translating American Sign Language (ASL) into English.

The spark for Gupta’s groundbreaking idea ignited during her interactions with voice-controlled virtual assistants like Alexa. It was her mother’s teasing remark that prompted her to contemplate the intersection of technology and inclusivity. The idea of leveraging AI to bridge communication gaps struck a chord with Gupta, setting the wheels in motion for her ambitious project.

In February 2022, just a year after her mother’s challenge, Gupta successfully developed an AI model using the TensorFlow object detection API. The model builds on a pre-trained ssd_mobilenet network, adapted to ASL signs through transfer learning. The project quickly gained widespread attention when Gupta shared it on LinkedIn, drawing over 58,000 reactions and more than 1,000 appreciative comments.
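Transfer learning of this kind reuses a network already trained on a large dataset and trains only a small new classifier on top of its frozen features. The sketch below is a minimal, hypothetical numpy illustration of that idea (a random projection stands in for the pre-trained backbone; this is not Gupta’s actual TensorFlow pipeline):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# "Frozen" backbone: a fixed random projection plus ReLU, standing in
# for a pre-trained feature extractor (illustrative only).
W_frozen = rng.normal(size=(64, 16))

def extract_features(images):
    # images: (n_samples, 64) flattened inputs
    return np.maximum(images @ W_frozen, 0.0)

# Toy two-class data, e.g. "Hello" vs "Thank You" frames
X = rng.normal(size=(200, 64))
true_w = rng.normal(size=16)
y = (extract_features(X) @ true_w > 0).astype(float)

# Trainable head: logistic regression on the frozen features.
# Only w and b are updated; the backbone never changes.
feats = extract_features(X)
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = sigmoid(feats @ w + b)
    w -= 0.5 * (feats.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((sigmoid(feats @ w + b) > 0.5) == y)
```

Because only the small head is trained, this approach needs far less data and compute than training a detector from scratch, which is what makes it practical for a hand-built dataset.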

Gupta’s GitHub post sheds light on the details of her project: she created the dataset manually, capturing frames with a webcam. The dataset covers six essential ASL signs: Hello, I Love You, Thank You, Please, Yes, and No. This hands-on approach to data collection reflects her focus on accuracy and effectiveness in the model.
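A common way to organize such a webcam-collected dataset is one folder per sign label, with captured frames saved inside each. The sketch below builds that layout with placeholder files (a hypothetical structure for illustration, not Gupta’s published code; in a real pipeline each file would be an OpenCV webcam capture plus a bounding-box annotation):

```python
import tempfile
from pathlib import Path

SIGNS = ["Hello", "I Love You", "Thank You", "Please", "Yes", "No"]
FRAMES_PER_SIGN = 5  # small number for illustration

def build_dataset_dirs(root: Path) -> dict:
    """Create one sub-folder per sign and write placeholder frame files."""
    saved = {}
    for sign in SIGNS:
        sign_dir = root / sign.replace(" ", "_").lower()
        sign_dir.mkdir(parents=True, exist_ok=True)
        paths = []
        for i in range(FRAMES_PER_SIGN):
            frame_path = sign_dir / f"frame_{i:03d}.jpg"
            frame_path.write_bytes(b"")  # placeholder for real image data
            paths.append(frame_path)
        saved[sign] = paths
    return saved

root = Path(tempfile.mkdtemp())
dataset = build_dataset_dirs(root)
```

Keeping labels encoded in the directory structure makes it straightforward to generate the label maps and annotation records that object-detection training pipelines expect.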

Acknowledging her sources of inspiration, Gupta credits a video by data scientist Nicholas Renotte on Real-Time Sign Language Detection for guiding her in the right direction. However, she doesn’t rest on her laurels and is actively exploring ways to enhance her model’s capabilities. Currently, Gupta is researching the application of Long Short-Term Memory (LSTM) networks to train the model on multiple frames for video detection.

The challenges inherent in developing a deep learning model for sign detection are not lost on Gupta. As she humbly admits, “Making a deep neural network solely for sign detection is rather complex.” Yet, she embraces the learning process, expressing confidence that the open-source community, with its wealth of experience, will contribute to finding solutions.

Despite ASL being often cited as the third most commonly used language in the United States, translation applications and technologies for it have not kept pace with demand. Gupta’s work is a significant step toward filling this void, all the more relevant given the pandemic-driven shift to video calls, which has underscored the importance of accessible sign language communication.

Gupta’s dedication aligns with a broader trend among researchers and developers striving to address the challenges faced by the sign language community. Google AI’s real-time sign language detection model, which achieves 91% accuracy in identifying signers, exemplifies the ongoing efforts in this domain. However, Gupta notes that sign languages and other communication modes used by differently-abled people still need to be standardized before the communication gap can truly be bridged.
