Sign Language Detection Application to Facilitate Communication for Speech and Hearing Impaired Individuals Based on Computer Vision Technology Using Inception ResNetV2. HARDISC, an Android app, uses AI and Computer Vision (Transfer Learning with Inception ResNetV2 and VGG16) to detect sign language letters (A-Z) in real time, facilitating communication for the speech and hearing impaired with up to 99.40% accuracy.
Communication is a fundamental human need, yet individuals with speech and hearing impairments face barriers because sign language is poorly understood by the general public. This study applies Artificial Intelligence and Computer Vision to improve communication accessibility by detecting hand gestures and converting them into text. The lack of real-time sign language translation remains an obstacle for individuals with disabilities, and existing systems often struggle with accuracy and device compatibility. This research develops and evaluates HARDISC, an Android-based application that recognizes the letters A-Z by detecting hand movements through the device camera, with the goal of providing an effective and inclusive communication tool for the speech and hearing impaired. HARDISC uses Transfer Learning with Inception ResNetV2 and VGG16 for gesture classification; image processing enables the camera to detect hand movements and translate them into text. The models were evaluated on accuracy, loss, and device compatibility. Inception ResNetV2 achieved 98.98% accuracy with a loss of 0.0417, while VGG16 reached 99.40% accuracy with a loss of 0.0146, demonstrating high performance. HARDISC is compatible with Android 4.4 (KitKat) through Android 12, ensuring broad accessibility. The application provides an innovative, real-time solution that bridges the communication gap between individuals with speech and hearing impairments and the general public.
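The article itself does not include code. As a minimal sketch of the transfer-learning setup the abstract describes, the following Keras snippet builds a 26-class (A-Z) classifier on a frozen Inception ResNetV2 backbone; the input size, head layers, and optimizer are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch of the transfer-learning setup described in the abstract.
# Assumptions (not from the paper): 26 classes (A-Z), 299x299 RGB input,
# a small dense head, and Adam with categorical cross-entropy.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

NUM_CLASSES = 26  # letters A-Z

# Load ImageNet weights and freeze the backbone for feature extraction.
backbone = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=(299, 299, 3), pooling="avg")
backbone.trainable = False

model = models.Sequential([
    backbone,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])  # accuracy and loss are the reported metrics

# Swapping the backbone for VGG16 (224x224 input) follows the same pattern:
# from tensorflow.keras.applications import VGG16
```

The same two-step recipe (frozen pretrained backbone plus a small trainable head) applies to both architectures compared in the paper, which is what makes the accuracy/loss comparison between Inception ResNetV2 and VGG16 meaningful.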
This paper presents a highly relevant and timely contribution to assistive technology, focusing on bridging communication gaps for individuals with speech and hearing impairments. The authors introduce HARDISC, an Android-based application for real-time sign language detection and translation into text using computer vision and artificial intelligence. By addressing the critical need for accessible communication tools, particularly given the limited public understanding of sign language, the research has significant potential to enhance social inclusion for a vulnerable population and is a valuable addition to current work in human-computer interaction and accessibility.

Methodologically, the study takes a robust approach, leveraging Transfer Learning with two established deep learning architectures, Inception ResNetV2 and VGG16, to classify hand gestures for the sign language alphabet (A-Z) captured by the device camera. HARDISC's performance was evaluated on accuracy, loss, and device compatibility. The results are notably strong: VGG16 achieved a superior 99.40% accuracy with a low 0.0146 loss, while Inception ResNetV2 also performed well with 98.98% accuracy and a 0.0417 loss. Crucially, the application's compatibility across Android versions, from Android 4.4 (KitKat) through Android 12, underscores its practical applicability and potential for wide adoption.

Overall, HARDISC stands out for its demonstrated high accuracy, real-time processing, and broad device compatibility, directly addressing the stated limitations of existing systems. The successful application of state-of-the-art deep learning models to gesture recognition provides a strong foundation for future advancements. While the current scope is limited to individual letters (A-Z), an excellent starting point, the work lays the groundwork for expanding to full words, phrases, and more nuanced contextual understanding of sign language. This research represents a significant step toward innovative, inclusive communication solutions, and its findings make a compelling case for further exploration and deployment of AI-powered assistive technologies to foster more equitable and connected societies.
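On the deployment side, the paper states only that camera frames are classified in real time on Android. Below is a hedged sketch of how such an inference loop might look with a TensorFlow Lite export of the trained model; the model file name, preprocessing, and label order are hypothetical, and the actual HARDISC app would use the TensorFlow Lite Android API rather than Python.

```python
# Hypothetical inference loop for a TFLite export of the trained model.
# On Android the equivalent would use the TensorFlow Lite Java/Kotlin API;
# the file name and preprocessing here are assumptions for illustration.
import string
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="hardisc_sign.tflite")  # assumed name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

LABELS = list(string.ascii_uppercase)  # A-Z, matching the 26 output classes

def classify_frame(frame_rgb: np.ndarray) -> str:
    """Classify one camera frame (H x W x 3, uint8) as a letter."""
    h, w = inp["shape"][1], inp["shape"][2]
    x = tf.image.resize(frame_rgb, (h, w)).numpy() / 255.0  # assumed scaling
    interpreter.set_tensor(inp["index"], x[np.newaxis].astype(np.float32))
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(probs))]
```

Running the converted model through an interpreter rather than full TensorFlow is what would allow an app like HARDISC to stay lightweight enough for older devices in its stated compatibility range.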