Machine Learning-Based Audio Interface Model for Sign Language Recognition
Akash Rai1, Sujata Kadu2, Satish Salunkhe3

1Akash Rai, Department of Information Technology, Terna Engineering College, Nerul, Navi Mumbai (Maharashtra), India.
2Sujata Kadu, Department of Information Technology, Terna Engineering College, Nerul, Navi Mumbai (Maharashtra), India.
3Satish Salunkhe, Department of Computer Engineering, Terna Engineering College, Nerul, Navi Mumbai (Maharashtra), India.
Manuscript received on 26 November 2022 | Revised Manuscript received on 11 December 2022 | Manuscript Accepted on 15 December 2022 | Manuscript published on 30 December 2022 | PP: 38-42 | Volume-12 Issue-1, December 2022 | Retrieval Number: 100.1/ijitee.A93741212122 | DOI: 10.35940/ijitee.A9374.1212122
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license.

Abstract: Because most offices and educational institutions now operate from home, work-from-home and study-from-home cultures have made it difficult to interact with people who are deaf or hard of hearing. These individuals communicate within their community using sign language, which is not widely understood by others. As a result, they often miss the opportunity to make their point because they are ignored or passed over without receiving the necessary attention. A standalone real-time translator that can process images and interpret signs at the speed of streaming video is therefore critical. We use TensorFlow Object Detection and Python to bridge this gap by building an end-to-end custom object detection model that not only translates sign language in real time but also speaks the translation aloud to others.
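The pipeline summarized in the abstract (detect a sign in each streaming frame, map it to a word, then speak it aloud) can be outlined as below. This is a minimal illustrative sketch, not the authors' code: `detect_signs`, `translate`, and `speak` are hypothetical stand-ins for a TensorFlow Object Detection inference call and a text-to-speech engine such as pyttsx3, and the label set is an assumed example.

```python
# Minimal sketch of the detect-then-speak loop; the heavy components
# (TensorFlow detector, text-to-speech) are stubbed out as placeholders.

LABELS = {1: "hello", 2: "thanks", 3: "yes", 4: "no"}  # assumed sign classes

def detect_signs(frame):
    """Placeholder for a TensorFlow Object Detection inference call.
    A real implementation would run the model on the frame and return
    (class_id, score, bounding_box) tuples."""
    return [(1, 0.92, (10, 10, 50, 50)), (2, 0.40, (60, 60, 90, 90))]

def translate(detections, threshold=0.5):
    """Keep confident detections and map class IDs to words."""
    return [LABELS[cid] for cid, score, _ in detections
            if score >= threshold and cid in LABELS]

def speak(words):
    """Placeholder for a text-to-speech call (e.g. a pyttsx3 engine)."""
    print("Speaking:", " ".join(words))

frame = None  # stands in for one webcam frame from the video stream
speak(translate(detect_signs(frame)))
```

In a real system the loop would run once per captured frame, so the detection threshold is what keeps low-confidence guesses from being spoken.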
Keywords: Automated Sign Language Recognition, Object Detection, Sign Language Detection
Scope of the Article: Natural Language Processing