Two Way Communication for the Differently Abled
Rohini Jadhav1, Tanmay Chordia2, Yagyesh Shrivastava3, Srijan Kumar Singh4, Umesh Thorat5

1Rohini B. Jadhav*, Department of Computer Engineering, BV(DU)COE, Pune, India.
2Tanmay Chordia, Department of Computer Engineering, BV(DU)COE, Pune, India.
3Yagyesh Shrivastava, Department of Computer Engineering, BV(DU)COE, Pune, India.
4Srijan Kumar Singh, Department of Computer Engineering, BV(DU)COE, Pune, India.
5Umesh G. Thorat, Tata Consultancy Services, Pune, India.
Manuscript received on March 15, 2020. | Revised Manuscript received on March 24, 2020. | Manuscript published on April 10, 2020. | PP: 2032-2035 | Volume-9 Issue-6, April 2020. | Retrieval Number: F3357049620/2020©BEIESP | DOI: 10.35940/ijitee.F3357.049620
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license.

Abstract: Communication is the primary channel through which people interact, yet every day we encounter people with hearing and speech impairments who face difficulty interacting with others. Due to birth defects, injuries, and oral disorders, the number of deaf and mute individuals has increased dramatically in recent years. As they are unable to communicate verbally, they must rely on some form of visual communication. Previously developed techniques were all sensor-based; they did not provide a general solution and were not economical. One of the main paradigms we focus on is bridging the medium of sign language with the standard English language, thereby providing seamless communication between the two communities. This project is developed to allow two-way communication between those who know sign language (the deaf and mute) and those who do not. Our system uses a camera to capture images of different gestures, and image-processing techniques are used to recognise the gestures and give audio and text as output. In the other direction, the system also processes speech to produce sign-language gestures as a reply, completing the two-way communication.
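The pipeline described above begins with isolating the hand from the camera frame, for which the keywords name an HSV (Hue Saturation Value) model. A minimal sketch of HSV-based skin segmentation is shown below, using only Python's standard colorsys module; the threshold values are illustrative assumptions, not the tuned parameters of this project, and real systems would adjust them for lighting and skin tone before passing the mask to a CNN classifier.

```python
import colorsys

# Assumed HSV thresholds for candidate skin pixels (illustrative only;
# hue is expressed in colorsys's 0-1 range, so 50 degrees -> 50/360).
H_MAX = 50 / 360.0          # upper hue bound (~50 degrees, reddish tones)
S_MIN, S_MAX = 0.15, 0.70   # saturation window
V_MIN = 0.30                # reject very dark pixels

def is_skin(r, g, b):
    """Classify one RGB pixel (0-255 channels) as skin/non-skin via HSV."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h <= H_MAX and S_MIN <= s <= S_MAX and v >= V_MIN

def segment(image):
    """Return a binary mask (1 = candidate hand pixel) for a 2-D RGB image,
    given as a list of rows of (r, g, b) tuples."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]
```

The resulting binary mask would then be cleaned up (e.g. with morphological operations) and the hand region cropped and fed to the gesture-recognition CNN.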
Keywords: Speech to Text, HSV (Hue Saturation Value) Model, CNN (Convolutional Neural Network), Image Conversion
Scope of the Article: Communication