Sign Language Interpreter using Kinect Motion Sensor and Machine Learning
Amol Potgantwar1, Pragati Bachchhav2

1Prof. (Dr.) Amol Potgantwar*, Head of Department, Computer Dept., SITRC, Nashik.
2Pragati Bachchhav, Student, Computer Dept., SITRC, Nashik.

Manuscript received on September 15, 2019. | Revised Manuscript received on 24 September, 2019. | Manuscript published on October 10, 2019. | PP: 3151-3156 | Volume-8 Issue-12, October 2019. | Retrieval Number: L26451081219/2019©BEIESP | DOI: 10.35940/ijitee.L2645.1081219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license.

Abstract: Sign language is the medium through which deaf and hard-of-hearing people usually communicate with one another. Hearing- and speech-impaired people often face difficulty interacting with society, because most other individuals cannot understand their sign language. To bridge this gap, the proposed system acts as a mediator between impaired and non-impaired people. The system uses a Kinect motion sensor to capture signs; the sensor records three-dimensional dynamic gestures. This study therefore proposes a method for extracting features from dynamic gestures of Indian Sign Language (ISL). While American Sign Language (ASL) is popular and widely used in research and development, ISL has only recently been standardized, and ISL recognition is consequently less explored. The proposed method extracts features from a sign and converts it into the intended textual form, integrating both local and global information of the sign. This integrated feature improves system performance, and the system serves as an aid to disabled people. Its applications include hospitals, government sectors, and multinational companies.
Keywords: Indian Sign Language, American Sign Language, Kinect Motion Sensor
Scope of the Article: Machine Learning
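The abstract's pipeline (capture a 3-D dynamic gesture, extract local and global features, integrate them into one descriptor, and map it to text) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the specific feature choices (mean per-axis displacement as the local feature, per-axis spatial extent as the global feature), the nearest-neighbour classifier, and all names and sample trajectories are assumptions for the sake of the example.

```python
import math

# Assumed input: a gesture as a sequence of 3-D hand positions (x, y, z)
# captured over time by the Kinect sensor. All feature choices below are
# illustrative, not the authors' actual design.

def local_features(traj):
    # Local information: mean frame-to-frame displacement along each axis,
    # capturing the fine motion pattern of the gesture.
    n = len(traj) - 1
    return [sum(b[i] - a[i] for a, b in zip(traj, traj[1:])) / n
            for i in range(3)]

def global_features(traj):
    # Global information: overall spatial extent of the gesture per axis.
    return [max(p[i] for p in traj) - min(p[i] for p in traj)
            for i in range(3)]

def feature_vector(traj):
    # Integrated descriptor combining local and global information,
    # as described in the abstract.
    return local_features(traj) + global_features(traj)

def classify(traj, templates):
    # Simplest possible recognizer: nearest-neighbour match of the
    # integrated feature vector against labelled template gestures.
    fv = feature_vector(traj)
    return min(templates, key=lambda label: math.dist(fv, templates[label]))

# Toy templates: a rightward sweep ("HELLO") and an upward sweep ("THANKS").
hello = [(0.0, 0.0, 0.5), (0.1, 0.0, 0.5), (0.2, 0.0, 0.5)]
thanks = [(0.0, 0.0, 0.5), (0.0, 0.1, 0.5), (0.0, 0.2, 0.5)]
templates = {"HELLO": feature_vector(hello), "THANKS": feature_vector(thanks)}

# A noisy rightward sweep should match "HELLO".
query = [(0.0, 0.0, 0.5), (0.12, 0.01, 0.5), (0.21, 0.0, 0.5)]
print(classify(query, templates))
```

A real system would replace the toy templates with a trained classifier over many recorded ISL gestures, but the structure (sensor data in, integrated local+global features, label out) mirrors the pipeline the abstract describes.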