A Research on Application of Human-Robot Interaction using Artificial Intelligence
L. Mary Gladence1, Vakula C. K2, Mercy Paul Selvan3, T. Y. S. Samhita4
1L. Mary Gladence, Assistant Professor, Department of IT, Sathyabama Institute of Science and Technology, Chennai (Tamil Nadu), India.
2Vakula C. K, UG Student, Department of IT, Sathyabama Institute of Science and Technology, Chennai (Tamil Nadu), India.
3Mercy Paul Selvan, Assistant Professor, Department of IT, Sathyabama Institute of Science and Technology, Chennai (Tamil Nadu), India.
4T. Y. S. Samhita, UG Student, Department of IT, Sathyabama Institute of Science and Technology, Chennai (Tamil Nadu), India.
Manuscript received on 20 August 2019 | Revised Manuscript received on 27 August 2019 | Manuscript Published on 31 August 2019 | PP: 784-787 | Volume-8 Issue-9S2 August 2019 | Retrieval Number: I11620789S219/19©BEIESP | DOI: 10.35940/ijitee.I1162.0789S219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: This work presents an online robot teaching system based on natural human–robot interaction. Natural human–computer interaction is a critical interface for achieving friendly collaboration between intelligent robots and humans. Most communication between people takes place through speech and gesture, and the connection between speech and gesture is natural and intuitive. Robot teaching by means of speech recognition is a new approach to teaching and playback that uses people's natural perception channels. This paper focuses on a teaching method based on natural human–computer interaction. The task is to teach the robot to write by providing three different inputs: voice commands, camera-based video input, or a MEMS hardware interface using Zigbee. Voice commands are recognized through an Android application. Gestures are recognized through the system camera, and the system classifies the captured images using the PCA algorithm. The MEMS sensor is a wired hardware circuit that provides a number of possible combinations for commanding the robot. Gestures are discriminated by applying a maximum entropy model, with features extracted using principal component analysis (PCA). The proposed interface could be extended to real industrial settings. Using gesture and speech, operators can control the robot without complex operations. The results show that the online robot teaching system can successfully teach robot manipulators.
Keywords: Gesture, Human-Robot Interaction, PCA, Natural Speech Understanding, Online Robot Teaching.
Scope of the Article: Artificial Intelligence
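The gesture-recognition pipeline summarized in the abstract (extract features with PCA, then discriminate between gesture classes in the reduced space) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic 16×16 "gesture images", the number of components, and the nearest-centroid classifier standing in for the discriminative model are all assumptions for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: flattened 16x16 gesture images (256-dim
# vectors), 20 samples for each of three gesture classes.
def make_class(center, n=20, dim=256):
    return center + 0.1 * rng.normal(size=(n, dim))

centers = rng.normal(size=(3, 256))          # one "true" template per class
X = np.vstack([make_class(c) for c in centers])
y = np.repeat([0, 1, 2], 20)

# PCA: center the training data, then project onto the top-k right
# singular vectors of the centered data matrix.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10
def project(imgs):
    return (imgs - mean) @ Vt[:k].T

Z = project(X)
centroids = np.array([Z[y == c].mean(axis=0) for c in range(3)])

# Classify a new image by its nearest class centroid in PCA space
# (a simple substitute for the discriminative model in the paper).
def classify(img):
    z = project(img[None, :])
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

# A noisy sample drawn near class 1's template maps back to class 1.
probe = centers[1] + 0.1 * rng.normal(size=256)
print(classify(probe))  # → 1
```

In a real system the synthetic vectors would be replaced by flattened, preprocessed camera frames, and the predicted class label would be mapped to a robot command (e.g. a stroke primitive for the writing task).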