Hadoop Cluster Performance with MapReduce and Pig Latin in Big Data
K.Umapavan Kumar1, S. V. N Srinivasu2, A. Ramaswamy Reddy3

1Dr. K. Umapavan Kumar, Associate Professor, Department of Computer Science & Engineering, Malla Reddy Institute of Technology, Maisammaguda, Dhulapally, Secunderabad (Telangana), India.
2Dr. S. V. N. Srinivasu, Professor, Department of Computer Science & Engineering, Narasaraopeta Engineering College, Narasaraopeta, Guntur (Andhra Pradesh), India.
3Dr. A. Ramaswamy Reddy, Principal, Malla Reddy Institute of Technology, Maisammaguda, Dhulapally, Secunderabad (Telangana), India.
Manuscript received on 01 May 2019 | Revised Manuscript received on 15 May 2019 | Manuscript published on 30 May 2019 | PP: 83-86 | Volume-8 Issue-7, May 2019 | Retrieval Number: F3699048619/19©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: The enormous growth of data in the current scenario raises two major issues: storage and processing. Big data sources such as social media, search engines, and other applications generate humongous volumes of data that must be handled separately from what existing storage and processing techniques allow. The key concerns are how the data is stored and how it is processed. This discussion addresses the internals of the Hadoop framework and the capability of the Hadoop cluster, along with the processing of MapReduce jobs and Pig Latin scripts. The main goal is to analyze the MapReduce and Pig environments with a method for estimating factors such as time and space requirements, together with the input and output splits, in a detailed manner. Existing work on Hadoop internals has not focused much on these aspects, and this discussion creates a road map for studying the architectural aspects, which can help researchers enhance existing architectures and, further, adopt new techniques such as analytics and machine learning libraries based on the requirements of industry. The purpose of this work is to pinpoint the usage of MapReduce and complementary tools such as Pig, and to summarize the various parameters so as to suggest a usage path for developers. The work also provides analytics to assess the suitability of an application for running in the context of MapReduce or Pig Latin.
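To make the comparison concrete, the MapReduce model the paper analyzes can be sketched as a minimal in-memory word-count simulation. This is a hypothetical illustration in Python, not the paper's Hadoop code: on a real cluster the map and reduce functions run as distributed tasks over HDFS input splits, with the shuffle performed by the framework between the two phases.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every line of an input split.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group all intermediate values by key, as Hadoop does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate (here, sum) the grouped counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

# Two "input splits" worth of sample text (illustrative data only).
lines = ["big data needs big clusters", "pig scripts compile to mapreduce"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])  # 2
```

The same computation in Pig Latin typically takes only a few statements (LOAD, TOKENIZE, GROUP, COUNT) that the Pig compiler translates into MapReduce jobs, which is the development-effort versus fine-grained-control trade-off the paper evaluates.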
Keywords: Big Data, MapReduce, Hadoop Pig, Unstructured Data.
Scope of the Article: Clustering.