A Distribution of Nodes in Big Data using Hadoop Open Source System
Nishant Mathur1, Mukul Jain2

1Mr. Nishant Mathur*, Assistant Professor, ICFAI Tech School, ICFAI University, Dehradun, India.
2Dr. Mukul Jain, Assistant Professor, ICFAI Tech School, ICFAI University, Dehradun, India.
Manuscript received on December 18, 2019. | Revised Manuscript received on December 23, 2019. | Manuscript published on January 10, 2020. | PP: 106-110 | Volume-9 Issue-3, January 2020. | Retrieval Number: C8459019320/2020©BEIESP | DOI: 10.35940/ijitee.C8459.019320
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Apache Hadoop is a free, open-source Java framework maintained by the Apache Software Foundation. It stores large amounts of data efficiently and at low cost. Hadoop has two main core components: HDFS (Hadoop Distributed File System) and MapReduce. HDFS is a distributed file system that is highly fault-tolerant, can be deployed on low-cost commodity hardware, and provides high-speed access to application data. The Hadoop architecture is cluster-based and consists of two kinds of nodes, Data-Nodes and a Name-Node, which exchange periodic "heartbeat" messages to coordinate data storage on the distributed file system; MapReduce runs internally to process the distributed data, and its output can be inspected on the localhost of the SSH server. Hadoop plays an important role wherever large quantities of data must be stored in a distributed file structure: it maintains large-volume storage, replicates data to provide security, and supports the recovery of big data for analysis and prediction.
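The map, shuffle, and reduce phases described above can be sketched in a short, self-contained simulation. This is an illustrative word-count example only, not code from the paper or from Hadoop itself; the function names and the two sample "splits" (standing in for HDFS blocks held on different Data-Nodes) are assumptions made for the sketch.

```python
from collections import defaultdict

def map_phase(document):
    # Map step: emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(mapped_pairs):
    # Shuffle step: group all emitted values by key, as Hadoop does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce step: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

# Two hypothetical input splits, standing in for HDFS blocks
# stored on different Data-Nodes of the cluster.
splits = [
    "big data needs distributed storage",
    "distributed storage scales big data",
]
mapped = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle(mapped))
print(counts)
```

In a real Hadoop job the map tasks run on the Data-Nodes that hold each block, the framework performs the shuffle over the network, and the Job-Tracker and Task-Trackers named in the keywords coordinate the work; the sketch collapses all of that onto one process to show the data flow.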
Keywords: Big Data, Hadoop, HDFS, Map Reduce, Data-Node, Name-Node, Task-Tracker, Job-Tracker.
Scope of the Article: Big Data Networking