Reliability of Fault Tolerance in Cloud using Machine Learning Algorithm
S. Harini Krishna1, G. Niveditha2, K. Gnana Mayuri3

1S. Harini Krishna, Assistant Professor, Department of Computer Science Engineering, Geethanjali College of Engineering and Technology, Hyderabad, India.
2G. Niveditha, Assistant Professor, Department of Computer Science Engineering, Geethanjali College of Engineering and Technology, Hyderabad, India.
3K. Gnana Mayuri, Assistant Professor, Department of Computer Science Engineering, Geethanjali College of Engineering and Technology, Hyderabad, India. 

Manuscript received on September 16, 2019. | Revised Manuscript received on September 24, 2019. | Manuscript published on October 10, 2019. | PP: 1150-1152 | Volume-8 Issue-12, October 2019. | Retrieval Number: L38891081219/2019©BEIESP | DOI: 10.35940/ijitee.L3889.1081219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: The basic fault tolerance issues in cloud computing are fault identification and recovery. To address these issues, many fault tolerance methods have been designed to reduce faults. However, because of the reliability requirements and the web-based, service-oriented nature of cloud computing, fault tolerance remains a major challenge. The present model aims not only to tolerate faults but also to reduce the possibility of future faults [4]. Fault tolerance is concerned with the correct and continuous operation of components in the presence of faults. In real-time cloud applications, processing is carried out on remote computing nodes, so the likelihood of errors is higher. Hence there is an immense need for fault tolerance to achieve reliability for real-time computing on cloud infrastructure. Fault tolerance can be explained through fault processing, which has two basic stages: (i) effective error processing, intended to bring the effective error back to an inactive state, i.e., the state before the error occurred; and (ii) latent error processing, intended to guarantee that the fault does not become effective again.
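As a rough illustration of the two-stage fault-processing model described in the abstract, the following minimal Python sketch (not the paper's implementation; the names execute_with_fault_tolerance, flaky_task, and the node identifiers are hypothetical) restores a task to a checkpointed pre-error state (stage i) and quarantines the faulty node so the same fault cannot become effective again (stage ii).

```python
import copy

def execute_with_fault_tolerance(task, state, nodes):
    """Run task(state, node), rolling back to a checkpoint on failure
    and quarantining nodes that produced a fault."""
    checkpoint = copy.deepcopy(state)   # snapshot of the inactive, pre-error state
    healthy = list(nodes)
    while healthy:
        node = healthy[0]
        try:
            # Each attempt starts from a fresh copy of the checkpoint.
            return task(copy.deepcopy(checkpoint), node)
        except RuntimeError as err:
            # Stage (i): effective error processing -- discard the corrupted
            # run; the checkpoint already holds the state before the error.
            print(f"fault on {node}: {err}; rolling back to checkpoint")
            # Stage (ii): latent error processing -- quarantine the faulty
            # node so the same fault cannot become effective again.
            healthy.remove(node)
    raise RuntimeError("all nodes exhausted; task could not complete")

def flaky_task(state, node):
    """Demo task that fails on one simulated node and succeeds elsewhere."""
    if node == "vm-1":
        raise RuntimeError("simulated node crash")
    return state + 1

print(execute_with_fault_tolerance(flaky_task, state=41, nodes=["vm-1", "vm-2"]))  # prints 42
```

In this sketch the checkpoint is a deep copy held in memory; a real cloud deployment would persist checkpoints to durable storage and combine quarantining with the kind of fault prediction the paper attributes to machine learning.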
Keywords: Algorithm Engineering, Machine Learning, Cloud, Guaranteeing
Scope of the Article: Machine Learning