Study of Web Crawling Policies
Anish Gupta1, K. B. Singh2, R. K. Singh3
1Anish Gupta, Pursuing Ph.D., B.R. Ambedkar University Bihar, Muzaffarpur (Bihar), India.
2Dr. K. B. Singh, Associate, Institute of Physics (IOP), London; Indian Science Congress Association (ISCA), Kolkata; Indian Society of Atomic & Molecular Physics (ISAMP), India.
3Dr. Ram Kishore Singh, Associate Professor & Head, Department of EC and IT, M. I. T. Muzaffarpur (Bihar), India.
Manuscript received on 10 May 2013 | Revised Manuscript received on 18 May 2013 | Manuscript Published on 30 May 2013 | PP: 65-67 | Volume-2 Issue-6, May 2013 | Retrieval Number: F0786052613/13©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: A web crawler is a software program that browses the WWW in an automated, orderly fashion; this process is known as web crawling. A web crawler stores copies of the pages it visits so that they can be indexed later, making subsequent processing faster. This paper discusses various web crawling techniques that speed up search. It studies the issues important for designing a high-performance crawling system; performance and outcomes are evaluated against the stated criteria in the summary.
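The crawling process the abstract describes (visiting pages, keeping copies, and avoiding repeat fetches) can be sketched as a simple breadth-first traversal. The snippet below is a minimal illustration only, not the paper's implementation: it replaces real HTTP fetching with a hypothetical in-memory link graph (`PAGES`) so the idea stays self-contained.

```python
from collections import deque

# Hypothetical in-memory "web": URL -> outgoing links.
# A real crawler would fetch each URL over HTTP and parse its links.
PAGES = {
    "http://a.example/": ["http://b.example/", "http://c.example/"],
    "http://b.example/": ["http://c.example/"],
    "http://c.example/": ["http://a.example/"],
}

def crawl(seed):
    """Breadth-first crawl from seed. Returns pages in fetch order,
    skipping URLs already seen (the crawler's 'visited' policy)."""
    frontier = deque([seed])   # URLs waiting to be fetched
    seen = {seed}              # URLs already queued or fetched
    copies = []                # stored copies, ready for indexing
    while frontier:
        url = frontier.popleft()
        copies.append(url)     # keep a copy of the visited page
        for link in PAGES.get(url, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return copies
```

The `seen` set is what keeps the crawl from looping forever on cyclic links (note `c.example` links back to `a.example` above); the queue discipline (FIFO here) is exactly where crawling policies such as OPIC-style importance ordering would plug in.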
Keywords: Web Crawler, WWW – World Wide Web, URL – Uniform Resource Locator, OPIC – On-Line Page Importance Computation, MIME – Multipurpose Internet Mail Extensions.
Scope of the Article: Semantic Web