Web Search Engines Operation

Search engines are the central factor in driving potential buyers to your website, so it is worth learning how these search engines actually operate and how they deliver information to the searcher who starts a search.

There are essentially two types of search engines. The first is powered by crawlers, or spiders.

Search engines employ spiders to index websites. When you submit your website pages to a search engine through its submission page, the search engine spider indexes your whole site. A spider is an automated program run by the search engine system. The spider visits a web site, reads the content on the actual pages, reads the site's meta tags, and follows the links that the site connects to. The spider then transfers all that data back to a central depository, where the data is indexed. It will also visit every link you have on your website and index those sites as well. Some spiders will only index a certain number of pages on your site, so don't build a site of 500 pages!
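The crawl-and-follow-links process described above can be sketched as a breadth-first traversal. This is a minimal illustration, not any engine's actual crawler: the toy `WEB` dictionary stands in for the live web, and a real spider would download each page over HTTP instead.

```python
from collections import deque

# A toy "web": each URL maps to its page text and outgoing links.
# (Hypothetical data; a real spider would fetch pages over HTTP.)
WEB = {
    "a.com":       ("home page about widgets", ["a.com/about", "b.com"]),
    "a.com/about": ("about our widget shop",   ["a.com"]),
    "b.com":       ("a partner site",          []),
}

def crawl(start_url, max_pages=500):
    """Breadth-first crawl: visit a page, store its content,
    then follow every link it connects to."""
    queue = deque([start_url])
    indexed = {}                      # url -> page text (the "depository")
    while queue and len(indexed) < max_pages:
        url = queue.popleft()
        if url in indexed or url not in WEB:
            continue
        text, links = WEB[url]        # a real spider would download here
        indexed[url] = text
        queue.extend(links)           # follow links to index those sites too
    return indexed

pages = crawl("a.com")
print(sorted(pages))                  # all three pages are reached
```

The `max_pages` cap mirrors the point above: a spider may stop after a certain number of pages, so the rest of a very large site never gets indexed.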

The spider will periodically revisit the sites to check for any information that has changed. How frequently this occurs is determined by the administrators of the search engine.

A spider's output is much like a book: it has a table of contents, the actual content, and the links and references for all the websites it finds in the course of its search, and it may index up to a million pages a day.
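The book analogy maps naturally onto an inverted index, the data structure search engines commonly use: like a book's index, it points from each term to the pages that contain it. A minimal sketch, using made-up page data:

```python
from collections import defaultdict

def build_index(pages):
    """Build an inverted index: word -> set of URLs containing it,
    much like a book's index points from terms to pages."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

# Hypothetical pages the spider brought back to the depository.
pages = {
    "a.com": "cheap widgets for sale",
    "b.com": "widgets and gadgets reviewed",
}
index = build_index(pages)
print(sorted(index["widgets"]))   # ['a.com', 'b.com']
```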

Examples of search engines: Yahoo, Google, Bing, and Cuil

When you ask a search engine to find information, it is in reality searching the index it has built, not actually searching the Web in real time. Different search engines deliver different rankings because not all search engines use the same algorithm.
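Answering from the index rather than the live web is what makes queries fast: a lookup is a set intersection over the pre-built index, with no page fetched at query time. A sketch under that assumption, with a tiny hand-made index:

```python
# A tiny pre-built index (word -> pages containing it), standing in
# for the depository the spider filled earlier. Data is hypothetical.
INDEX = {
    "cheap":   {"a.com"},
    "widgets": {"a.com", "b.com"},
    "gadgets": {"b.com"},
}

def search(index, query):
    """Answer a query from the index alone -- no live web access.
    Returns the pages that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())   # keep pages matching all words
    return results

print(sorted(search(INDEX, "cheap widgets")))   # ['a.com']
```

Ranking (which result comes first) is where engines diverge, since each applies its own scoring algorithm on top of a lookup like this.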

Some of the things a search engine algorithm looks for are the density and location of keywords on a web page. However, it can also detect artificial keyword stuffing, or keyword spam. Therefore, the algorithms also study the way pages link to other pages on the Web. By inspecting how pages link to each other, an engine can both figure out what a page is about and check whether the keywords of the linked pages are related to the keywords on the original page.
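Both signals above can be illustrated with a few lines of code. This is a deliberately crude sketch of the two ideas, not any engine's real scoring: keyword density is the fraction of a page's words that match a keyword (an abnormally high value suggests stuffing), and counting inbound links is the simplest form of link analysis.

```python
def keyword_density(text, keyword):
    """Fraction of the page's words that match the keyword.
    Very high values are a hint of keyword stuffing."""
    words = text.lower().split()
    return words.count(keyword.lower()) / len(words) if words else 0.0

def inbound_links(link_graph):
    """Count how many pages link to each URL -- a crude signal of
    which pages others 'vouch' for, in the spirit of link analysis."""
    counts = {}
    for src, targets in link_graph.items():
        for target in targets:
            counts[target] = counts.get(target, 0) + 1
    return counts

# Hypothetical data for illustration.
page = "buy widgets here best widgets cheap widgets widgets widgets"
print(round(keyword_density(page, "widgets"), 2))   # 0.56 -- likely stuffed

graph = {"a.com": ["c.com"], "b.com": ["c.com", "a.com"]}
print(inbound_links(graph))   # {'c.com': 2, 'a.com': 1}
```

Real link analysis goes much further (weighting a link by the linking page's own importance, as in PageRank), but the principle is the same: links from other pages tell the engine what a page is about and how much to trust it.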