Search Engine Marketing Audit


Analysis of the systems and processes of web
site development by PracticeWEB and the
impact on search engine visibility for its clients.


Leading Consultant:
Mike Grehan.




       netmarketing is the online trading brand of: Network marketing Communications Ltd.
                    Design Works, William Street, Gateshead, Tyne & Wear.
                          t: 0191 423 6200 - e: info@netmarketing.co.uk
Summary

Consultancy and training agenda

How search engines work

   •   The evolution of the search engine industry

   •   Natural (or organic) listings (free crawl and listings)

   •   Paid inclusion (paying to have your site crawled)

   •   Paid listings (search engine advertising)

Becoming crawler friendly

   •   Anatomy of a web page

Getting a rank at search engines

   •   The importance of links

The major players

   •   Google

   •   Yahoo!

   •   MSN

   •   Ask Jeeves

The "Dark Side" of search engine optimisation

   •   What "not to do" - avoid being penalised or banned from a search engine index

A brief look at the future of search on the web

   •   How search engines are likely to change

About Mike Grehan
   •   Some background information on the author of this report




Summary:

PracticeWEB recognised the need to ensure that their systems and processes for
designing and developing web sites on behalf of their clients were not creating any
technical barriers to being indexed by search engines.

Client Services Manager Helen Juckes, and senior members of the technical support
group spent two days reviewing and analysing current practices.

In general, PracticeWEB creates web pages which can be easily crawled and indexed by
search engine spiders. However, pages can be made more relevant to search engines by
optimising them around target keywords.

Now, with a much more intimate knowledge of how the major search engines really
work and how they crawl the web, pages can be created specifically around
keywords and phrases.

The major issue which PracticeWEB faces is a frequent lack of visibility for some
clients in the search engine rankings. However, as the team is now aware, beyond
creating "crawler friendly" pages for search engines to index, the visibility aspect,
i.e. turning up in the top ten results at the major search engines, is largely in the
hands of the clients themselves.

The PracticeWEB team was taken through a very thorough explanation of what it is
that causes one web page to rank higher in a search engine results page than
another. As this document (or working tool, as it is) will go on to explain, a high
ranking at a search engine is largely based on the number of inbound links that
point to a page. Links are about reputation as far as a search engine algorithm is
concerned.

This document is both a follow up report and a working tool for the team at
PracticeWEB as well as its clients. With this greater knowledge and better
understanding between client and supplier of what is required to develop a high
ranking in search engines, it will become a joint effort fuelled by truly realistic
expectations.

As links are so vitally important, I have included a set of worksheets for the
PracticeWEB team and its clients. It's very much a blueprint for linking success.

The rest of this document covers the art and science of search engine marketing as
learned by the PracticeWEB team and to be adopted by their clients.

It is my job to act as consultant to some of the world's leading organisations. I can
say quite honestly that, having looked at the PracticeWEB operation and the service
they provide to their clients, I was very impressed with their professionalism.

Link long and prosper.

Mike Grehan.




             PracticeWEB CONSULTANCY AND TRAINING AGENDA
              On site delivery at PracticeWEB offices 23/24 September 2004
-------------------------------------------------------------------------------------------------------

Agenda and objectives of the consultancy and training programme.

Discovery session:

     •    To gain insight into the current set up and operational systems and processes
          of PracticeWEB's approach to web site creation and hosting of its client
          websites.

     •    To understand vendor (PracticeWEB) expectations as far as search engines
          are concerned.

     •    To determine the level of expectation of customers as far as search engines
          are concerned.

Evaluation:

     •    Appraise current systems and processes with relevant members of staff and
          make suggestions and recommendations for enhancement.

     •    Raise awareness of the "dark side" of search engine optimisation (Spam) and
          the penalties imposed by search engines.

     •     Discuss best practice site design and server set up for search engine
          crawlers.

Tips and tools:

     •    Introduce software tools for use in automating search engine optimisation
          techniques.

     •    Raise awareness of online services for search engine marketers.

     •    Examine options for PracticeWEB to play the role of search marketing vendor
          or trainer/educator of search marketing techniques.

Formal presentation to group:

     •    Present detailed overview of how search engines really work.

     •    Q&A session.

     •    Open discussion about setting realistic expectations at levels which can be
          truly achieved.




The evolution of the search engine industry.

The history of web search engines really started with student projects at various
universities which then evolved out of academia and into commercial organizations.
Prior to the dawning of what we now know as search engines (and directories), the
web was a chaotic mess. In fact, the biggest librarian's nightmare ever.

There was information - tons of it - but you simply couldn't find it easily. Even with
today's vastly improved search technology, some people still believe that not a lot
has changed as the web continues to grow exponentially.

This is not quite true though. Search engines (and directories) have at least
attempted to provide a more methodical and logical way of retrieving information from
the billions of pages which exist on the world wide web. The work which started as
university projects has revolutionised methods of information retrieval science and
the way we use the web today.

Although we tend to use the term 'search engine' generically for any type of search
service available on the web, originally they fell into two distinct categories: search
engines and directories.

Yahoo!, for instance, started life specifically as a human powered directory. A team of
editors was created in an effort to catalogue and index the web in a hierarchical
manner, just as in a library. However, it wasn't too long before they realised that their
human powered attempt simply couldn't scale at the same pace as the web itself.

Now, the most popular form of search engine is the "algorithmic" type, which uses
automated software programs, referred to as crawlers (interchangeable with spiders),
to traverse the web page-by-page, link-by-link, downloading web pages in the
millions every day to be compiled into large searchable databases.

From the first web crawler (which was actually called the World Wide Web Wanderer)
through to current search superstar Google, brands have come and gone and the
industry has woven its way from pure technology to tech-media innovation.

While former innovators such as Alta Vista floundered in their attempts to cash in on
the portal model - at one point even suggesting that they would provide free access
to the web for all their users (a disastrous decision which saw much back-pedalling) -
two Stanford University students hit the search space like a supernova.

Larry Page and Sergey Brin collaborated to develop technology that became the
foundation for the Google search engine. The earliest web search engines based
their ranking algorithms mainly on the text which appeared on each page.

Generally speaking, this meant that any web page containing the terms which the
end user typed into the search box, e.g. "digital cameras", would be considered a
relevant candidate to be returned.

However, as the web grew and the number of candidate pages for user queries grew
with it, search engines encountered the first major stumbling block: the abundance
problem.




A ranking mechanism based on text only retrieval could bring back millions of
candidate pages for a particular "keyword search" - all of which are relevant because
they contain the search terms - but how do you determine which are the most
important or "authoritative" of those pages to place at the top of the ranking order?

PageRank™ (named after Google co-founder Larry Page and not web pages as
many assume) is a technology which harnesses data of the entire link structure of the
web (how web pages are linked together to form the web). Google uses this to
determine which pages are the most important (or authoritative), for any given search
by evaluating the number and the quality of inbound links to each given web page.
Google then uses hypertext matching analysis to factor in the textual content and
composition of each page.
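
To make the idea concrete, here is a minimal sketch of link-based ranking in the
spirit of PageRank. The four-page link graph and the damping factor are invented for
illustration; Google's actual data and implementation are, of course, proprietary.

# A minimal sketch of the idea behind link-based ranking such as PageRank.
# The four-page link graph and the damping factor of 0.85 are illustrative
# assumptions, not Google's actual data or implementation.

links = {                       # page -> pages it links out to
    "home":     ["services", "contact"],
    "services": ["home"],
    "contact":  ["home", "services"],
    "blog":     ["home"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}    # start with equal scores

for _ in range(50):                            # iterate until scores settle
    new_rank = {}
    for page in pages:
        # a page's score is built from the scores of pages linking to it,
        # each shared out across that linking page's outbound links
        incoming = sum(rank[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new_rank[page] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))

Run for a few iterations, the "home" page, which attracts the most inbound links,
floats to the top of the list.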

This innovative approach meant that Google could provide far more relevant results
than its competitors. Combined with a totally uncluttered home page focused entirely
on the search box, and results returned at lightning speed (a fraction of a second in
most cases), Google rapidly soared into place as the leading search provider online.

It's safe to say that the current crop of general purpose search engines also factor in
linkage data as a primary ranking consideration. In fact, Ask Jeeves boasts that its own
Teoma technology, a "Subject-Specific Popularity" link-based search solution,
provides results even more relevant than Google's.

The growing popularity of search engines online has also spawned a new industry
alongside: that of search engine marketing. Originally this began as search engine
optimisation: a technical method of analysing top ranking pages, both in textual
composition and number of back-links, in an effort to manipulate the search engine
results towards favoured pages.

Some search engine optimisers became the scourge of the search engines by
developing ways of "spoofing" the search engine index, using devious technical
tactics to fool search engines into ranking their pages more highly. When such pages
are detected, search engines employ their own techniques to penalise them (lower
their rank).

Natural (or organic) listings (free crawl and listings)




The graphic shown on the previous page indicates the various components of a
general purpose search engine. The top left of the graphic shows the first part of the
process which is crawling the web finding web pages to index.

There is a pervading myth about search engines that, when you issue a query at the
search box, the search engine then goes out onto the web and looks for matching
pages to return. However, this is not the case at all. Search engines continually crawl
the web, downloading millions of pages every day to place in their own
proprietary/custom databases. When a query is issued at the search engine
interface, the search engine returns pages which have been indexed in its own
database. This is one of the main reasons that you get different results for the same
query at different search services.
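
As a hedged illustration of this point, the sketch below builds two tiny inverted
indexes from invented documents. Each "engine" answers the same query from its own
stored copy of the web, which is why their results differ.

# A minimal sketch of why two search engines can return different results for
# the same query: each answers from its own pre-built index, not the live web.
# The documents below are invented for illustration.

from collections import defaultdict

def build_index(docs):
    """Map each word to the set of page IDs whose stored copy contains it."""
    index = defaultdict(set)
    for page_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(page_id)
    return index

# Two engines that happened to crawl (and store) slightly different pages.
engine_a = build_index({1: "tax relief for small businesses",
                        2: "digital cameras reviewed"})
engine_b = build_index({1: "tax planning guide",
                        3: "small business accountancy services"})

query = "tax"
print("Engine A:", engine_a[query])   # pages engine A has indexed for 'tax'
print("Engine B:", engine_b[query])   # a different set of stored pages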

You need to think of the spider as being a little like a librarian gathering indexing
information for the library filing system - with so much information to catalogue, some
things may get overlooked - particularly if they are not 'flagged up' to be noticed.

Although crawling is actually a very rapid process, conceptually a crawler is doing
just the same thing as a surfer. In much the same way as your browser, e.g. Internet
Explorer, sends HTTP (hypertext transfer protocol, the most common protocol on the
web) requests to retrieve web pages, download them and show them on your
computer monitor, the crawler does something similar, but downloads the data to a
client (a computer programme which builds a repository/database and interacts with
other components). First the crawler retrieves the URL and then connects to the
remote server where the page is being hosted.

It then issues a request to retrieve the page and its textual content, and then scans
the links the page contains to place in a queue for further crawling. Because a
crawler works on 'autopilot' and, in the main, only downloads textual data and not
images or other file types, it is able to jump from one page to the next via the links it
has scanned at very rapid speeds. Most of the major search engines can now
download tens of millions of pages every day.
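
The following toy crawl loop, written purely as a sketch, mirrors the steps just
described: fetch a page over HTTP, store it in a repository, queue the links it
contains and move on. The seed URL is a placeholder, and a production crawler would
also honour robots.txt and add politeness delays.

# A toy version of the crawl loop described above. The seed URL is a placeholder.

import re
import urllib.request
from urllib.parse import urljoin

def crawl(seed, max_pages=5):
    queue, seen, repository = [seed], set(), {}
    while queue and len(repository) < max_pages:
        url = queue.pop(0)                     # next URL in the crawl queue
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue                           # unreachable pages are simply skipped
        repository[url] = html                 # page repository, keyed by URL
        for link in re.findall(r'href="([^"]+)"', html):
            queue.append(urljoin(url, link))   # queue links for later crawling
    return repository

pages = crawl("https://www.example.com/")      # placeholder seed URL
print(len(pages), "pages downloaded")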

Simply put, crawlers are unmanned software programmes operated by the search
engines which traverse the web recording information to add to their index. Basically
they collect text and follow links. Given the schematic above, this may seem like a
fairly simplistic description, but in essence, this is all a crawler is doing. If you check
the log files (analysis of web site activity) of your site, you'll frequently see names like
‘yahoo-slurp’ or ‘googlebot’ (respectively the names of spiders for Yahoo! and
Google). So what happens when slurp or googlebot, for instance, arrive at your site?

First the text from the <title> tag is extracted. The <title> tag on your web page is
probably the most important piece of information you can feed to a search engine
spider. Next the actual text from the page is parsed (stripped out) from the HTML
code and a note of where it appeared on the page is recorded. For those search
engines which use the information in <meta> tags (only a few of the major search
engines use this information and it is not of the high importance that people
sometimes assume), the keywords and description are also extracted.




The crawler/spider then pulls out the hyperlinks and puts them into two categories:
those which belong to the site (internal links) and those which don't i.e. links to
external sites which your pages point to. External links are placed in ‘crawl control’
where they wait in the queue for future crawling. Each page from a site which is
downloaded is placed in the page repository and given an ID number.
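
Sketched below, with an invented sample page, is roughly what that indexing step
looks like: extract the <title>, the <meta> description, the plain text, and split the
links into internal and external (the latter being handed to 'crawl control').

# A sketch of the indexing steps just described. The sample HTML and domain are invented.

import re
from urllib.parse import urlparse, urljoin

def parse_page(url, html):
    title = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    description = re.search(
        r'<meta\s+name="description"\s+content="([^"]*)"', html, re.I)
    text = re.sub(r"<[^>]+>", " ", html)           # strip tags to get plain text
    internal, external = [], []
    site = urlparse(url).netloc
    for link in re.findall(r'href="([^"]+)"', html):
        full = urljoin(url, link)
        (internal if urlparse(full).netloc == site else external).append(full)
    return {
        "title": title.group(1).strip() if title else "",
        "description": description.group(1) if description else "",
        "text": " ".join(text.split()),
        "internal_links": internal,
        "external_links": external,                # handed to 'crawl control'
    }

sample = ('<html><head><title>Tax Relief Experts</title>'
          '<meta name="description" content="Advice on tax relief"></head>'
          '<body><h1>Tax relief</h1><a href="/contact">Contact</a>'
          '<a href="https://www.othersite.co.uk/">Partner</a></body></html>')
record = parse_page("https://www.example-accountants.co.uk/", sample)
print(record["title"], record["external_links"])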

As has already been mentioned, we tend to use the term 'search engine' generically
for all search services on the web. It's also interesting to note that, when we use the
term with regard to true search engines (the crawlers), we tend to talk about them as
though they were all the same thing. The fact of the matter is, even though they all
use Crawlers/Spiders to build their indexes, they all collect different information in
different ways. And the algorithm (the computer programme which sorts and ranks
search results) which each of the major search engines uses for ranking purposes is
unique to each specific service.

When someone issues a keyword query (or key phrase) at a search engine interface,
it is just like issuing a query to a database. Based on the keyword or phrase which is
input, the retrieval programme (algorithm) can return millions of pages containing
those keywords or phrases. However, a number of things are considered based on
the information the spider returned: link popularity, keyword ratio/density (i.e. how
many times the keyword appeared on the page and where it appeared), how old the
site is, how long it has been in the database and whether the page has changed
since the last visit, amongst other things. A 'weighting factor' is added or subtracted
depending on the number of times a keyword is repeated, and likewise for where it
appears on the page. All of these and other factors which are programmed into the
algorithm are considered to determine in which order of preference the results will
be returned.
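
The weighting idea can be sketched as a simple scoring function. The weights and the
fields inspected below are invented for illustration only; each engine's real algorithm
is proprietary and measures far more.

# A hedged sketch of the kind of 'weighting factor' scoring described above.
# The weights are invented; real ranking algorithms are proprietary.

def score_page(page, query):
    terms = query.lower().split()
    body = page["text"].lower().split()
    score = 0.0
    for term in terms:
        occurrences = body.count(term)
        density = occurrences / max(len(body), 1)
        score += min(density * 100, 3.0)          # repeated terms help, to a point
        if term in page["title"].lower():
            score += 5.0                          # title text weighted heavily
        if term in page["description"].lower():
            score += 1.0                          # meta data counts for little
    score += 0.5 * page["inbound_links"]          # link popularity
    return score

page = {"title": "Tax Relief Experts",
        "description": "Advice on tax relief",
        "text": "We offer tax relief advice for small businesses.",
        "inbound_links": 12}
print(score_page(page, "tax relief"))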

It's of interest to note that many studies have shown that the average surfer rarely
goes beyond the first page of results following a keyword query. Also, the average
surfer will use between three and five words for a query (mostly three). And the
relevancy of the pages returned following a query begins to 'drop like a stone' after
the second page of results. It's also very important to remember that search
engines, from time to time, will change their ranking algorithm, and therefore what
scores a top 10 result today may not do so tomorrow. This is why it is so important
to monitor for changes.




Understanding that search engines crawl the web for pages, and then return pages,
not web sites, gives greater insight into how to develop your web site to be crawler
friendly. Remember, a web site is not visited in a linear manner, such as in the graphic
to the left on the page above, i.e. hierarchically working from the topmost page to the
lowest. In fact, crawlers come at your web site from all sides looking for data, as in
the graphic on the right.

Paid inclusion (paying to have your site crawled)

Certain site design elements can cause problems for crawlers/spiders. These
include such things as dynamic delivery, i.e. web sites where the page content is
created automatically from a database. These types of web sites create URLs (web
page addresses) containing an array of characters such as ? & % and others.

Until recently, a crawler would not venture past one of these URLs for fear of
getting trapped in the database itself. It may be that the database site has only a
few hundred pages, which would not be much of a problem. But it may also be
that the database has 150,000 pages, which could force the crawler into a recursive
loop, bringing down both the webserver and the crawler.

There are ways and means of "flattening out" these URLs, such as mod_rewrite at
the server level, for instance. However, even though both Yahoo! and Google are
having fewer and fewer problems with these types of sites, there are other technologies
which do cause major crawling problems. The use of Flash for animations is a delight
for the site visitor, but means nothing to a search engine crawler as it contains so little
textual content. Audio visual files and streaming video technology cause issues as
they don't have enough data for a crawler to be able to index.
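
mod_rewrite itself is Apache server configuration rather than something a page needs to
contain, but the "flattening" it performs can be illustrated with a small sketch: a
dynamic, query-string URL is exposed to crawlers as a clean, static-looking path. The
URL scheme used here is invented.

# Illustration of the URL 'flattening' idea behind mod_rewrite. The URL pattern is invented.

import re

def flatten(dynamic_url):
    """Rewrite e.g. /articles.php?section=tax&id=42 to /articles/tax/42/"""
    match = re.match(r"/articles\.php\?section=(\w+)&id=(\d+)", dynamic_url)
    if match:
        section, article_id = match.groups()
        return f"/articles/{section}/{article_id}/"
    return dynamic_url

print(flatten("/articles.php?section=tax&id=42"))   # -> /articles/tax/42/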

Beyond looking for workarounds, of which there are many, Yahoo! has made it easy
for site owners to guarantee that their pages are indexed, regardless of the
technology, by offering a 'paid inclusion' service.

The brand new Yahoo! Search was launched at the beginning of March 2004. This
replaced the natural (organic) results supplied by Google and also replaced the paid
inclusion programmes previously provided by Inktomi, Fast (AllTheWeb) and Alta
Vista and rolled them all into one.

Whereas before, paid inclusion programmes worked on the basis of an initial sum for
the first URL (say $39) and then a lower amount per URL after that (say $25), and
that was it, SiteMatch (the name for the new Yahoo! paid inclusion programme)
differs in that it charges an initial subscription fee and then a cost per click charge. It
has to be remembered that the only guarantee you are getting from Yahoo! is rapid
inclusion in their index and across their network. There is NO guarantee of any
preference for ranking. And unlike previous paid inclusion programmes, SiteMatch
subscriptions and feeds are reviewed by human editors (for quality control purposes)
before they are included in the index.
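
To see how the two pricing models compare, a rough back-of-envelope calculation
helps. The old per-URL fees are the illustrative figures quoted above; the SiteMatch
subscription and per-click amounts below are purely hypothetical placeholders, not
Yahoo!'s published rates.

# Rough cost comparison of the two paid-inclusion models described above.
# The $39/$25 figures come from the text; everything else is hypothetical.

urls = 10
old_model = 39 + 25 * (urls - 1)          # flat fee per URL, no clicks charged

sitematch_subscription = 49               # hypothetical annual review fee
cost_per_click = 0.30                     # hypothetical
clicks_per_month = 400
sitematch = sitematch_subscription + cost_per_click * clicks_per_month * 12

print(f"Old paid inclusion: ${old_model}")
print(f"SiteMatch-style:    ${sitematch:.2f} per year at {clicks_per_month} clicks/month")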

Yahoo! says that 95% of its index is made up from the free crawl. So it's absolutely
essential for you to check and see if your pages are already in the Yahoo! index
*before* you go the SiteMatch route (see the section on submitting to the free crawl
at Yahoo!).




Paid listings (search engine advertising)

Overture purchased Alta Vista and AllTheWeb in 2003. In turn, Overture itself was
acquired by Yahoo! in 2003.

Is it a search engine? Is it a directory? Is it an auction site? Is it an advertising media
network? Is it a classified ads site? Well… it's kind of a combination of the lot,
actually! Overture does not crawl the web for sites to include in its database. The
main results at Overture are those where webmasters have bid for particular
keywords/phrases. The more they pay, the higher they come. And in true Dutch
auction style, if your competitor bids more for that particular keyword/phrase, then
down you slip… until you bid more again, of course. Overture also has a team of over
100 editors screening ads for their partner sites. So, hybrid is definitely the word
here. The interesting thing about Overture is that they attract more revenue from their
partner sites than they do from their own. In fact, over 90% of revenue is from clicks
at MSN, Yahoo!, Alta Vista and other partners (these are marked as 'sponsored
results' at partner sites).

Overture, formerly known as GoTo, is the world's leader in Pay-For-Performance
(commonly known as pay-per-click) search on the Internet. Advertisers bid for
placement on relevant search results and pay Overture only when a consumer clicks
on their listing. Following a rigorous screening for user relevance by Overture's 100-
person editorial team, the company distributes its search results to tens of thousands
of sites across the internet, including Microsoft and Yahoo!, making it the largest Pay-
For-Performance search and advertising network on the internet.

The same type of service is also offered by Google with their AdWords product. As
with Yahoo!/Overture, you can see these adverts to the right of the page and
frequently at the top of the page above the natural (organic) results. Advertising on
search engines this way is very much a Dutch auction. If you bid for the keyword
phrase "tax relief expert", your competitor may then pay a higher click rate than you
and leapfrog over you in the listings. You may then pay a higher rate per click than
him, and so on and so forth.

Although the amounts per click on less competitive keywords and phrases can be
quite low, i.e. 10p per click, these amounts do add up if you attract a lot of visitors.
Remember, 10p per click multiplied by 50 clicks per day is £5. And £5 per day
multiplied by 28 days is £140 per month. And then remember, this is for only one
keyword/phrase; you may end up bidding on dozens or even hundreds.
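
Here is that arithmetic made explicit; the keyword count at the end is a hypothetical
extension of the example.

# The click-cost arithmetic from the paragraph above. The 10p / 50 clicks / 28 days
# figures are the text's own example; the keyword count is a hypothetical extension.

cost_per_click = 0.10        # £0.10, i.e. 10p
clicks_per_day = 50
days_per_month = 28

monthly_cost = cost_per_click * clicks_per_day * days_per_month
print(f"One keyword: £{monthly_cost:.2f} per month")        # £140.00

keywords = 25                # hypothetical campaign size
print(f"{keywords} keywords at similar volume: £{monthly_cost * keywords:.2f} per month")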

Anatomy of a web page:

There is no doubt that it is far more cost effective to rank well in the natural listings as
opposed to the paid listings. So understanding the best composition of a web page,
to ensure that it is crawler friendly and in context, is very important.

Avoiding technical barriers, such as username and password protected web sites and
dynamically generated content (as mentioned earlier), provides ease of access to
web crawlers. And the cleaner your code, with keywords placed in important areas of
the page, the easier it is for the indexer to get an idea of what the subject matter of
the page is.
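
As a practical aid, the small checklist sketch below verifies that a target phrase
appears in the areas of a page generally regarded as important: the <title>, the
headings and the early body copy. The choice of areas reflects common practice rather
than any engine's published rules, and the sample page is invented.

# A checklist-style sketch: does the target phrase appear in the 'important' areas?
# The areas checked are common-practice assumptions; the sample page is invented.

import re

def keyword_checklist(html, phrase):
    phrase = phrase.lower()
    title = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    headings = " ".join(re.findall(r"<h[1-3][^>]*>(.*?)</h[1-3]>", html, re.I | re.S))
    body = re.sub(r"<[^>]+>", " ", html).lower()
    return {
        "in title": bool(title and phrase in title.group(1).lower()),
        "in headings": phrase in headings.lower(),
        "in first 200 words": phrase in " ".join(body.split()[:200]),
    }

page = ("<html><head><title>Tax relief advice for contractors</title></head>"
        "<body><h1>Tax relief advice</h1><p>Plain-English tax relief advice "
        "for contractors and small businesses.</p></body></html>")
print(keyword_checklist(page, "tax relief"))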




Writing for man and machines.

When writing for search engines, it's essential to remember that you need to
understand the machine's limitations, as well as the surfers' frustrations. So choosing
the right keywords to optimise and build your informational pages around is
essential. We're writing for machines first, in order to get that all-important rank. And
for humans second, because without the rank they may never see the page. It's a
completely different style of writing. Yet it has to please both man and machine.

Let me try to put the two into context and see if I can make the combination easier to
understand. I'll separate the robots from the humans first, and then put them back
together again. As I understand them, the basic categorical definitions in information
retrieval science are these:


   •   Data: a representation of facts or ideas in a formalised manner, capable of
       being communicated or manipulated by some process.


   •   Information: the meaning that a human assigns to data by means of the
       known conventions used in its representation.


So: data is related to facts and machines, information is related to meaning and
humans.

If you look at the graphic above, you get an indication of where important keywords
should appear on the page. It's vitally important to remember that the page and the
copy have to be appealing to end users too. Don't forget the page is designed
essentially for a human to read.




The importance of links:

In first generation search engines, the textual content of a page was the primary clue
to the "relevance" of a page following a specific query. But as the web grew, so did
the number of relevant pages for each query.

So search engines had to look at another way of "ranking" important pages following
a query, and they borrowed from existing research which had been carried out in
social network analysis. Instead of looking at pages and text in isolation, they looked
at both that and the linkage data surrounding each page, i.e. the number of links that
point back to your pages from pages on somebody else's web site.

The first search engine to use this method openly was Google. And this is why they
shot to number one in the search engine charts. By using a method known as
PageRank (which is named after the developer of the algorithm, Larry Page, and not
web pages), they have been able to rank pages not only on their popularity, as in
"the page with the highest number of links wins", but also on the quality of the pages
which point back to you. So it's more about quality links pointing back as opposed to
the quantity of links.

In this way, search engines are able to create web maps of cyber communities. First
the linkage data is analysed to look at what are known as "hubs and authorities".
Hubs are web sites which point to many good sites on a specific topic. For instance,
a directory site which points to many good accountancy sites would be a good
example of a hub. The graphic below gives a basic example of how the theory works.




The red sites are the hubs and they point to many good sites on the same topic,
known as authorities (the blue sites). So search engines are able to break down
linkage data across billions of web pages and identify the cyber communities.
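
A minimal sketch of the hubs-and-authorities idea (in the spirit of the HITS algorithm)
is shown below. The link graph is invented: two directory sites act as hubs pointing at
a handful of accountancy sites.

# A minimal hubs-and-authorities sketch (HITS-style iteration). The link graph is invented.

links = {
    "directory_a": ["acct_smith", "acct_jones", "acct_brown"],
    "directory_b": ["acct_smith", "acct_jones"],
    "acct_smith":  [],
    "acct_jones":  ["acct_smith"],
    "acct_brown":  [],
}

hub = {p: 1.0 for p in links}
authority = {p: 1.0 for p in links}

for _ in range(20):
    # a good authority is pointed at by good hubs
    authority = {p: sum(hub[src] for src, outs in links.items() if p in outs)
                 for p in links}
    # a good hub points at good authorities
    hub = {p: sum(authority[dst] for dst in links[p]) for p in links}
    # normalise so the scores stay comparable between iterations
    a_norm, h_norm = sum(authority.values()) or 1, sum(hub.values()) or 1
    authority = {p: v / a_norm for p, v in authority.items()}
    hub = {p: v / h_norm for p, v in hub.items()}

print("Top hub:      ", max(hub, key=hub.get))
print("Top authority:", max(authority, key=authority.get))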




Each colour represents a different community on the web. And whatever your
community - be it web sites involved in the accountancy and finance industry, hobby
sites, religion, whatever - that's where you need to acquire "quality" links from.

See appendix 1 Linking Workbook for a working guide to finding quality links for your
web site.

The major players:

Over 90% of all traffic online comes from just four major search engines. This is
where you need to concentrate your promotional efforts. For natural (organic) traffic
i.e. the crawler based search engines, the main distribution looks like this:




For pay per click (PPC) the distribution network looks like this:




What "not to do" - avoid being penalised or banned from a search engine
index:

Some webmasters seem to spend most of their time trying to figure out ways to fool
or 'spoof' search engines. Trying to find ways of manipulating your position in a
search engine, or driving untargeted traffic to your web site when visitors don't really
want to be there, is a pointless task. Your time is better spent on getting quality
content and highly optimised pages indexed correctly. These are some of the
over-used 'tricks' that will get you into trouble with search engines:

Keyword stuffing:

Adding hundreds of keywords to your meta tags, comment tags or at the bottom of
your pages.

Hidden text:

Adding text to your pages in the same colour as your background i.e. white text on a
white background. Not visible to the human eye but visible to spiders.

Tiny text:

Adding text in a tiny size font to your pages. Too much tiny text is just too much for
search engines.

Over submitting:

Submitting your site or pages hundreds of times or even thousands. You’ll get
dropped from the index.


Refresh tags:

Using fast refresh tags to quickly move your visitor from one page to another. Search
engine spiders sniff this one out easily and you'll see your site dropping like a stone.
Some JavaScript techniques are used to avoid the HTML <meta> refresh tag, but even
when used for legitimate reasons they are still frowned on by search engines.

Pagejacking:

Finding a top ranking site and literally cutting and pasting all of the code and graphics
into your own page and then submitting it.

Bait and switch:

You submit one highly optimised page and wait for it to rank… and then you pull it off
the server and replace it with another page of your choice.

Cloaked HTML:

When you have a high scoring page, it’s inevitable that someone will steal your code,
so it makes sense in highly competitive market places to be able to protect your
code, and cloaking is the only guaranteed way to do it. Cloaking allows the web
master to feed the spider with a highly optimised text page and the visitor with the
glossy graphics of the actual site. It has to be said though, search engines find this to
be the number one crime. Any site caught cloaking is liable to be dropped from the
search engine index completely.

How search engines are likely to change:

It's hard to believe it, but the sciences and disciplines which go into information
retrieval on the web are only ten years or so old. The new science of networks is a
fascinating field for both academia and the commercial domain.

Search engines strive to provide more relevant results and more qualified visitors via
their advertising programs. And the best way that a search engine could improve on
its relevancy is by knowing more about the end user.

It's often been said that a search engine is like the "black box". A technological
mystery that only the search engine scientists themselves understand. Yet the same
applies in reverse: to the search engine, the end user is a "black box" about which it
knows very little.

This is why personalisation and localisation are the new areas of research and
development in the search industry. The more information a search engine can
discover about the end user's searching habits, the easier it is to provide personalised,
or at least more relevant, results.

Yahoo!, with its hundreds of millions of subscribers, is in a prime position to capitalise
on this. People logging in to check and send email, or to read news and sports reports,
are also likely to be shopping online. And this is where more relevant search results and
pertinent advertising messages can be served.




MSN has its own huge community of subscribers, which it will no doubt tap into when
it launches its own new search service later this year. And of course, they have
Hotmail and news channels, just as Yahoo! does.

So it's no surprise that Google recently launched its own social network service
(Orkut). Or that they announced the impending launch of Gmail, their web based
email service.

And a quick check at Ask Jeeves will show that their purchase of Interactive Search
Holdings also brought them a web based email service and a number of social
network type sites.

It's all about providing a whole range of services, including personalised search
results, to lock you into a search brand and maintain an ongoing dialogue.

Both Google and Yahoo! have allowed the industry (and the public) to take a little
"peek up their skirts" to get an indication of future promise. But we've still yet to see
what the "dark horse" MSN is likely to unveil with the launch of their own search
service.

About Mike Grehan:


Mike is a sought after speaker in Europe and the U.S. and a noted author and search
engine marketing expert whose contributions to the industry have been well
documented in multiple books, including his most recent, the second edition of
"Search Engine Marketing."

He is perhaps best known for the unprecedented access he has been granted to
interview and peek under the hood at many of the major search engines. He has
insight into their search algorithms and a technical understanding like no one else in
the industry. Danny Sullivan, editor of Search Engine Watch, readily refers to Grehan
as a "top notch search engine marketer."


He was recently commended by the internet industry as one of the top 100 influential
people in internet business over the past decade, along with his peers who included
easyjet.com founder Stelios Haji-Ioannou, Martha Lane Fox of lastminute.com and
rock star Peter Gabriel.

Supported by the Department of Trade and Industry (DTI), the influential 100 were
thanked for their contributions to the industry by the Secretary of State for Trade and
Industry, Patricia Hewitt, and the inventor of the world wide web, Sir Tim Berners-Lee.


Mike is frequently quoted in leading publications such as the Wall Street Journal,
New York Times, Los Angeles Times, Financial Times, Marketing Business
Magazine, Institute of Directors Magazine and has been interviewed on BBC
television and radio.

As Editor in Chief of e-marketing-news he reaches over 17,000 online marketers
around the world. His white papers on the subject of search engine marketing have
been downloaded in the tens of thousands.

Mike is also Search Engine Marketing columnist for Net Imperative, an international
e-commerce publication and a contributor to Search Engine Watch and Revolution
Magazine.

He is the search engine marketing expert for the e-consultancy forums and a member
of and contributor to the Search Engine Watch, High Rankings and Webmaster World
forums. He is also currently an advisor to the working group developing the search
engine marketing trade association in the UK.

Although the majority of Mike's time is spent as a "road warrior" both in Europe and
the US, he resides just outside of Newcastle, close to the Scottish borders in the UK.



