Clustering by yaohongm




           What You Will Learn
           After completing this lesson, you will be able to:
              Understand the basics of Clustering
              Grasp the concept of Load Balancing
              Recognize clustering and load balancing as design issues

Cluster Benefits

                   There are three primary benefits to server clustering:
                   improved availability, easier manageability, and more
                   cost-effective scalability. Using Microsoft® Cluster Server
                   (MSCS) as an example:
                      Availability
                       MSCS can automatically detect the failure of an
                       application or server, and quickly restart it on a
                       surviving server. Users experience only a momentary
                       pause in service.
                      Manageability
                       MSCS lets administrators quickly inspect the status of
                       all cluster resources, and easily move workload around
                       onto different servers within the cluster. This is useful
                       for manual load balancing, and to perform "rolling
                       updates" on the servers without taking important data
                       and applications offline.
                      Scalability
                       "Cluster-aware" applications can use the MSCS
                       services through the MSCS Application Programming
                       Interface (API) to do dynamic load balancing and scale
                       across multiple servers within a cluster.

Windows 2000 Server Family

                   This slide clarifies the availability of Windows Clustering.
                   Windows Clustering is found only in Microsoft®
                   Windows® 2000 Advanced Server and Datacenter
                   Server. Clustering was first introduced in Microsoft®
                   Windows NT® 4.0 Enterprise Edition.

What Is a Cluster to Microsoft?

                     This slide gives Microsoft's definition of a cluster:
                     A group of independent computers that work together to
                     run a common set of applications and provide the image
                     of a single system to the client and application.
                     Stated simply, the purpose of a cluster is to boost
                     availability, scalability, and manageability across
                     multiple tiers of a network.

Why Clusters?

                Clusters are broadly used to make applications highly
                available (over 99.9%). They are also used to scale up an
                application, because each application has throughput
                bottlenecks. Using clustering technologies, you can also
                simplify deployment by using rolling upgrades.

Architectural Goals for Clustering

                     Large business sites are models of dynamic change. They
                     start small and grow exponentially in the number of
                     unique users supported and in the complexity and
                     integration of the user services they offer.
                     The business plans for many site startups are vetted by
                     their investors for believable 10-100x scalability.
                     Companies can manage this growth by incrementally
                     growing the number of servers that provide logical
                     services to their clients—either by the servers offering
                     multiple instances of themselves (clones) or by
                     partitioning the workload among themselves and creating
                     services that integrate with existing computer systems.
                     This growth is built on a solid architectural foundation that
                     supports high availability, a secure infrastructure, and a
                     management infrastructure.
                     Companies also need continuous service availability.
                     Their sites have to be available seven days a week, 24
                     hours a day (7x24). They often use redundancy and
                     functional specialization to isolate faults.
                     All of these architectural goals should be met without
                     increasing site management. The architecture should also
                     ensure that operations can be performed and managed.

How to Scale

               When using clustering to scale your application, you can
               take two different approaches. While other vendors
               offer pure hardware scaling, Microsoft has decided to
               provide "software" scaling.
               Hardware scaling (scaling up) means that machines get
               bigger and bigger, with more processors and more memory,
               whereas with software scaling (scaling out) you scale your
               application using farms of smaller, independent servers.


                 Hardware Scale 1/2
                 Hardware scaling effectively uses big symmetric
                 multiprocessing systems. The pros of this approach are:
                    You have only a single system to program
                    Having only one system allows server consolidation in
                     computing centers
                    Even for Windows NT or Windows 2000, high volume
                     8-way systems are available on the market
                    With Windows 2000 Datacenter Server there is a
                     future path to high volume 32-way systems that are as
                     fast as mainframe systems


                 Hardware Scale 2/2
                 Hardware scaling, of course, also has its challenges:
                    It does not scale down to single systems (which makes
                     starter or development systems very expensive)
                    These systems are a single point of failure
                    The biggest disadvantage is that they do not provide
                     linear scaling. The costs may explode with every
                     additional CPU while the performance gain from each
                     CPU will go down.

Software Scale

                 Software scaling is another approach to providing higher
                 scalability and availability. Software scaling has several
                 advantages:
                    Easy modular expansion of an existing system by
                     simply adding a new computer to the cluster
                    No single point of failure
                    No hardware limitations to scalability
                    Scalability is linear with incremental cost
                    It allows scaling down to starter or development systems
                    Use of simple and inexpensive hardware

                 The only challenge when using software scaling is the
                 management of multiple systems. Microsoft's forthcoming
                 Application Center 2000 software addresses this challenge.


                    Microsoft Software Scaling 1/2
                    Microsoft today provides three different technologies for
                    achieving higher scalability and availability. Each
                    technology targets a different area:
                       Network Load Balancing (NLB). Balances Internet
                        Protocol (IP) traffic across up to 32 nodes. NLB provides
                        higher scalability and availability for Web clusters.
                       Component Load Balancing (CLB). Allows dynamic
                        load balancing of COM components. CLB provides
                        higher scalability, and it may provide higher availability
                        (a later slide explains why it only may).


                    Microsoft Software Scaling 2/2
                        Microsoft Cluster Service (MSCS). Provides 2-node
                         failover support (4-node in Microsoft® Windows®
                         2000 Datacenter Server). MSCS does not provide
                         higher scalability, but it delivers the highest
                         availability for your applications.


                   Why Three Technologies? 1/2
                   While competitors provide only one solution for higher
                   availability and scalability (often just called clustering),
                   Microsoft has decided to develop different technologies for
                   different types of services.
                   For example, Web servers are often a bottleneck that
                   needs some kind of network load balancing, but balancing
                   IP traffic does not help with COM-based components or
                   applications. Databases, messaging systems, and
                   enterprise resource planning (ERP) systems are running
                   on high performance hardware that must be secured
                   against system problems.


                   Why Three Technologies? 2/2
                   A benefit of Microsoft’s clustering technologies is the
                   reduced costs of gaining availability and scalability,
                   compared to competitors. All three technologies use
                   industry-standard hardware and software—no special
                   equipment is needed. Reduced costs are also achieved
                   by allowing scaling across multiple nodes so that less
                   expensive servers can be used.
                   Another benefit of the three technologies is the increased
                   overall availability. With these technologies you are able
                   to split applications and services across different (types
                   of) clusters. Thus you can also minimize the number of
                   single points of failure.


                     Microsoft Cluster Service 1/2
                     The basic MSCS usage scenario is Active/Passive
                     clustering, meaning that each application runs on one
                     node at a time.
                     MSCS uses software "heartbeats" to detect failed
                     applications or servers. In the event of a server failure, it
                     employs a "shared nothing" clustering architecture that
                     automatically transfers ownership of resources (such as
                     disk drives and IP addresses) from a failed server to a
                     surviving server. It then restarts the failed server's
                     workload on the surviving server. All of this—from
                     detection to restart—typically takes under a minute.
                     If an individual application fails (but the server does not)
                     this is detected by resource monitors. In this case MSCS
                     will typically try to restart the application on the same
                     server; if that fails, it moves the application's resources
                     and restarts it on the other server.
                     The cluster administrator can use a graphical console to
                     set various recovery policies, such as dependencies
                     between applications, whether or not to restart an
                     application on the same server, and whether or not to
                     automatically "fail back" (rebalance) workloads when a
                     failed server comes back online.
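The detection-and-restart policy described above can be sketched as a small simulation. This is a hypothetical toy model, not the actual MSCS implementation; the `Node` and `ResourceMonitor` names are invented for illustration:

```python
class Node:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.owned = set()   # resources (disks, IP addresses) this node owns

    def restart(self, app):
        # A local restart can succeed only if the server itself is healthy.
        return self.healthy

class ResourceMonitor:
    """Toy sketch of MSCS-style recovery policy (hypothetical names)."""

    def __init__(self, restart_limit=3):
        self.restart_limit = restart_limit  # local restarts before failing over

    def handle_failure(self, app, resources, local, partner):
        # Policy 1: try to restart the application on the same server.
        for _ in range(self.restart_limit):
            if local.restart(app):
                return local.name
        # Policy 2: "shared nothing" failover - transfer ownership of the
        # app's resources to the survivor and restart the workload there.
        local.owned -= set(resources)
        partner.owned |= set(resources)
        partner.restart(app)
        return partner.name

a, b = Node("ServerA", healthy=False), Node("ServerB")
a.owned = {"DiskA", "10.0.0.5"}
survivor = ResourceMonitor().handle_failure(
    "SQL Server", ["DiskA", "10.0.0.5"], a, b)
# The disk and IP address now belong to the surviving server.
```

The key point the sketch illustrates is that only one node owns a given resource at any time, which is what preserves data integrity during failover.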


                     Microsoft Cluster Service 2/2
                     Since cluster systems need to be highly available, special
                     hardware is often used. Microsoft certifies systems to be
                     used in a cluster. The complete list of certified systems is
                     available in the Hardware Compatibility List.
                     In MSCS, systems interconnect using a shared small
                     computer system interface (SCSI) or Fibre Channel bus.
                     This means that clusters also use shared disks on the
                     shared SCSI/Fibre Channel bus.
                     To maintain clients' ability to access the back end,
                     MSCS supports virtual servers. A client connects to the
                     virtual server rather than to a particular cluster node,
                     and MSCS routes the connection to the cluster node
                     currently running the service. In case of failover,
                     clients are reconnected to the surviving cluster node.
                     for File/Print services—a built-in feature of Windows, and
                     for database systems (with Microsoft SQL Server™
                     Enterprise Edition) as well as Exchange servers
                     (Exchange Enterprise Edition).
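The virtual-server indirection can be illustrated with a minimal sketch. The names and the single-object model are hypothetical; in real MSCS a virtual server is a group of resources (network name, IP address, and services) that moves between nodes:

```python
class VirtualServer:
    """Clients address the virtual server, never a physical node (sketch)."""
    def __init__(self, name, ip, owner):
        self.name, self.ip = name, ip
        self.owner = owner            # cluster node currently hosting it

    def fail_over(self, new_owner):
        self.owner = new_owner        # the name and IP move with the service

def connect(vs):
    # The client only ever sees vs.name / vs.ip; the cluster delivers the
    # connection to whichever node currently owns the virtual server.
    return vs.owner

vs = VirtualServer("SQLVS1", "10.0.0.20", "NodeA")
vs.fail_over("NodeB")   # after failover, clients reconnect transparently
```

Because clients bind to the virtual name and IP, a failover changes nothing from their point of view beyond a brief reconnect.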

2-Node Cluster Service

                    Presentation note: Animation slide, let it build and explain
                    these steps as you go.
                    1. Two nodes, typical deployment. SQL Server is running
                       on Server A, Exchange on Server B. Server A “owns”
                       Disk Cabinet A (a Redundant Array of Inexpensive
                       Disks (RAID) array, for example), while Server B owns
                       Disk Cabinet B.
                    2. Server A suffers a failure of some sort (illustrated via
                       an explosion).
                    3. Server A’s connection to Disk Cabinet A is lost.
                    4. Server A attempts to restart SQL Server (in the event
                       the application itself failed). If unable (for example, the
                       entire server is lost), Step 5 below then occurs.
                    5. Server B detects this failure and assumes ownership of
                       Disk Cabinet A.
                    6. Server B then starts SQL Server in addition to keeping
                       Exchange running. SQL Server can then access the
                       same data it was working with while running on Server
                       A. Data integrity is preserved, and clients are unaware
                       their applications are functioning on a different server.

Rolling Upgrades

                   When using an MSCS cluster you are also able to perform
                   rolling upgrades of your system. This means that you can
                   upgrade your systems (for example, install service packs)
                   while the system is running. This is done in four steps:
                   1. Both servers in the cluster are running. To start the
                      upgrade, simply shut down one of the two nodes. A
                      failover takes place and both resources stay online.
                   2. Upgrade that system. During its downtime the other
                      node serves both resources.
                   3. After the upgrade is complete, both machines are back
                      online and you can shut down the other machine.
                      Again a failover occurs, and both resources stay online
                      on the upgraded machine. You can now upgrade the
                      other machine.
                   4. After this upgrade is complete, the whole cluster is
                      upgraded and back at full capacity.
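The four steps above can be sketched as a toy simulation. This is a hypothetical model for illustration only; the real procedure is performed with the cluster administration tools, not code:

```python
class Cluster:
    """Toy 2-node cluster: each resource is owned by exactly one node."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.owner = {"SQL Server": nodes[0], "Exchange": nodes[1]}

    def drain(self, node):
        # Shut the node down: everything it owns fails over to the other node.
        other = next(n for n in self.nodes if n != node)
        for res, own in self.owner.items():
            if own == node:
                self.owner[res] = other

    def rejoin(self, node):
        pass  # node is back online (failback policy not modeled here)

def rolling_upgrade(cluster, upgrade):
    a, b = cluster.nodes
    cluster.drain(a)      # 1. shut down node a; resources stay online on b
    upgrade(a)            # 2. upgrade a while b serves both resources
    cluster.rejoin(a)     # 3. bring a back...
    cluster.drain(b)      #    ...then shut down and upgrade b
    upgrade(b)
    cluster.rejoin(b)     # 4. whole cluster upgraded, no service outage

upgraded = []
c = Cluster(["NodeA", "NodeB"])
rolling_upgrade(c, upgraded.append)
```

At every step of the simulation each resource has an owner, which is the property that keeps services online throughout the upgrade.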


                   Windows 2000 Cluster Service Enhancements 1/2
                   Windows 2000 enhancements to the Microsoft Cluster
                   Service are:
                      Support for Active Directory and MMC integration. The
                       Cluster Service for Windows 2000 uses the Active
                       Directory service to publish information about clusters.
                       Integration with the Microsoft Management Console
                       makes setup easy and allows easy monitoring.
                      Recovery from Network Failures. The Cluster Service
                       for Windows 2000 implements a sophisticated
                       algorithm to detect and isolate network failures and to
                       improve failure recovery actions.
                      Plug and Play Support for Networks and Disks. Using
                       the Plug and Play technology the Clustering Service
                       detects the addition and removal of network adapters,
                       Transmission Control Protocol/Internet Protocol
                       (TCP/IP) network stacks, and shared physical disks.
                      Windows Internet Naming Service (WINS), Distributed
                       File Services (DFS), and Dynamic Host Configuration
                       Protocol (DHCP) Support. The Cluster Service now
                       supports WINS and DHCP, and the Distributed File
                       Services as cluster-aware resources that support
                       failover and automatic recovery. A file share resource
                       can now serve as a DFS root. It also now supports
                        Simple Mail Transfer Protocol (SMTP) and Network
                        News Transfer Protocol (NNTP, RFC 977).
                       COM Support for the Cluster API. The Cluster Service
                        of Windows 2000 includes a standard API for
                        developing and supporting cluster-aware applications.
                        This API can be used to create cluster-aware
                        applications that can automatically balance loads
                        across multiple servers within the cluster, and can be
                        accessed via COM to control cluster behavior and
                        automate many cluster administration tasks.


                   Windows 2000 Cluster Service Enhancements 2/2
                   Additional features of Windows 2000 clusters are:
                      4-node clusters in Windows 2000 Datacenter Server.
                       4-node clusters allow a cascading failover within the
                       four nodes.
                      Support for Storage Area Networks
                      Various administrative enhancements such as a
                       wizard for virtual servers, the ability to do an
                       unattended scripted setup of cluster nodes (SYSPREP
                       tool) and Microsoft Management Console (MMC)
                       integration of the cluster administrator


                   Network Load Balancing 1/2
                   Network Load Balancing (NLB) is one of the Windows
                   Clustering features of Microsoft Windows 2000 Advanced
                   Server. Network Load Balancing can enhance the
                   availability and scalability of Internet server programs
                   such as those used on Web servers, FTP servers, and
                   other mission-critical servers. Network Load Balancing
                   clusters distribute client connections over as many as 32
                   servers, providing scalability and high availability for client
                   requests for TCP/IP-based services and applications.
                   NLB was formerly known as “Convoy Cluster Software”
                   from Valence Research, an August 1998 Microsoft
                   acquisition. NLB is a pure software-based solution that is
                   installed on each node as a virtual network card driver. So
                   there is no single point of failure and no potential
                    performance bottleneck as in other, hardware-based solutions.
                   Clients access the cluster through one virtual IP address.
                   NLB also supports multihomed servers (servers with
                   multiple virtual IP addresses).
                      32 nodes is not an architectural limit, but a practical
                       one. In more than 90 percent of deployments, cluster
                       size is in the 4-10 node range.
                      NLB does not monitor for application failure, but is
                       designed to work very easily with third-party network
                       monitoring tools via a command line interface.


                   Network Load Balancing 2/2
                    NLB itself does not know anything about the services
                    used over TCP/IP; it simply balances IP connections. You
                    can therefore use it with a wide variety of IP-based
                    services, such as:
                      Web services (HTTP, FTP, Gopher)
                      Virtual Private Networking (VPN)
                      Streaming Media (Windows Media Technologies)
                      Proxy services

                    To use NLB, no specialized hardware is required, although
                    it is recommended to use a second network interface card
                    (NIC) for heartbeat traffic between the cluster nodes.
                    NLB is fully remotely controllable, so all access can be
                    managed from a single administrative workstation.

How NLB Works

                (Animation slide.)
                   Network Load Balancing employs a fully distributed
                    algorithm to statistically map incoming clients to the
                    cluster hosts based on their IP address, port, and other
                    information. NLB effectively “pre-divides” the client IP
                     space during convergence (see a later slide), so each
                    server makes an independent decision on whether to
                    accept or decline the incoming request. The details are
                    Microsoft proprietary.
                   Rather than routing incoming client requests through a
                    central host for redistribution, every NLB cluster host
                    receives each client request. A statistical mapping
                    algorithm determines which host processes each
                    incoming client request. The distribution is affected by
                    host priorities, whether the cluster is in multicast or
                    unicast mode, port rules, and the affinity set.
                   Contrary to almost every other load balancing solution
                    on the market, hardware or software-based, NLB does
                    not have any central “Decision Maker” that decides
                     which server will handle each request. This is unique
                     and allows for highly efficient traffic handling and
                     very high throughput.
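The distributed decision can be illustrated with a rough sketch. The hash used here is purely illustrative (Microsoft's actual mapping function is proprietary); the point is that every host evaluates the same deterministic function independently, so exactly one host accepts each request with no central dispatcher:

```python
import hashlib

def owner(client_ip, client_port, hosts):
    """Deterministically map a (client IP, port) pair to one host.
    Illustrative hash only - the real NLB algorithm is proprietary."""
    key = f"{client_ip}:{client_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return sorted(hosts)[digest % len(hosts)]

def accepts(host, client_ip, client_port, hosts):
    # Every host sees every packet and makes the same independent
    # decision; the "owner" handles the request, the rest drop it.
    return owner(client_ip, client_port, hosts) == host

hosts = ["host1", "host2", "host3", "host4"]
decisions = [accepts(h, "203.0.113.9", 4711, hosts) for h in hosts]
# Exactly one host in the cluster accepts this request.
```

Because no packets are forwarded between hosts and no coordinator is consulted per request, throughput scales with the cluster rather than being capped by a dispatcher.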

     Statistical Mapping Algorithm
     The assignment of a given client request to a server
     occurs on all the hosts; there is not a single host that
     centrally distributes the requests among the hosts. The
     hosts jointly use a statistical algorithm that maps incoming
     client requests to active hosts in the cluster.
     Apart from the influence of cluster and host parameter
     settings, it is possible for two successive client requests to
     be assigned to the same host during normal operation.
     However, as more client requests come into the cluster,
     distribution of client requests by the algorithm statistically
     approaches the load division specified by the Load Weight
     parameter of the relevant port rule.
      The following factors influence how the statistical
      mapping function distributes client requests:
        Host priorities
        Multicast or unicast mode
        Port rules
        Affinity
        Load percentage distribution
        Client IP address
        Client port number
        Other internal load information

     The statistical mapping function does not change the
     existing distribution of requests unless the membership of
     the cluster changes or you adjust the load percentage.

NLB Affinity Support

                       Affinity defines a relationship between client requests from
                       a single client address or from a Class C network of
                       clients and one of the cluster hosts. Affinity ensures that
                       the same host always handles requests from the specified
                       clients. The relationship lasts until convergence occurs
                       (namely, until the membership of the cluster changes) or
                       until you change the affinity setting. There is no time-out—
                       the relationship is based only on the client IP address.
                       There are three types of affinity, which you choose with
                       the Affinity setting. The Affinity setting determines which
                       bits of the source IP and IP port number affect the choice
                       of a host to handle traffic for a particular client's request.
                       The Affinity settings are as follows:
                          None. Setting Affinity to None distributes client
                           requests more evenly; when maintaining session state
                           is not an issue, you can use this setting to speed up
                           response time to requests. For example, because
                           multiple requests from a particular client can go to
                           more than one cluster host, clients that access Web
                           pages can get different parts of a page or different
                           pages from different hosts. With Affinity set to None,
                           the Network Load Balancing statistical mapping
                           algorithm uses both the port number and entire IP
                           address of the client to influence the distribution of
                           client requests. In certain circumstances, setting
                           Affinity to None is suitable when the Network Load
                           Balancing cluster sits behind a reverse proxy server.
          All the client requests have the same source IP
          address, so the port number creates an even
          distribution of requests among the cluster hosts.
        Single. When Affinity is set to Single, the entire source
         IP address (but not the port number) is used to
         determine the distribution of client requests. You
         typically set Affinity to Single for intranet sites that
         need to maintain session state. Single Affinity always
         returns each client's traffic to the same server, thus
         assisting the application in maintaining client sessions
         and their associated session state. Note that client
         sessions that span multiple TCP connections (such as
         Active Server Pages (ASP) sessions) are maintained
         as long as the Network Load Balancing cluster
         membership does not change. If the membership
         changes by adding a new host, the distribution of client
         requests is recomputed, and you cannot depend on
         new TCP connections from existing client sessions
         ending up at the same server. If a host leaves the
         cluster, its clients are partitioned among the remaining
         cluster hosts when convergence completes, and other
         clients are unaffected.
        Class C. When Affinity is set to Class C, the statistical
         mapping algorithm uses only the upper 24 bits of the
         client’s IP address. This option is appropriate for
        server farms that serve the Internet. Client requests
        coming over the Internet might come from clients
        sitting behind proxy farms. In this case, during a single
        client session, client requests can come into the
        Network Load Balancing cluster from several source IP
        addresses during a session. Class C Affinity
        addresses this issue by directing all the client requests
        from a particular Class C network to a single Network
        Load Balancing host. There is no guarantee, however,
        that all of the servers in a proxy farm are on the same
        Class C network. If the client's proxy servers are on
        different Class C networks, then the affinity
        relationship between a client and the server ends
        when the client sends successive requests from
        different Class C network addresses.
         “Class C Affinity” is commonly used when clients are
          expected to connect through proxy servers, as is typical
          for Internet sites. “Single” is the default affinity setting.
         If a node fails, all sessions on that node will break. The
          next connection attempt by the client will be
          automatically accepted by another node, and the client
          will be prompted to log on again.
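The three affinity settings differ only in which client fields feed the mapping function. A hedged sketch, again using an illustrative hash in place of the proprietary algorithm:

```python
import hashlib

def affinity_key(affinity, client_ip, client_port):
    """Which client fields feed the statistical mapping (sketch)."""
    octets = client_ip.split(".")
    if affinity == "none":
        return f"{client_ip}:{client_port}"   # entire IP plus port number
    if affinity == "single":
        return client_ip                      # entire IP, port ignored
    if affinity == "class_c":
        return ".".join(octets[:3])           # upper 24 bits only
    raise ValueError(affinity)

def pick_host(affinity, client_ip, client_port, hosts):
    key = affinity_key(affinity, client_ip, client_port).encode()
    return sorted(hosts)[int(hashlib.md5(key).hexdigest(), 16) % len(hosts)]

hosts = ["h1", "h2", "h3"]
# Single affinity: the same client IP always maps to the same host,
# regardless of source port - this is what preserves session state.
same = (pick_host("single", "10.1.2.3", 1024, hosts)
        == pick_host("single", "10.1.2.3", 9999, hosts))
# Class C affinity: two clients behind the same /24 proxy farm map together.
proxied = (pick_host("class_c", "10.1.2.3", 1024, hosts)
           == pick_host("class_c", "10.1.2.77", 5555, hosts))
```

Note the trade-off the sketch makes visible: the fewer bits that feed the key, the stickier the mapping, and the less evenly traffic spreads.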

NLB Failure Recovery/Convergence

                   Network Load Balancing hosts maintain membership in
                   the cluster through heartbeats. By default, when a host
                   fails to send out heartbeat messages within about five
                   seconds, it is deemed to have failed, and the remaining
                   hosts in the cluster perform convergence, in order to do
                   the following:
                       Establish which hosts are still active members of the
                        cluster.
                       Elect the host with the highest priority as the new
                        default host. Note that the lowest value for the Priority
                        ID host parameter indicates the highest priority among
                        the hosts.
                       Redistribute the failed host's client requests to the
                        surviving hosts.

                      No dedicated “heartbeat” NIC is required but it is
                       recommended. NLB uses the same NIC that handles
                       incoming traffic.
                      The “25%” on the right indicates the load-balancing
                       rule set up by the network admin—in this case, with
                       four servers, they indicated “Equal” load balancing.
                       This can be adjusted.
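The convergence steps above can be sketched as follows. This is a simplified model under the stated assumptions (a five-second detection window, lowest Priority ID wins); the function and host names are illustrative.

```python
# Illustrative sketch (not the real NLB protocol) of heartbeat-based
# convergence: hosts that miss heartbeats for about five seconds are
# dropped, and the surviving host with the lowest Priority ID becomes
# the new default host.

HEARTBEAT_TIMEOUT = 5.0  # seconds; NLB's default detection window

def converge(hosts: dict, last_heartbeat: dict, now: float):
    """hosts maps host name -> Priority ID (lower value = higher priority).
    Returns (surviving members, new default host)."""
    alive = {h: p for h, p in hosts.items()
             if now - last_heartbeat[h] < HEARTBEAT_TIMEOUT}
    default = min(alive, key=alive.get)  # lowest Priority ID wins
    return alive, default

hosts = {"A": 1, "B": 2, "C": 3, "D": 4}
beats = {"A": 10.0, "B": 3.0, "C": 10.0, "D": 9.0}  # B went silent
alive, default = converge(hosts, beats, now=12.0)
assert "B" not in alive and default == "A"
```

After convergence, B's share of the client requests would be redistributed across A, C, and D according to the configured load-balancing rule.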

NLB Port Rules

                 NLB Port Rules
                 NLB provides the ability to define port rules. Each port rule
                 configures load balancing for client requests that use the
                 port or ports covered by the port range parameter. Port
                 rules will override the default settings. How you load-
                 balance your applications is mostly defined by how you
                 add or modify port rules, which you create on each host
                 for any particular port range.
                     You can specify one of three possible filtering modes per
                      port or port range:
                         o Single Host: one host handles all traffic
                         o Multiple Hosts: traffic is load balanced across all
                             hosts
                         o Disabled: all traffic is filtered out
                      The slide shows a screen shot of a “Port Rule” for NLB,
                      the primary user means of defining how the NLB cluster
                      handles traffic.
                    User defines range of ports for each rule. If they want
                     to load balance only standard Web traffic, for example,
                      they would use a port range of “80 to 80”. Secure
                      traffic (Secure Sockets Layer (SSL)) uses port 443.
                    Default port rule is generic: Ports “1 to 65,535”,
                     meaning all ports, “Equal” load balancing set across
                     “Multiple Nodes” with “Single Affinity” to enable
                     session support.
                    Can balance TCP or User Datagram Protocol (UDP),
                     or both.

    User can select “Disabled” for a given range of ports.
     This prevents any incoming traffic on those ports from
     being passed up to the application.
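The port-rule matching described above can be sketched as a small lookup. The rule table, field names, and matching function are illustrative assumptions, not the actual NLB data structures.

```python
# Hedged sketch of NLB port-rule matching: each rule covers a port range
# and protocol and selects a filtering mode. Field names are illustrative.

RULES = [
    {"ports": (80, 80),   "proto": "TCP",  "mode": "multiple"},  # Web traffic
    {"ports": (443, 443), "proto": "TCP",  "mode": "multiple"},  # SSL
    {"ports": (135, 139), "proto": "both", "mode": "disabled"},  # filtered out
]
DEFAULT_MODE = "multiple"  # the generic all-ports default rule

def classify(port: int, proto: str) -> str:
    """Return the filtering mode that applies to a packet."""
    for rule in RULES:
        lo, hi = rule["ports"]
        if lo <= port <= hi and rule["proto"] in (proto, "both"):
            return rule["mode"]
    return DEFAULT_MODE  # no explicit rule: fall back to the default

assert classify(80, "TCP") == "multiple"   # load balanced across hosts
assert classify(137, "UDP") == "disabled"  # dropped before the application
```

Traffic on a port with no explicit rule falls through to the generic default rule, exactly as the default “1 to 65,535” rule does in the dialog.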

NLB Tips and Tricks

                      NLB Tips and Tricks
                         Define port filters to get the cluster up and running
                          Use the command-line tool for monitoring and control
                         Use App Monitors to gain high availability for IP-based
                          services (NetIQ, Mission Critical, BMC, Servers Alive)
                         Use a second NIC for heartbeat, cluster administration
                          (security), database (DB) requests. Separate the inner
                          and outer traffic.
                         Use rolling upgrades to make changes to the cluster

Component Load Balancing

                  Component Load Balancing
                  The Component Load Balancing (CLB) service will be built
                  into the upcoming Microsoft Application Center 2000.
                  Component Load Balancing provides the ability to
                  dynamically balance component creation requests. This
                  balancing is done using a router computer that uses a
                  response time tracking service to determine which cluster
                  member has the highest capacity.
                  CLB is transparent to clients. Existing applications (even
                  MTS applications on NT4) do not need to be rewritten to
                  take advantage of CLB load balancing. The only place
                  where CLB is actually installed will be on the CLB router
                  itself. CLB requires Active Directory and assumes a
                  uniform configuration on all cluster members.
                  It supports up to eight nodes and provides high availability
                  and scalability for transactional components. A single
                  router is used per cluster. To secure the router you can
                  use Microsoft Cluster Service.

CLB Operation

                CLB Operation
                Animation Slide
                The basic operation of CLB is shown on this slide.
                 1. An inbound request to instantiate a component is
                    received by the CLB router.
                 2. The router decides on the most appropriate server
                    (using a variable set of measurements; in this
                    example, CPU load is shown).
                 3. The router sends the request (instantiates the object)
                    to Server A.
                 4. If the client holds on to the object reference, it then
                    communicates directly with Server A.
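Step 2, the routing decision, can be sketched as picking the member with the most spare capacity. This is a minimal illustration assuming a single CPU-load metric; the function name and member names are hypothetical, and the real response-time tracking service is more involved.

```python
# Minimal sketch of the CLB routing idea: the router tracks a load metric
# per cluster member and sends the activation request to the member with
# the most spare capacity. Names and metric are illustrative.

def route_activation(cpu_load: dict) -> str:
    """Pick the cluster member with the lowest current CPU load."""
    return min(cpu_load, key=cpu_load.get)

members = {"ServerA": 0.20, "ServerB": 0.65, "ServerC": 0.90}
target = route_activation(members)
assert target == "ServerA"  # the least-loaded member receives the activation
```

Because the decision is made per component-creation request, a client that keeps its object reference keeps talking to the same server afterwards, as step 4 describes.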

CLB Failover

               CLB Failover
               A special problem occurs when the CLB router itself fails.
               As we can see, the CLB router is a single point of failure
               in this scenario. To avoid failing the complete cluster
               along with the router, router failure is handled by
               Microsoft Cluster Service. Application server failure is
               handled by the CLB router, which detects the failure and
               instantiates the object on a different server.

CLB Considerations

                     CLB Considerations
                         CLB will not provide peak performance, because
                          Distributed Component Object Model (DCOM) calls
                          across the network are slower than local COM calls
                        CLB has a single point of failure (no high availability)
                        CLB is well suited for large backend applications
                         providing services for a lot of different clients with
                         different kinds of presentation layers
                         Change management will be vital in a CLB cluster
                        Establish a tight and well-designed security model
                         (who can call whom)

The Graphical View of Microsoft Clustering

                     The Graphical View of Microsoft Clustering
                     This slide shows a graphical view of MS clustering
                     technologies. From back-end to the client you can see
                     different technologies in action:
                        On database systems and file servers you should use
                         the Microsoft Cluster Server to secure your machines
                         against failure. The MSCS support for FibreChannel
                         storage is needed here.
                        In the middleware you will use Component Load
                         Balancing to balance the traffic to middleware
                         components caused by the web servers. To secure the
                         CLB router, use MSCS.
                        On the web servers you may create a server farm
                         using Network Load Balancing.
                        The content staging server is shown as an optional
                         component. “CRS” means Content Replication Server
                         currently found in Microsoft® Site Server Commerce
                          Edition. However, CRS is merely an example of a
                          content staging server, not a requirement.

Example: Online Bookstore

                   Example: Online Bookstore
                    An example of such a configuration is this online
                    bookstore. It uses:
                      An 8-node NLB cluster running Internet Information
                       Services (IIS)
                       A staging server (non-production) for page changes
                      Application monitoring tools to manage the clusters
                      CLB with three application servers and two routers
                       (clustered with MSCS)
                      2-node Cluster Service cluster with SQL Server 7.0
                       (stores transactions as they are committed to the DB)

Improving Performance with Load Balancing

                   Improving Performance with Load Balancing
                   Here are some tips on how you can improve your
                   performance with load balancing:
                   When using Network Load Balancing you should use web
                   nodes as application servers and run components in
                   process with your ASP application.
                   For CLB and MSCS: Use high-speed networks in the
                    backbone. Use the fewest possible method calls. Acquire
                    resources as late as possible, and release them as early
                    as possible.

Design Issues of Clustering

                     Design Issues of Clustering
                     You can easily cluster your application when you pay
                     attention to clustering in your design:
                         The most important thing is state management. You
                          should never hold state in your web pages or middle-
                          tier components, because that prevents clustering.
                          Hold durable state only in the database.
                        Implement your application so that it is aware of
                         clusters. Your servers may be unavailable for a short
                         time due to failover operation, so you should retry
                         operations on failure instead of giving up. Your
                         application can also use the Cluster API or Windows
                         Management Instrumentation (WMI) to check the
                         cluster state.
                         Do the advanced error handling required in a server farm.
                         You should gather all important error information. You
                         should even consider using run-time debugging/tracing
                         for single components.
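The retry-on-failure guidance above can be sketched as a small wrapper. This is a hypothetical pattern, not an API from the Cluster API or WMI; the helper and the simulated operation are illustrative.

```python
# Sketch of a cluster-aware client: during a failover the server is
# briefly unavailable, so the client retries with a short delay instead
# of giving up on the first error. The operation shown is hypothetical.

import time

def call_with_retry(operation, attempts=5, delay=0.5):
    for i in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if i == attempts - 1:
                raise  # still failing after all retries: give up
            time.sleep(delay)  # wait for the failover to complete

state = {"tries": 0}
def flaky():
    state["tries"] += 1
    if state["tries"] < 3:          # simulate two failures during failover
        raise ConnectionError("node failing over")
    return "committed"

assert call_with_retry(flaky, delay=0.01) == "committed"
```

In a real application, the same wrapper would surround database or component calls, and the error handler would also log the failure so the farm-wide error information the text asks for is gathered.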


             Clustering is a challenging infrastructure topic
             It also has its application design issues
             It gives you performance and high availability
