A computer cluster is defined as two or more computers connected together to provide a higher level of availability, reliability, and scalability than can be obtained by using a single computer. Clusters can assist in the availability and reliability of the following:
• Application and service failures – when an application or service fails, the
cluster will attempt to restart the application or service.
• System and hardware failures – when hardware components such as CPUs, drives, memory, network adapters, or power supplies fail, another node in the cluster will take over the applications or services the failed node was running, without disruption.
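The failover behaviour described above can be illustrated with a toy model. This is a sketch only, not an actual cluster service implementation; the node and service names are hypothetical:

```python
# Toy model of cluster failover: when a node fails, a surviving node
# takes over the services the failed node was running.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.services = []

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def handle_failure(self, failed):
        """Move every service from a failed node to a surviving node."""
        failed.healthy = False
        survivors = [n for n in self.nodes if n.healthy]
        if not survivors:
            raise RuntimeError("no surviving nodes")
        for service in failed.services:
            survivors[0].services.append(service)
        failed.services = []
        return survivors[0]

node_a, node_b = Node("node-a"), Node("node-b")
node_a.services = ["minitanks-server"]   # hypothetical clustered service
cluster = Cluster([node_a, node_b])
new_owner = cluster.handle_failure(node_a)
print(new_owner.name, new_owner.services)  # node-b ['minitanks-server']
```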
A computer cluster's ability to handle failures allows for high availability: the ability to provide users with access to a service or application with minimal disruption during a failure.
In a server cluster, each server owns and manages its local devices and has a copy of
the operating system and the applications or services that the cluster is managing.
Devices common to the cluster, such as disks in common disk arrays and the
connection media for accessing those disks, are owned and managed by only one
server at a time. For most server clusters, the application data is stored on disks in one
of the common disk arrays, and this data is accessible only to the server that currently
owns the corresponding application or service.
Computer clusters are designed so that the servers in the cluster work together to
protect data, keep applications and services running after failure on one of the servers,
and maintain consistency of the cluster configuration over time.
Computer clusters require networks that run on IP-based protocols. Computer clusters
also depend on a name resolution service such as Windows Internet Name Service
(WINS), the Domain Name System (DNS), or both. IP broadcast name resolution can also be used; however, this method is ineffective in routed networks because of the increased network traffic it generates.
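The name resolution a cluster depends on can be demonstrated with a minimal forward lookup; `socket.gethostbyname` consults whatever resolution services the host is configured with (DNS, hosts file, etc.):

```python
# Minimal illustration of host-name resolution, which cluster nodes
# rely on via DNS or WINS: resolve a name to an IPv4 address.
import socket

address = socket.gethostbyname("localhost")
print(address)  # the loopback address, e.g. 127.0.0.1
```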
Clusters require that each node has its own static internal and external IP addresses. The internal network is used for a cluster heartbeat, which detects a failure in a node.
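The heartbeat idea can be sketched as follows: a node is declared failed once it misses a threshold number of consecutive heartbeats. The threshold value here is an arbitrary assumption for illustration, not a Windows clustering default:

```python
# Toy heartbeat monitor: counts missed heartbeats per node and declares
# a node failed once the missed count reaches a threshold.

class HeartbeatMonitor:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.missed = {}

    def beat(self, node):
        """A heartbeat arrived: reset the node's missed counter."""
        self.missed[node] = 0

    def tick(self, node):
        """Called for each interval in which no heartbeat arrived."""
        self.missed[node] = self.missed.get(node, 0) + 1

    def is_failed(self, node):
        return self.missed.get(node, 0) >= self.threshold

monitor = HeartbeatMonitor()
monitor.beat("node-a")
for _ in range(3):          # three intervals with no heartbeat
    monitor.tick("node-a")
print(monitor.is_failed("node-a"))  # True
```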
What is a quorum disk?
A main feature of a computer cluster is the shared quorum disk. The quorum disk must be a shared storage device, kept separate from the rest of the shared storage. It is normally a minimum of 200 MB in capacity, and it typically stores a log file for the cluster along with the application requiring clustering.
What are the different Types of Clustering?
There are three different types of computer clusters. Each cluster design is determined by the requirements and budget of the business.
The first type of cluster is called a High Availability Cluster.
• High Availability clusters aim to solve the problems that arise from mainframe
failure in an enterprise. Rather than lose all access to IT systems, High
Availability clusters ensure 24/7 access to applications. This type of clustering is used especially where important applications need to be available at all times. The cluster would be designed to maintain redundant nodes, which would act as a backup system in the event of a failure.
The second type of clustering is called Network Load Balancing.
• Load-balancing clusters operate by routing all work through one or more load-
balancing front-end nodes, which then distribute the workload efficiently
between the remaining active nodes. Load-balancing clusters are typically used in environments with limited funding, because the workload can be evenly distributed amongst a few low-budget computers to optimise processing power.
The third type of clustering is called High Performance Clusters.
• High Performance Clusters are designed to exploit the parallel processing
power of multiple nodes. They are mostly used where nodes are required to
communicate as they perform their tasks – for example, when output of results
from one node will affect future results from another.
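The front-end distribution described for load-balancing clusters can be sketched as a simple round-robin dispatcher. This is an illustration only; real Network Load Balancing uses its own distribution algorithm:

```python
# Round-robin dispatch: assign each incoming request to the next node
# in rotation, so work is spread evenly across the active nodes.
from itertools import cycle

def dispatch(requests, nodes):
    """Pair each request with a node, cycling through the node list."""
    rotation = cycle(nodes)
    return [(request, next(rotation)) for request in requests]

assignments = dispatch(["r1", "r2", "r3", "r4"], ["node-a", "node-b"])
print(assignments)
# [('r1', 'node-a'), ('r2', 'node-b'), ('r3', 'node-a'), ('r4', 'node-b')]
```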
What are the Hardware & Software Requirements of a Cluster?
• The hardware for a Cluster service node must meet the hardware requirements for Windows 2000 Advanced Server, Windows 2000 Datacenter Server, Windows Server 2003 Enterprise Edition, or Windows Server 2003 Datacenter Edition. These requirements are listed in the table below.
• At least two computers, each with the following:
• A separate PCI storage host adapter (SCSI or Fibre Channel) for the shared
disks. This is in addition to the hard drive with the operating system installed.
• Two PCI network adapters on each machine in the cluster, one for the internal
cluster network and one for the external client network.
• An external disk storage unit that connects to all computers. This will be used
as the quorum disk (clustered disk). A redundant array of independent disks
(RAID) is recommended but not essential.
• Storage cables to attach the shared storage device to all computers.
• All hardware should be identical, slot for slot, card for card, for all nodes. This
will make configuration easier and eliminate potential compatibility problems.
Microsoft Windows Server 2003 R2 Datacenter Edition
Computer and processor: 400 MHz processor required; 733 MHz processor recommended
Memory: 512 MB of RAM required; 1 GB of RAM recommended; 128 GB maximum for x86-based computers; 2 TB maximum for x64-based and Itanium-based computers
Hard disk: 1.2 GB for network install; 2.9 GB for CD install

Microsoft Windows Server 2003 Enterprise Edition
Computer and processor: 133 MHz or faster processor for x86-based PCs; 733 MHz for Itanium-based PCs; up to eight processors supported on either the 32-bit or the 64-bit version
Memory: 128 MB of RAM minimum required; maximum 32 GB for x86-based PCs with the 32-bit version and 64 GB for Itanium-based PCs with the 64-bit version
Hard disk: 1.5 GB of available hard-disk space for x86-based PCs; 2 GB for Itanium-based PCs; additional space is required if installing over a network
Drive: CD-ROM or DVD-ROM drive
Display: VGA or hardware that supports console redirection required
• Microsoft Clustering Services runs only on Windows 2000 Advanced Server, Windows 2000 Datacenter Server, Windows Server 2003 Enterprise Edition, or Windows Server 2003 Datacenter Edition; the Windows Server 2003 editions are recommended.
• A name resolution method such as the Domain Name System (DNS), Windows Internet Name Service (WINS), HOSTS files, etc.
• Terminal Server to allow remote cluster administration is recommended.
• A unique NetBIOS cluster name.
• Five unique, static IP addresses: two for the network adapters on the private network, two for the network adapters on the public network, and one for the cluster itself.
• A domain user account for the Cluster service (all nodes must be members of the same domain).
• Each node should have two network adapters: one for connection to the public network and the other for the node-to-node private cluster network. If you use only one network adapter for both connections, your configuration is not supported.
Shared Disk Requirements:
• All shared disks, including the quorum disk, must be physically attached to a shared bus.
• Disks attached to the shared bus must be visible from all nodes.
• SCSI devices must be assigned unique SCSI identification numbers and must be properly terminated.
• All shared disks must be configured as basic (not dynamic).
• All partitions on the disks must be formatted as NTFS.
• While not required, the use of fault-tolerant RAID configurations is strongly recommended for all disks. The key concept here is fault-tolerant RAID configurations, not stripe sets without parity.
What are the Benefits of Clustering?
Computer clusters offer a number of benefits over mainframe computers, including:
• Reduced Cost: The price of today’s personal computers has dropped significantly in the past few years, while their processing power has improved. By using clustering, you can use multiple personal computers to create a cluster as opposed to spending vast amounts of money on a single high-powered server.
• Processing Power: The combined processing power of a high-performance
cluster can, in many cases, prove more cost effective and efficient than a
server with similar power.
• Improved Network Technology: Computer clusters are typically connected via a single virtual local area network (VLAN), and the network treats each computer as a separate node. Information can be passed throughout these networks with very little lag, ensuring that data does not become congested between nodes.
• Scalability: One of the biggest advantages of computer clusters is the scalability they offer. While server computers have a fixed processing capacity, computer clusters can easily be expanded as requirements change by adding additional nodes to the network.
• High Availability: With Cluster service, ownership of resources such as disk
drives and IP addresses is automatically transferred from a failed server to a
surviving server. When a system or application in the cluster fails, the cluster
software restarts the failed application on a surviving server, or disperses the
work from the failed node to the remaining nodes. As a result, users
experience only a momentary pause in service.
• Failback: Cluster service automatically re-balances the workload in a cluster
when a failed server comes back online.
• Manageability: When using Microsoft Windows Clustering Services you can
use the Cluster Administrator to manage a cluster as a single system and to
manage applications as if they were running on a single server. You can move
applications to different servers within the cluster by dragging and dropping
cluster objects. You can move data to different servers in the same way. This
can be used to manually balance server workloads and to unload servers for
planned maintenance. You can also monitor the status of the cluster, all nodes
and resources from anywhere on the network.
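The transfer of resource ownership described under High Availability and Failback can be sketched as a toy model; the resource and node names are hypothetical:

```python
# Toy model of failover and failback of cluster resource ownership
# (e.g. disks and IP addresses).

preferred_owner = {"quorum-disk": "node-1", "cluster-ip": "node-1"}
current_owner = dict(preferred_owner)

def fail_over(current, failed_node, survivor):
    """Transfer every resource owned by the failed node to a survivor."""
    for resource, owner in current.items():
        if owner == failed_node:
            current[resource] = survivor

def fail_back(current, preferred, restored_node):
    """Return each resource whose preferred owner has come back online."""
    for resource, owner in preferred.items():
        if owner == restored_node:
            current[resource] = owner

fail_over(current_owner, "node-1", "node-2")
print(current_owner)   # both resources now owned by node-2
fail_back(current_owner, preferred_owner, "node-1")
print(current_owner)   # ownership returned to node-1
```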
The task given was to plan and create a cluster design for the game Minitanks. The cluster had to have a minimum of four nodes, and the availability of hardware and funds was limited.
Which cluster was chosen and why?
The High Availability cluster design was selected from among the different types of clusters.
The difference between High Availability Clustering and Network Load Balancing is that in a High Availability cluster, the servers are linked to a common hard disk array through a SCSI interface, while in a Network Load Balancing cluster, each server functions as an independent entity. In a gaming environment, a player might not be able to choose who to play against if the game-play were on another server. This is why the High Availability cluster was chosen. Also, if a server were to fail, a different node would take over from the failed node, and everything on the hard disk would still be accessible.
The game itself relied on a gaming engine called Torque. Being a Microsoft setup, a decision was made to use Microsoft Clustering Services, which is integrated with Windows Server 2003 Enterprise Edition. Although Windows Server 2008 Enterprise Edition provided the same service, it used too much memory and CPU, especially when running the cluster virtually.
Which Virtualization software did we use to implement the cluster?
Four different products were selected for review for the virtualization of the cluster.
• Microsoft Virtual PC 2007 had no facility to setup a shared Quorum drive.
• Microsoft Virtual Server 2005 allowed for a Quorum drive to be created but only had two SCSI channels; a four-node cluster would require four SCSI channels.
• Citrix XenServer, installed on a machine with a 64-bit processor and a motherboard with virtualization technology, acts as an operating system but does not allow sharing of SCSI disks.
• VMware, with a little modification to the virtual machine’s configuration file, provided four SCSI channels, which allowed the creation of a shared Quorum disk that all four nodes could access.
To set up the cluster in VMware Server, the following documentation was used as a guide:
http://www.blkmtn.org/files/VMware clustering 1.0.pdf
What problems did we encounter when creating the cluster?
There were various problems encountered during the creation process of the virtual
cluster. This was mainly due to the virtualization of the cluster and the standard
hardware devices VMware creates when an operating system is installed.
• Microsoft Virtual PC 2007 did not allow for a shared SCSI disk.
• Microsoft Virtual Server 2005 only allowed a two node configuration of the
quorum shared disk.
• Citrix XenServer running Windows Server 2003 virtually was able to run the
Minitanks.bat file and clients could run the game successfully, but XenServer
does not allow the creation and sharing of virtual iSCSI disks.
• The 180-day evaluation of Windows Server 2003 Enterprise Edition, installed virtually, could not be activated through the proxy at West Coast TAFE.
• VMware creates a virtual graphics card, which does not work with the Minitanks dedicated server because the virtual hardware cannot run the game’s graphics.
The solutions we used to solve our problems
The following solutions to the previously stated problems were found effective while implementing the virtual cluster:
• VMware was used to solve the problem of the two node limitation of
Microsoft Virtual Server 2005. It also supplied the ability to create a shared
SCSI disk, a limitation found in Citrix Xenserver.
• The following steps had to be followed in order for the shared Quorum disk to work:
1. The .VMX file needed to be edited and the following line added to the bottom of the configuration file:
disk.locking = "FALSE"
This line needed to be in the configuration file of each of the nodes to be clustered; it stops the virtual machine from locking the drive, allowing the drive to be used as a shared drive. However, the first node in the cluster needs to be set up before starting any other nodes; failure to do so will result in corruption of the shared disk.
2. The quorum shared disk, assigned as a SCSI disk, needed to be given the “Independent/Persistent” property in VMware.
3. The quorum shared disk must also be configured to be on the same SCSI channel on all the virtual machines, i.e. SCSI 0:1.
4. The operating system for each of the cluster computers was placed on a virtual SCSI disk, which needed to be located on a different controller from that of the quorum disk.
5. The quorum shared disk needed to be made active on each virtual computer.
6. A user account for the cluster had to be created on the domain so that the cluster nodes could communicate with each other.
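For steps 1–3 above, the corresponding .VMX additions for one node might look like the following sketch; the disk file name and exact option spellings are assumptions and should be checked against the VMware clustering guide linked earlier:

```
# Hypothetical .VMX fragment for one cluster node (sketch only)
disk.locking = "FALSE"                    # step 1: do not lock the shared disk
scsi0:1.present = "TRUE"                  # step 3: quorum disk at SCSI 0:1
scsi0:1.fileName = "quorum.vmdk"          # hypothetical shared disk file
scsi0:1.mode = "independent-persistent"   # step 2: Independent/Persistent
```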
• To activate the 180-day evaluation of Windows Server 2003 Enterprise Edition running as virtual machines, a direct connection to the internet with no proxy was needed.
• Windows Server 2003 Enterprise Edition with DirectX 9 was installed on a physical machine and the Minitanks dedicated server application was run. The application ran successfully, allowing game play, which narrowed the problem down to a VMware virtual hardware issue.