VMware Infrastructure 3.5 // 06.01.08
VMware Infrastructure 3.5 Review                                   2


0. Introduction
1. Infrastructure specifications
2. VMware Infrastructure installation and configuration
3. Virtual machines templates and instances creation
4. Virtual machines administration
5. V2V and P2V migrations with VMware Converter
6. Capacity planning with Guided Consolidation
7. Hosts and guests patch management with VMware Update Manager
8. Virtual machines backup with VMware Consolidated Backup
9. High availability with VMware HA, VMotion and Storage VMotion
10. Access control and auditing
11. Power management with VMware Distributed Power Management
12. Conclusions

0. Introduction
In December 2007 VMware released a key update for its flagship product: VMware Infrastructure (VI) 3.5.

The foundation layer of VI 3.5 is its hypervisor: ESX Server 3.5. On this product depends the capability to
abstract the physical hardware and run virtual machines. ESX Server 3.5 offers some basic administration
capabilities which are greatly enhanced by an additional management layer found in VirtualCenter 2.5.

VMware offers ESX Server and VirtualCenter together under the name of VMware Infrastructure, but in the
last couple of years the package was vastly enriched with additional modules, and today VI 3.5 is able to offer
as many features as a sophisticated physical datacenter.

Compared to VI 3.0, this version introduces a massive amount of improvements which boost the overall
product's value. These changes alone would be enough to fill an entire review, but we
reviewed the entire product rather than treating it as an upgrade. Therefore this document targets potential
customers as well as experienced users.

Before starting, it's worth stating that the vastness of VMware Infrastructure 3.5 makes it impossible to
provide an exhaustive review of the entire suite without writing a book. We recommend further investigation to completely understand the product's potential.

1. Infrastructure specifications

Virtualization infrastructure: VMware Infrastructure 3.5

Virtualization Platform                   ESX Server 3.5 (build 64607)

Management Platform                       VirtualCenter 2.5 (build 64201)

Patch Management Platform                 Update Manager 1.0 (build 63959)

Backup Platform                           Consolidated Backup 1.1 (build 64559)

P2V/V2V Migration Platform                Converter Enterprise 4.0 (build 62386)

Server: IBM System x3650

Form factor                                Rack 2U

Processor                                  Dual-Core Intel Xeon 5160 @ 1.6 GHz with 1066 MHz front-side bus

Number of processors                       1

Level 2 (L2) cache                         2x2 MB

Memory                                     9 GB Fully Buffered DIMM 667 MHz

Expansion slots                            4 PCI-Express

Populated Disk bays                        One 3.5”

Internal storage                           73 GB hot-swap Serial Attached SCSI (SAS) at 15,000 RPM

RAID support                               Integrated RAID-0, -1, -10

Network Interface                          Integrated Broadcom NetXtreme II dual Gigabit Ethernet (GbE)

Host Bus Adapter (optional)                Emulex LPe1150 Single Channel 4 Gbps PCI Express

Power Supply                               832W 2 AC standard

Hot-swap components                        Power supply, fans and hard disk drives

Management Software                        IBM Systems Director Active Energy Manager for x86, Integrated
                                           Service Processor, diagnostic light-emitting diodes (LEDs), drop-down
                                           light path diagnostics panel, Automatic Server Restart, IBM Director,
                                           IBM ServerGuide

Storage Area Network (SAN): IBM System Storage DS3400

Form factor                                Rack 2U

RAID controller                            Dual active

Cache per controller                       512MB battery-backed cache

Host interface                             Two host ports per controller, Fibre Channel (FC) 4 Gbps auto-sensing
                                           1 Gbps/2 Gbps

Drive interface                            Serial Attached SCSI (SAS)

Internal storage                           4 Gbps 1.5 TB SATA drives at 7,200 RPM speed

RAID support                               RAID-0, -1, -3, -5, -10

Storage partitions                         4, 16

Power supply and fans                      Dual-redundant, hot-swappable

Management software                        IBM System Storage DS3000 Storage Manager

Fabric Switch: IBM TotalStorage SAN16B-2

Form factor                                Rack 1U

Port capacity                              16 Fibre Channel (FC) 1, 2 and 4 Gbps short and long wavelength

Fibre Channel (FC) interface               E_Port, F_Port, FL_Port

Optical transceivers                       4 Gbps short wavelength (SWL)

Power supply and fans                      Single power supply, dual fans

Hot-swap components                        SFP optical transceivers

Management software                        Advanced Zoning, Web Tools EZ

2. VMware Infrastructure installation and configuration

ESX Server and VirtualCenter installation
VMware Infrastructure 3.5 is an enterprise technology and requires some enterprise equipment. Almost
every part of the infrastructure must be certified on VMware's Hardware Compatibility List (HCL),
including the servers, the network cards, the host bus adapters, the storage array and even the fabric
switch. This implies careful planning of the hardware purchase, considering both current requirements and
future scalability issues.

All the equipment used for this review was certified by VMware for VI 3.5, so the ESX
Server 3.5 installation process correctly recognized and configured each device (SCSI array controller, Gigabit
Ethernet network interface cards, and host bus adapters) without any manual intervention.

The ESX Server setup was straightforward and took about 15 minutes to complete.

Compared to ESX Server, the VirtualCenter 2.5 requirements are less rigid: the product can be installed on
any Windows Server operating system as long as a database server is available on the network.
If none is, administrators can have the setup install a local copy of Microsoft SQL Server 2005 Express.

While not required, an Active Directory domain is highly recommended to achieve centralized access
management. The presence of Active Directory also simplifies the use of some tools like the new Guided
Consolidation. To let VirtualCenter use the AD user database, the server must be installed as part of the domain.

The VirtualCenter installation process is as easy as the ESX Server one, despite requiring a much longer time
(over 1 hour in our tests). For this review we performed a custom installation, requesting a local
SQL Server 2005 instance but no additional VirtualCenter plug-ins (which would be installed at a later time).

VI 3.5 comes with an embedded 60-day trial license. It allows using the entire platform without
limitations, except for cold P2V migrations.

Replacing the evaluation license with the purchased one is probably the most cumbersome task in the
entire VI 3.5 management routine.

First, administrators are required to connect online and register serial numbers for purchased components
and features. These products will then be included in a licensing pool. At this point administrators can select
all or some of them to generate a license file, which they will have to manually import inside the
VirtualCenter tier. Here the license file is handled by a Macrovision Flex server, which administrators
can decide to install locally during the VirtualCenter setup. Once the license file has been manually copied into
a specific directory, it must be rescanned through a dedicated GUI included with the Flex server.

Only at this point can VirtualCenter be configured to use the licensing server and its new license file,
finally seeing the list of products and features purchased by a customer. The process is not complete yet
anyway: as soon as a new virtualization host is added to the infrastructure, administrators have to assign it
one or more licensed features among those available.

This long procedure becomes even more complex when a customer buys new features: additional license
files must be generated online, merged with the existing ones and resubmitted to the licensing servers
before they become available inside VirtualCenter.

Infrastructure configuration
Once logged inside VirtualCenter, the very first step to configure the infrastructure involves creating one or
more datacenters. Datacenters are simply logical units where multiple ESX Servers and virtual machines can
be grouped together to simplify administration. VI 3.5 introduces a series of screens which explain the basic
terminology and suggest which steps to take in order to reach an operational status.

Administrators can now add all available ESX Servers through a simple wizard.

Each host can be added just defining its IP address (or its FQDN) along with the local administrator
credentials (typically the root account). If the host already stores some virtual machines they will be
immediately recognized and added to the VirtualCenter inventory.

At this point each host must be configured to correctly expose physical networks and storage facilities inside
current and future virtual machines.

The storage configuration consists of two panels. The first one is used to check the details about all available
storage adapters (local SCSI adapters, Fibre Channel HBAs, hardware iSCSI HBAs, and even iSCSI software
initiators), allowing administrators to verify which local and remote volumes are accessible.

The second one actually manipulates the accessible volumes, giving administrators the ability to create
datastores (format volumes with VMware’s proprietary Virtual Machine File System, or VMFS), rename
datastores, and delete datastores. This is also where administrators can define connection to Network File
System (NFS) volumes.

While VirtualCenter 2.5 exposes quite a bit of information about the reachable storage facilities (including
things like the raw capacity, the available paths or the WWPN name) to simplify troubleshooting, it doesn’t
offer an out-of-the-box storage management solution so administrators have to rely on 3rd party solutions to
manipulate the SAN and/or NAS devices at low level.

Likewise, the network configuration follows a similar approach. The first panel details the available NICs and
their optional capabilities (like Wake-On-LAN) while the second panel allows virtual networking design.

The configuration capabilities offered by VI 3.5 in this section are notable: each ESX Server can expose up to
one hundred virtual switches—known as vSwitches—which are fully isolated from each other and able to
offer enhanced capabilities like support for promiscuous mode, Quality of Service (QoS) engine, and load
balancing. Also new to VI 3.5 is support for Cisco Discovery Protocol (CDP) in the vSwitches.

Additionally, ESX Server 3.5 supports VLAN tagging and trunking, allowing easy integration with the
physical infrastructure without losing critical segmentation capabilities.
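As a concrete illustration of what VLAN tagging means at the frame level (this is the IEEE 802.1Q standard, not VMware-specific code), a minimal Python sketch can extract the VLAN ID and priority from a tagged Ethernet frame:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value marking an 802.1Q-tagged frame

def parse_vlan_tag(frame: bytes):
    """Return (vlan_id, priority) if the Ethernet frame carries an
    802.1Q tag, or None for an untagged frame."""
    if len(frame) < 18:
        return None
    # Bytes 12-13 hold the EtherType; 0x8100 means a VLAN tag follows.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != TPID_8021Q:
        return None
    # The next two bytes pack priority (3 bits), DEI (1 bit), VLAN ID (12 bits).
    tci = struct.unpack("!H", frame[14:16])[0]
    return tci & 0x0FFF, tci >> 13
```

A trunk port simply carries frames with different VLAN IDs side by side; the vSwitch uses this tag to keep the segments isolated.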

This range of options enables users to reproduce a wide range of networking scenarios but currently exposes
a big limitation: the virtual switches are pretty basic; they are all unmanaged and don't provide critical
features like port monitoring.

A last aspect that administrators may want to configure about ESX Server hosts is the network time
synchronization. This is a critical feature for all those companies enforcing centralized log management and
auditing (as well as deploying security solutions like IDS) and the fact that ESX Server comes with an
embedded Network Time Protocol (NTP) client is highly appreciated.

The operations performed during the review revealed that VirtualCenter doesn't immediately process
commands, but queues them. In some cases parallel execution is prevented and administrators have to
wait until the previous task is completed before issuing a new one.

In either case, the progress for each activity is displayed in a dedicated pane at the bottom of the
VirtualCenter window.

3. Virtual machines templates and instances creation

Virtual machines creation
After configuring the virtualization hosts, administrators can proceed to create virtual machines.
A recommended first step consists in collecting the images of all installation CDs and DVDs the
administrators may need to install over time inside virtual machines.

In VI 3.5 VMware offers a specific section inside the VirtualCenter GUI which allows administrators to browse
the available storage volumes, called datastores, and upload files to (or download files from) them.
This is the Datastore Browser.

A good practice is to create a dedicated LUN where users can store ISO images of operating system and backend
server CDs or DVDs. For those unfamiliar with the concept, an ISO image is a file-based image of the
filesystem of a CD or DVD.
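As a hypothetical helper (not part of VI 3.5), a short Python sketch can sanity-check a file before it is uploaded to the ISO LUN, using the ISO 9660 rule that volume descriptors start at sector 16 and carry the magic string CD001:

```python
ISO9660_PVD_OFFSET = 16 * 2048  # volume descriptors start at sector 16 (2 KB sectors)

def looks_like_iso(path: str) -> bool:
    """Cheap sanity check: an ISO 9660 image carries the magic string
    'CD001' right after the descriptor-type byte at sector 16."""
    try:
        with open(path, "rb") as f:
            f.seek(ISO9660_PVD_OFFSET + 1)
            return f.read(5) == b"CD001"
    except OSError:
        return False
```

Running such a check before uploading avoids wasting datastore space on mis-named or truncated images.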

The Datastore Browser doesn't seem completely integrated inside VirtualCenter, since some operations (the
upload) don't appear in the Tasks queue or the Events registry, while others (the rename) are reported in
a misleading way (as a move, in this case). This makes it very complex to track down which manipulations
are performed inside the storage facility.

Once this optional step is completed, the administrator can start creating the virtual machines that will be used
as templates.

For each new virtual machine, administrators have to define the logical datacenter in which the virtual
machine will be located, the name, the storage facility to use (choosing between local and remote ones), the
guest operating system to install and the entire list of virtual hardware to expose. This list includes the
number of virtual CPUs, the amount of virtual RAM, the number of virtual NICs, the kind of virtual SCSI
storage adapter to use, the kind of virtual disk to use along with its size and location, and much more.

Additionally, each VM’s removable media device (floppy and CD/DVD drives) can be mapped to the ESX
Server host device, to the client device (where the administrator is running its management console), or to
an ISO image saved on the storage facility.

Some virtual components have limitations (for example a VM cannot have more than four virtual NICs) but
VMware still offers the richest set of virtual hardware among competitors.

VMware also excels in guest operating system support, the broadest available on the market today,
which includes some Microsoft operating systems (Windows 95, for example) that not even Microsoft
supports anymore inside its virtualization platforms.

As soon as a virtual machine has been created it can be converted into a template. The process simply consists
of a right-click on the powered-off virtual machine. After this operation the virtual machine disappears from
the Hosts & Clusters inventory window and is relocated to a dedicated section, the Virtual Machines
inventory, where it can be further modified.

The real benefit of converting a VM into a template is being able to customize the guest operating system on
the fly, at deployment time, without having to enter the OS and manually modify the configuration.
To achieve this goal on Windows virtual machines, VMware relies on the Microsoft Sysprep utility.
Administrators have to download it from the Microsoft website (or retrieve it from the Windows installation
CD), extract it, and put its contents inside a specific directory of the VirtualCenter installation path.

This step can be skipped but administrators would lose the capability to define account credentials,
activation key, network settings, etc., for each new template deployment.

Once the VirtualCenter is equipped with Sysprep any existing template can be deployed with a
straightforward wizard.

The required customizations will be applied the first time the new template instance will boot.

To address large-scale deployment needs, administrators can even use a Customization Manager, which is
able to store different pre-defined customization sets that can be invoked anytime during a new template deployment.

The template customization is a must-have to speed up provisioning operations, but the process has
some limitations depending on the guest OS type and role (if it's a domain controller, for example), on the
version of ESX Server in use, and on the version of VMware Tools installed inside the template OS.

Additionally, template management in itself is not very flexible: each time a change is due in the guest
OS, the template must be converted into a standard virtual machine, modified, and then converted back into a
template.

4. Virtual machines administration

Virtual machines resources control
One of the first questions coming from new administrators is about how many virtual machines can be
executed at the same time on a given ESX Server host. Unfortunately it's impossible to provide a definitive answer.

Such a proportion, called the VM-per-core ratio, in fact depends on several factors like the application workload
expected inside each virtual machine, the inherent capabilities of the hypervisor, and obviously the physical
resources available on the virtualization host. This prevents administrators from guaranteeing that a certain
virtual machine will always have a certain level of performance, or in other words prevents the definition
and enforcement of any service level agreement (SLA).

To address this challenge, VI 3.5 offers a wide range of settings to control virtual machine resource allocation.

For each VM, administrators can define a number of shares for CPU, memory and disk. These shares are
used to compare different virtual machines: the highest amount tells ESX Server which VM has the highest
priority in accessing the physical resources when an over-commitment happens. The priority is based on the
ratio of assigned shares to the total number of shares given.
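The share mechanism is simple proportional arithmetic; a minimal sketch (illustrative only, not VMware's actual scheduler) shows how contended CPU capacity would be split:

```python
def cpu_allocation_mhz(shares: dict, capacity_mhz: float) -> dict:
    """Split contended CPU capacity across VMs proportionally to their
    shares: each VM's entitlement is its shares over the total."""
    total = sum(shares.values())
    return {vm: capacity_mhz * s / total for vm, s in shares.items()}
```

With shares of 2000/1000/1000 over an 8000 MHz host under full contention, the first VM would be entitled to half the capacity and the other two to a quarter each.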

For even greater control, for the three resources above administrators can also define reservations and limits.

Reservations specify the minimum amount of CPU and/or memory that a VM must have to start, and they are
extremely useful to guarantee normal operations even when the ESX Server is under heavy load. If an application
has some minimum requirements, this is the right place to define them. But be aware that the reservation
setting has severe implications: if an ESX Server is unable to grant the required resources it will refuse to
start the virtual machine.

Unlike reservations, which specify minimum amounts, limits define the maximum amount of CPU
and/or memory that a VM can claim while running. While the limit can never go above the virtual RAM
defined during the VM creation, it remains very useful, for example, to simulate application behavior in
over-commitment scenarios.
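The two rules can be sketched as follows; the function names and units are hypothetical, but the logic mirrors what the review describes: a VM is refused at power-on if its reservations cannot be set aside, and its runtime demand is clamped between reservation and limit.

```python
def can_power_on(host_free_mb: int, host_free_mhz: int,
                 vm_reservation_mb: int, vm_reservation_mhz: int) -> bool:
    """Admission check: a VM starts only if the host can set aside
    its full memory and CPU reservations."""
    return (host_free_mb >= vm_reservation_mb
            and host_free_mhz >= vm_reservation_mhz)

def effective_demand(requested: float, reservation: float, limit: float) -> float:
    """Clamp a VM's runtime demand between its reservation and its limit."""
    return max(reservation, min(requested, limit))
```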

To avoid scalability issues and allow resource management in large-scale deployments VMware offers two
additional levels of control: Resource Pools and Distributed Resource Scheduling (DRS).

A resource pool is a logical group of compute resources; specifically, CPU and memory. For each resource pool,
administrators define reservations and limits for CPU and memory in the same way they do for single virtual
machines. When a VM is dragged inside the resource pool it inherits the resource control settings defined
for the pool. This provides administrators with a way to group virtual machines based on similar resource
needs and easily control the resource allocation for large numbers of virtual machines at once.

Administrators can even decide to retain a VM's specific resource control settings below the
resource pool ones, meaning that resource settings can be applied on a per-VM basis if needed.

DRS is typically used in conjunction with resource pools. When multiple ESX Servers are combined into a DRS
cluster, their resources are combined into a single logical pool. Resource pools then apply and are shared
across multiple physical servers. Additionally, this object is able to move virtual machines between
virtualization hosts when certain resource reservations cannot be satisfied using VMware’s live migration
feature, VMotion.

A new cluster can be created with a single click, just like most VI 3.5 operations. Administrators have to
specify that the object works as a DRS cluster, and have to detail which kind of automation should be enforced
for resource management among the three options available.

Depending on the option chosen, administrators will just receive recommendations on the best placement
for each virtual machine or will let VirtualCenter decide what to do. Note that each recommendation is
identified by one or more stars, guiding the administrators on which action is the best one to follow.
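A toy version of such star-rated placement recommendations might rank hosts by how loaded they would be after receiving the VM; the scoring rule here is purely illustrative, not VMware's actual DRS algorithm:

```python
def rank_hosts(hosts: dict, vm_demand_mhz: float, max_stars: int = 5):
    """Rank candidate hosts for a VM placement: hosts that would stay
    least loaded after placing the VM earn the most stars.
    hosts maps name -> (used_mhz, capacity_mhz)."""
    scored = []
    for name, (used, capacity) in hosts.items():
        load_after = (used + vm_demand_mhz) / capacity
        scored.append((name, load_after))
    scored.sort(key=lambda item: item[1])  # least loaded first
    return [(name, max(1, max_stars - i)) for i, (name, _) in enumerate(scored)]
```

With a nearly idle host and a busy one, the idle host earns five stars and the busy one four, mirroring the way the GUI guides administrators toward the best action.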

As soon as the DRS cluster is created, administrators can drag multiple ESX Server hosts inside it.
For each one VirtualCenter will ask if any resource pool at host level should be deleted or retained.

This option grants an impressive granularity in controlling resources allocation because administrators can
count on nested controls at virtual machine level, host level, and cluster level at the same time.

Additionally, each VM can be configured to use its own level of automation and administrators can even
decide to tie together several VMs which, for example, are all part of the same multi-tier application.

At this point administrators can create cluster-level resource pools in the same way they did for host-level
ones.

Virtual machines launch
Once the resource requirements for the different virtual machines are defined, administrators have a primary need:
guaranteeing their automatic power-on if the virtualization host has to reboot for any reason.

Each ESX Server offers the option to set up automatic startup for any VM, with or without a precise
order, and a delay between each launch to avoid I/O overloads.
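The startup policy boils down to an ordered loop with a pause between launches; a minimal sketch (with a hypothetical power_on callback, not a VMware API) looks like this:

```python
import time

def autostart(vms, power_on, delay_s=0):
    """Power VMs on in a fixed order, pausing between launches to
    spread the boot-storm I/O, as the ESX auto-start option does."""
    started = []
    for vm in vms:
        power_on(vm)
        started.append(vm)
        if delay_s and vm != vms[-1]:
            time.sleep(delay_s)
    return started
```

Ordering matters in practice: a domain controller or database VM usually has to be up before the application servers that depend on it.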

The feature also allows defining which action to perform if the ESX Server has to shut down for any reason.

Virtual machines manipulation
One of the most powerful features offered by virtualization is the capability to save virtual machines states,
called snapshots, and go back and forth between them with just one click. Administrators can use it to mark
safe moments of a guest operating system lifecycle before any major change, like a service pack update, a
new product installation or even a delicate network operation like the domain join.

VI 3.5 offers the same flexible snapshot system which many administrators use every day in the VMware
Workstation 6.0 product.

For each virtual machine multiple snapshots can be taken, and a complete Snapshot Manager allows
navigating between them, creating what-if paths to perform complex deployments in maximum security.
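Conceptually the Snapshot Manager maintains a tree: reverting moves a "current" pointer back, and the next snapshot taken from there starts a new branch. A minimal model (illustrative only, not VMware's on-disk format):

```python
class SnapshotTree:
    """Branching snapshot tree: taking a snapshot adds a child of the
    current node; reverting moves the pointer back, so later snapshots
    start a new 'what-if' branch."""
    def __init__(self):
        self.children = {None: []}   # parent -> list of child names
        self.parent = {}             # child name -> parent
        self.current = None          # None is the base disk state

    def take(self, name):
        self.parent[name] = self.current
        self.children.setdefault(name, [])
        self.children[self.current].append(name)
        self.current = name

    def revert(self, name):
        if name not in self.parent:
            raise KeyError(name)
        self.current = name

    def branches_of(self, name):
        return list(self.children.get(name, []))
```

Taking "clean", then "sp1", reverting to "clean" and taking "hotfix" leaves two branches under "clean", which is exactly the what-if pattern described above.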

A desirable addition would be an aggregation feature, able to consolidate several existing snapshots when
they are no longer in use.

Along with snapshots, VI 3.5 allows the creation of virtual machines clones.
Clones are exact copies of existing VMs, which administrators can customize on the fly through the same
Customization Wizard available for templates deployment.

A VM can be cloned within the same datacenter object or, since VI 3.5, in a different one. Despite this
welcome addition, VirtualCenter 2.5 is not yet able to create the so-called linked clones available in VMware
Workstation 6.0. Linked clones share the original guest OS image but write differences inside snapshots.
Linked clones provide a notable storage space saving and would lower the cost of the enterprise storage
needed by VI 3.5.
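The storage saving is easy to quantify with back-of-the-envelope arithmetic (the numbers below are hypothetical):

```python
def clone_storage_gb(base_gb: float, clones: int, delta_gb: float):
    """Compare storage footprints: full clones copy the whole base disk
    per clone; linked clones share one base and store only per-clone deltas."""
    full = clones * base_gb
    linked = base_gb + clones * delta_gb
    return full, linked
```

Ten full clones of a 20 GB base image consume 200 GB, while ten linked clones writing 2 GB of deltas each would need only 40 GB.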

Another operation that administrators can perform on virtual machines is the migration between ESX Server
hosts. Depending on the state of the virtual machine, VMware calls it two different things: Migration and VMotion.

As long as virtual machines are powered off this operation is simply called Migration. A Migration can be
performed on one virtual machine at a time, just by defining the destination host and the preferred storage facility
where to move the virtual disks. Since VMs may link to a specific virtual switch which is not present in the
target virtualization host, a virtual network reconfiguration may be necessary after the move.
VirtualCenter is not smart enough to perform it by itself, but provides a proper warning.

When administrators want to move a running virtual machine they need an enhanced version of Migration
called VMotion. VMotion requires a specific license as well as special hardware and configuration and will be
covered in a dedicated chapter of this review.

Performance analysis
To help in health monitoring and troubleshooting, VirtualCenter is able to expose some performance
counters for each virtual machine. Administrators can visualize a real-time graph or historical data for
different categories: CPU, memory, disk, network and system.

Each chart can be printed or exported in the most popular graphical formats, or even in a Microsoft Excel
spreadsheet (2003 version only).

Unfortunately the performance monitor feature isn't sufficient to address serious analysis needs, showing a
very limited amount of detail. Additionally, it's not possible to compare counters from different categories
on the same graph, and there's no way to compare performance between multiple virtual machines
running on different virtualization hosts.

Topology map
The entire virtual datacenter can be displayed in a dedicated map, showing connections between available
virtualization hosts, virtual machines, storage facilities and virtual switches.

Since the map can easily become very crowded even with medium-size infrastructures, VMware offers some
basic filtering capabilities. Unfortunately the system doesn't seem flexible enough to isolate a few elements
in a large-scale deployment, and a filtering syntax à la Wireshark (formerly Ethereal) is sorely missed.

The map can also be printed and exported in graphical formats (.JPG, .BMP and .EMF). While this is certainly
useful for documentation purposes, the capability to export in Microsoft Visio format for further manipulation
would be much appreciated.

5. V2V and P2V migrations with VMware Converter

V2V migration
When working with virtual infrastructures it often happens that administrators have to import existing
virtual machines, perhaps created with another VMware product or with 3rd party virtualization platforms. To
do this, they have to perform a virtual-to-virtual (V2V) migration.

VMware used to offer a standalone, free-of-charge utility to perform this task called VMware Converter, but
VirtualCenter 2.5 comes with a version of this tool which integrates directly into the main GUI.

While the Converter plug-in is a great addition for VirtualCenter, its installation may appear somewhat
confusing to new customers because of the architectural approach used by VMware.

Administrators first have to install the plug-in on the same machine where VirtualCenter lies (this step is
usually performed along with the VirtualCenter setup, but we opted for a custom installation at the
beginning of this review). At this point they can log into the VirtualCenter GUI and look at the new Plugin
Manager introduced with VI 3.5. From here they will be required to download and install the client portion of
Converter. The last step, after the client installation, consists in enabling the plug-in, still from the Plugin Manager.

This double installation process is in place to allow customers to install the client portion of Converter on
different tiers, but it seems an unnecessary complexity when all components are installed on the same machine.

After the setup, a new Import Machine command becomes available for each ESX Server object in the
inventory. It invokes the VMware Converter wizard to perform V2V and physical-to-virtual (P2V) migrations.

The product supports a wide range of 3rd party images (virtual machines from Microsoft Virtual Server along
with images from Symantec, StorageCraft and Acronis backup solutions) but it only offers the chance to
reach them through a network share. There's no option to import from an FTP server or a local drive (which may
be a remote iSCSI target).

Additionally, the product requires defining the entire path of the virtual machine to import, which may
become a cumbersome operation because of spaces in the image filename.

The import of a VMware Workstation 6.0 virtual machine with an 8 GB hard drive ran smoothly and
took 15 minutes to complete successfully.

Before proceeding anyway, administrators should consider a major shortcoming of any V2V and P2V
migration: the import substantially changes the virtual hardware inside the VM, and if the guest operating
system is Windows, it will require a new activation. This obviously doesn't depend on VMware but on
Microsoft's Windows Product Activation (WPA) mechanism, which is not yet virtualization-friendly.

Once the image is imported it’s recommended to update the VMware Tools inside the guest OS.
The operation can be performed interactively or through an automated process. In both cases it requires the
OS reboot, after which the migrated machine performs exactly like a VI3.5 native one.

P2V migration
VMware Converter is also able to perform a P2V migration, working with offline and running physical
machines (this last option is supported only for Windows operating systems).

Administrators can operate a live P2V migration in two ways: by installing Converter on the source machine
and launching the wizard locally, or by invoking the migration from inside VirtualCenter.
In the second case the wizard will require the IP address and an administrative account to access the source
machine (through the CIFS and NetBIOS ports), plus authorization to temporarily install the Converter Agent.
This on-the-fly installation requires a reboot only if the source machine runs Microsoft Windows NT or 2000,
and administrators can choose to remove the software package as soon as the migration completes.

For each physical machine that Converter is able to reach, administrators are required to specify which disks
should be converted and if they should maintain their original size. Additionally, the virtual machine storage
location and its full configuration must be specified. Administrators are even allowed to decide if the
VMware Tools should be installed on the fly and if the operating system should be customized as we already
described in the section dedicated to virtual machines manipulation.

At this point the process is ready to start, immediately or on a defined schedule.

In our test the live migration of a workstation with Microsoft Windows XP Service Pack 2 and one 20GB disk
took 1 hour to complete while running a typical office activity: browsing the Internet with Microsoft Internet
Explorer 7, sending emails with Microsoft Outlook 2007, writing documents with Microsoft Word 2007,
reading whitepapers with Adobe Reader 8, and retouching photos with Adobe Photoshop.

The other option Converter offers is cold migration. This requires downloading a 97 MB ISO image
from the VMware website, burning it, and using it to boot the source physical system. The live CD provided
by VMware starts a GUI to launch the P2V migration, which is identical to the one already described.

6. Capacity planning with Guided Consolidation
One of the most complex tasks when approaching virtualization is capacity planning: understanding which
physical servers are the most suitable candidates for a P2V migration and how many of them can be
deployed as virtual machines on a given virtualization host.

This is not an exact science, and each migration may or may not be advisable depending on a number of
factors: the physical server's capabilities, the installed operating system, the kind of applications running
inside it, the applications' workload, the dependency on certain hardware devices which cannot be
virtualized, and obviously the hardware capabilities of the new virtualization host.
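As an illustration, these factors could be combined into a toy scoring heuristic like the following sketch. The weights and rules are invented for the example and are not those of any VMware tool:

```python
# Toy heuristic for ranking P2V candidates (hypothetical, not VMware's model).
def consolidation_score(avg_cpu_pct, avg_mem_pct, has_special_hw, os_supported):
    """Return 0-100: higher means a better P2V candidate."""
    if has_special_hw or not os_supported:
        return 0  # hardware dongles or unsupported OSes: not virtualizable
    # Lightly loaded servers leave the most headroom on the target host.
    headroom = (100 - avg_cpu_pct) * 0.6 + (100 - avg_mem_pct) * 0.4
    return round(headroom)

candidates = {
    "fileserver": consolidation_score(10, 30, False, True),
    "db-server":  consolidation_score(85, 90, False, True),
    "fax-server": consolidation_score(5, 10, True, True),   # fax board: skip
}
```

The lightly loaded file server scores far higher than the saturated database server, and the fax server is excluded outright because of its non-virtualizable hardware.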

Until now VMware has provided guidance for this critical process through a dedicated product called Capacity
Planner, available only through its Consulting Services. Starting with VI 3.5, VMware integrates a subset of
that tool's features into VirtualCenter under the name Guided Consolidation.

Guided Consolidation has a dedicated section in the VirtualCenter GUI where administrators can decide
which physical servers they want to analyze for a possible P2V migration, among the ones discovered in the
local workgroup or domain.

For each chosen candidate administrators have to provide administrative credentials to access the operating
system. Up to 100 candidates can be analyzed at the same time.

Once the analysis targets are defined, Guided Consolidation starts tracking their CPU and memory
performance. Based on the collected data it generates a recommendation and an associated level of
confidence.

The level of confidence grows with the time spent tracking the candidate. A high level is reached after just
24 hours, but such early recommendations may be unreliable. Administrators can let Guided Consolidation
analyze a candidate for longer periods, up to one month, obtaining a much more precise recommendation.

The recommendation indicates the best option for each candidate.

If administrators decide to follow Guided Consolidation's recommendations, it invokes VMware Converter
(as long as it's installed on the system) to start a P2V migration.

Guided Consolidation is probably one of the most valuable additions introduced in VI 3.5, but this first
version shows severe limitations.

First, the tool is unable to discover domain computers if the VirtualCenter machine is not part of the domain,
and there's no way to manually add a system (inside or outside the domain). Second, the discovery service
has serious problems recognizing new machines introduced to the network after the very first scan. Finally,
VirtualCenter doesn't provide any user-friendly troubleshooting tool to monitor the progress and errors of
the discovery phase.

7. Hosts and guests patch management with VMware Update Manager
The new VMware Update Manager (VUM) is one of the most compelling features introduced with VMware
Infrastructure 3.5. It addresses the critical need of managing patches for ESX Server hosts and at the same
time it offers the capability to patch Windows and Linux guest operating systems from the same console. In
some cases VUM is even able to patch applications installed inside Windows guests.

The list of patches for guest OSes is maintained by a well-known security vendor, Shavlik, which has a
partnership with VMware. Each time VUM needs to check for and download patches, it contacts both
Shavlik's and VMware's online servers.

VMware Update Manager (VUM) installation
VUM is distributed as a VirtualCenter plug-in and can be installed on a dedicated server or on the machine
already hosting VirtualCenter. In the second case, administrators who performed a custom installation (as
we did for this review) will have to manually define a new database in the SQL Server 2005 Express instance
already installed by the VirtualCenter 2.5 setup.

The process isn’t complex but requires to download the Microsoft SQL Server Management Studio Express,
to create a new database for VUM, to configure authentication and finally to define a new Data Source
Name (DSN). Only at that point the Update Manager setup will be ready to connect to the new database and
use it for patches tracking. The VMware documentation available at the moment of writing doesn’t mention
this process, and seems to imply that VUM installation program is able to automatically perform all the
steps, which is not what we found.

The last part of the installation is identical to the one already described for VMware Converter plug-in: open
the VirtualCenter Plugin Manager, download and install the VUM Client, and finally enable the plug-in.

As soon as the VirtualCenter GUI is launched after the installation, VUM starts downloading all available
patch lists.

The patching policy
Every time the VirtualCenter GUI is launched, VUM checks online for the patch list.
This doesn't imply any patch application until administrators define an update policy, called a baseline.

VUM comes with a couple of pre-defined baselines for hosts and guest OSes, which automatically select
critical and non-critical updates, but the product is much more flexible than that, allowing administrators to
define custom baselines.

Administrators can decide which updates are included in a baseline either manually or dynamically. In the
first case, each update package available in the VUM list must be declared as included in or excluded from
the baseline. This is obviously the most time-consuming approach, but it can help to closely monitor the
behavior of a specific patch on a certain system, or to isolate an unstable patch. To simplify this selection
task VMware included an extended description for each patch and created a search facility which can look
for keywords or predefined fields.

In the second case, updates are automatically included in the baseline depending on the chosen priority
level (critical, non-critical, or both).
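A dynamic baseline is essentially a standing filter over the update catalog: new updates matching the filter join the baseline automatically. A minimal sketch, where the catalog entries and field names are invented for illustration:

```python
# Sketch of a dynamic baseline: updates matching the filter are included
# automatically as the catalog grows (entries and fields are hypothetical).
updates = [
    {"id": "ESX350-200803201-UG", "severity": "critical",     "product": "esx"},
    {"id": "MS08-001",            "severity": "critical",     "product": "windows"},
    {"id": "MS08-014",            "severity": "non-critical", "product": "windows"},
]

def dynamic_baseline(catalog, severities, product):
    """Select the IDs of all updates matching the given severities and product."""
    return [u["id"] for u in catalog
            if u["severity"] in severities and u["product"] == product]

win_critical = dynamic_baseline(updates, {"critical"}, "windows")
```

Re-running the filter after a catalog refresh picks up any new matching updates, which is exactly what makes the dynamic approach less laborious than the manual one.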

Based on the applied baselines, VUM starts downloading the required patches. The download service follows
a default schedule (weekly, Monday at 9pm) which can be modified at any time.

Administrators have to carefully plan the storage space required by the VUM downloads, and since this may
be a complex task VMware created a Microsoft Excel spreadsheet which simplifies the calculation.
Administrators can download it for free from the VMware website.

Hosts patching
Any created baseline can be attached to the ESX Servers included in the VirtualCenter inventory. Before
patching, administrators can scan the targets to see how many patches are missing for each attached
baseline. If they decide to remediate the unsecure hosts, they will be asked for a schedule, an allowed
number of retries if something goes wrong, and the delay between retries.
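The retry options behave like a standard bounded-retry loop; a sketch of the idea (not VUM's actual scheduler):

```python
import time

def remediate_with_retries(task, max_retries, delay_seconds):
    """Run `task` until it succeeds or the retry budget is exhausted.

    Mirrors the options VUM asks for, a retry count and a delay between
    attempts (sketch only, not VUM's code). Returns the successful attempt
    number, or None if remediation failed.
    """
    for attempt in range(1, max_retries + 1):
        if task():
            return attempt
        if attempt < max_retries:
            time.sleep(delay_seconds)
    return None

# A host whose remediation succeeds on the third attempt:
attempts = iter([False, False, True])
result = remediate_with_retries(lambda: next(attempts), max_retries=3, delay_seconds=0)
```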

These scheduling options are very welcome, but their inclusion in the baseline definition would be more
logical and would greatly simplify the entire process.

When a remediate command is issued, the host enters the so-called Maintenance Mode, which prevents
any activity, like running virtual machines or performing configuration modifications.
At this point VUM either moves the virtual machines to another available host (through Migrate or
VMotion) or powers them off/suspends them. If the running virtual machines don't have VMware Tools
installed, the operation will fail and VUM will retry the request to enter Maintenance Mode. The entire
behavior can be customized as part of the VUM general settings.

At the end of the remediation process the host is automatically rebooted and leaves the Maintenance Mode.

Unfortunately VMware doesn’t currently offer remediation for ESX Server versions prior to 3.5.

Guests patching
The process to scan and remediate a virtual machine is identical to the host one, but in this case VUM also
offers some fairly unique features.

The first major feature is the capability to patch both online and offline virtual machines. For running virtual
machines VUM is able to take care of multiple reboots required during the update process without any
manual intervention. For offline virtual machines VUM is able to power them on, apply the required patches
and shut them down again. Note that this process still happens at guest OS level and not through a patch
injection at virtual disk level.

The second major innovation VUM offers is the ability to create a snapshot of a virtual machine before
applying the required patches. If something goes wrong, the administrator can roll back the VM image
without losing too much data. And to save space, VUM allows deleting the snapshot after a certain number
of hours, when it's assumed to be safe.
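The delete-after-N-hours option amounts to a retention window on the pre-patch snapshots; the idea can be sketched as follows (hypothetical helper, not VUM code, and the snapshot names are invented):

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, now, keep_hours):
    """Return the names of snapshots older than the retention window.

    Sketch of VUM's "delete snapshot after N hours" option; in the real
    product the cleanup is handled by the server, not by the administrator.
    """
    cutoff = now - timedelta(hours=keep_hours)
    return [name for name, taken in snapshots.items() if taken < cutoff]

now = datetime(2008, 6, 1, 12, 0)
snaps = {
    "vm1-prepatch": datetime(2008, 5, 31, 12, 0),  # 24 hours old
    "vm2-prepatch": datetime(2008, 6, 1, 10, 0),   # 2 hours old
}
stale = snapshots_to_delete(snaps, now, keep_hours=12)
```

With a 12-hour window only the day-old snapshot is purged; the VM patched two hours ago keeps its rollback point.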

The last special feature VUM includes is the capability to patch popular 3rd party applications like Adobe
Reader, Apple QuickTime, Mozilla Firefox or the Sun Java runtime.

VUM may win over Microsoft Windows Server Update Services (WSUS) thanks to these three big advantages,
but its interface is far less usable: at the time of writing the update list comprises 3404 updates for Windows
and Linux operating systems, and defining a custom baseline appears a very time-consuming task.
Additionally, there is no way to integrate VUM with existing WSUS servers, which would save both the
storage space dedicated to patches and the existing corporate investment.

Additionally, this first version of VUM doesn't seem too friendly for large-scale deployments: the only way to
attach a baseline to multiple virtual machines at a time is to arrange them in a folder, which imposes an
unnecessary constraint and makes the patching process error-prone in a big datacenter.

8. Virtual machines backup with VMware Consolidated Backup
Administrators who want to back up their guest operating systems have several options in a virtual
infrastructure.

The first consists in adopting a traditional backup product from any 3rd party and installing its agent inside
the guest OS itself. This method doesn't scale well because administrators are obliged to repeat the
procedure for every new virtual machine deployed, just like in physical datacenters. Additionally, the
presence of multiple agents inside a single virtualization host is redundant and wastes physical resources.

The second option consists in adopting a backup product specifically designed to work with the VMware
platform, interacting with each ESX Server in the infrastructure or just with the VirtualCenter tier. These
products, still very rare on the market, perform virtual machine backups at the host level instead of the
guest one: a single agent running at the ESX Server level can back up the VMs' virtual disks without knowing
what's inside them.

The third option is adopting a traditional backup product in combination with a stand-alone tool that
VMware offers: Consolidated Backup (VCB). VCB is not a real backup solution; rather, it is a gateway that
gives 3rd party products access to VI 3.5 virtual machine disks. VCB doesn't act at the guest OS or host level
like the previous approaches, but at the storage facility level.

To work it requires a dedicated physical server with Microsoft Windows Server 2003 and Fibre Channel
access to the storage facility where VMs’ virtual disks are saved (VCB also supports iSCSI and NAS storage
and in these cases it can be installed inside a VM). On this machine administrators have to install the 3rd
party backup solution along with VCB. VMware offers integration for the most popular products on the
market through a series of scripts.
Supported vendors include CA, CommVault, EMC/Legato, Symantec/Veritas and others.

The configuration is cumbersome since the administrator has to manually customize the scripts provided by
VMware according to the adopted backup solution and the network configuration in use, a controversial
choice for a GUI-driven enterprise solution like this.

As soon as one of these products needs to run a backup job, the integration scripts are invoked: VCB creates
a VM snapshot and saves it on the storage facility. It then mounts the LUN so that the 3rd party backup
solution can access the VM's snapshot and back it up with its own procedure. VCB can expose the snapshots
as virtual disk images or as a group of files (Windows guest OSes only), so these products can perform both
file-level and image-level backups.
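The sequence amounts to a snapshot/mount/backup/unmount pipeline; a schematic sketch (the real integration happens through VMware's per-vendor scripts, so the callables here are placeholders):

```python
# Schematic of the VCB backup sequence (illustrative only; the real
# integration is driven by VMware's per-vendor scripts).
def vcb_backup(vm, take_snapshot, mount_snapshot, run_backup, unmount):
    snapshot = take_snapshot(vm)        # 1. quiesce and snapshot the VM
    mount = mount_snapshot(snapshot)    # 2. expose the snapshot on the proxy
    try:
        run_backup(mount)               # 3. 3rd party product backs it up
    finally:
        unmount(mount)                  # 4. clean up; the VM ran throughout
    return snapshot

log = []
result = vcb_backup(
    "vm1",
    take_snapshot=lambda vm: log.append("snap") or f"{vm}-snap",
    mount_snapshot=lambda s: log.append("mount") or "/mnt/vcb",
    run_backup=lambda m: log.append("backup"),
    unmount=lambda m: log.append("unmount"),
)
```

The try/finally mirrors the one property that matters in practice: whatever the backup product does, the snapshot mount is released and the VM never stops running.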

Before VI 3.5 the only way to restore a virtual machine was through the 3rd party product, but with this new
version VMware allows Converter to be used for the operation. Administrators just have to select the right
option in the wizard, locate the 3rd party backup image on a network share and proceed as for the P2V
migrations described earlier in this review.

The VCB approach has the great benefit of performing backups without putting any load on the virtualization
hosts, but its current implementation has some serious limitations. First of all, it's not an out-of-the-box
solution, and administrators will find the integration with 3rd party products very complex to set up and
hard to troubleshoot. Second, it has limited support for 3rd party products and doesn't work for all virtual
machines configured as part of a cluster.

9. High availability with VMware HA, VMotion and Storage VMotion
When administrators have to achieve fault tolerance in a virtual infrastructure, they have to look at it from a
new perspective: not only do they have to implement high availability for guest operating systems, they also
have to implement it for virtualization hosts, ensuring that all deployed virtual machines can always be
served by a hypervisor.

VMware addresses the challenge through several different modules.

VMware HA
The first option to implement fault tolerance in VI 3.5 is generically called VMware High Availability (HA). It
requires creating a cluster, similar to the DRS cluster described earlier, but with a different configuration.

Despite the term "cluster", usually associated with a stateful failover model where the workload state is
preserved during a failure, VMware HA's failover is stateless: workloads are rebooted upon the failure of
physical resources. When a host crashes, its virtual machines crash as well, until one of the other available
nodes in the cluster restarts them locally.

When a cluster is configured for HA purposes, each participating host must meet specific requirements:
access to the same shared storage facility, all virtual machines stored in that facility, physical CPUs from the
same vendor, and identically named vSwitches.
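These prerequisites are easy to sanity-check before enabling HA; a sketch of such a check (a hypothetical helper, not a VMware tool, with invented host records):

```python
# Sketch of an HA prerequisite check: every host must match the first one
# on shared datastores, CPU vendor, and vSwitch names (hypothetical helper).
def ha_ready(hosts):
    first = hosts[0]
    return all(
        h["datastores"] == first["datastores"]
        and h["cpu_vendor"] == first["cpu_vendor"]
        and h["vswitches"] == first["vswitches"]
        for h in hosts[1:]
    )

esx_hosts = [
    {"datastores": {"san-lun1"}, "cpu_vendor": "intel", "vswitches": {"vSwitch0"}},
    {"datastores": {"san-lun1"}, "cpu_vendor": "intel", "vswitches": {"vSwitch0"}},
]
ok = ha_ready(esx_hosts)          # matching hosts: cluster is eligible
esx_hosts[1]["cpu_vendor"] = "amd"
mixed = ha_ready(esx_hosts)       # mixed CPU vendors: not eligible
```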

Through the available HA cluster options, administrators can shape the recovery policy with several
parameters. The capability to control what happens if a host becomes isolated (called isolation response)
and the fail-over order (called priority) can be defined as a common property for all virtual machines of the
same host, or independently.

Additionally, administrators can define how the surviving hosts should serve physical resources to the new
virtual machines they receive after another host's failure. This last option is particularly important because
it provides full control over what happens after a crash: since a surviving host may receive more VMs than it
can serve, administrators may want to power on only the mission-critical ones.

As soon as one of the hosts is recognized as not responding (which happens within 5 seconds), VirtualCenter
proceeds to reconnect the crashed virtual machines to the hosts that are still alive. Our tests showed a
recovery time of less than 30 seconds for Windows Server 2003 R2 guest operating systems. During the
entire review we couldn't find any corruption in the virtual machine images or the guest operating systems,
despite the high number of provoked faults.

An aspect that could be improved in VMware HA is the procedure required to remove hosts from a cluster.
To do so, an ESX Server must first be manually put into Maintenance Mode, which unfortunately requires a
manual shutdown or suspend of each virtual machine running on the host. Performing the cluster node
removal transparently would save much time.

Another feature VMware HA lacks is node stickiness. Administrators may want to define a preferred host
where virtual machines should be served, so that as soon as an ESX Server recovers from a failure the HA
engine can fail its virtual machines back to their original position.

Virtual Machine Failure Monitoring
With VI 3.5 VMware further extended the HA engine, which is now able to also monitor failures at the guest
OS level in some scenarios.

The new feature is called Virtual Machine Failure Monitoring and relies on the communication between the
ESX Server and the VMware Tools installed inside each guest operating system. If the guest OS doesn't send
a heartbeat every 20 seconds, VMware HA triggers a virtual machine restart after an amount of time defined
by the administrator. To avoid false positives, VMware HA can recognize when the VMware Tools are
sending heartbeats at a slower pace because the guest OS is overloaded.
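The logic resembles a watchdog timer with a grace period. A sketch of the behavior just described (the 20-second heartbeat interval comes from the text; the code and parameter names are hypothetical):

```python
def should_restart(last_heartbeat_age, failure_interval, host_overloaded):
    """Decide whether HA should restart a VM (sketch, not VMware's code).

    The guest is expected to heartbeat every 20 seconds; a restart is
    triggered only after the admin-defined failure interval has elapsed,
    and slow heartbeats on an overloaded host are tolerated to avoid
    false positives.
    """
    if host_overloaded:
        return False  # slower heartbeats are expected: do not restart
    return last_heartbeat_age > failure_interval

healthy = should_restart(15, failure_interval=60, host_overloaded=False)   # False
failed = should_restart(90, failure_interval=60, host_overloaded=False)    # True
tolerated = should_restart(90, failure_interval=60, host_overloaded=True)  # False
```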

Unfortunately the current Virtual Machine Failure Monitoring implementation is experimental and not yet
well integrated in the VirtualCenter GUI, so administrators have to define its settings manually.

Additionally, the feature is available only for those guest OSes with VMware Tools support.

VMotion
VMotion is probably the most impressive feature VMware Infrastructure 3.5 can offer. It allows
administrators to migrate a running virtual machine from one ESX Server host to another without any
downtime. This is extremely useful for performing any sort of maintenance without planning expensive
after-hours service interruptions.

VMotion is not related to VMware HA and can be used even if two or more ESX Servers are not in a cluster.

Unfortunately the feature doesn't come cheap, since it has a long list of mandatory requirements.
First of all, each physical server used for VMotion must have similar processors. This means it's currently
impossible, through no fault of VMware's, to perform a hot migration between a host with Intel CPUs and
one with AMD CPUs. In some cases it's even impossible to use VMotion between servers with processors
from the same vendor but from different generations.
VMotion also requires a dedicated Gigabit Ethernet network, and a fabric switch in case you are using a
Fibre Channel SAN (i.e., no direct connection to the storage array is supported).
Administrators will need accurate planning at the network and storage level, as each host involved in the
live migration must have the same virtual switch labels and must share the LUNs where virtual machines
are saved.
Last but not least, the virtual networking configuration must be adjusted to create a special virtual switch
that takes care of the VMotion traffic.

Once all these prerequisites are satisfied, the live migration is very easy. A right click on any running virtual
machine launches the VMotion process: in our tests it took just 30 seconds, with a negligible delay in
network response and a single lost packet despite an IOMeter benchmark in progress.

Storage VMotion

In VI 3.5 VMware applies the VMotion approach to storage, allowing administrators to perform a live move
of a virtual machine's virtual disks from one storage facility to another without downtime.

In this way VMotion and Storage VMotion become complementary steps of a hypothetical fully transparent
migration. In the former process the VM's configuration files and memory are moved between different
ESX Servers while the VM's virtual disk remains on the same shared storage facility. In the latter, the VM's
configuration files and memory remain on the same ESX Server while the VM's virtual disk is moved from
one storage location to another.

The Storage VMotion migration is as smooth as the VMotion one and doesn't impact virtual machine
performance in any significant way.

Unfortunately it comes with some strict requirements as well: virtual machines to be migrated must not
have snapshots, their virtual disks must be persistent, and no more than four migrations can be performed
at the same time.

An additional negative aspect of Storage VMotion is the current lack of integration with the
VirtualCenter GUI. To perform any task related to this feature, administrators will have to use the
new Remote Command Line Interface (RCLI), introduced with VI 3.5 as an alternative system to manage the
virtual datacenter.

RCLI is a stand-alone tool available for Windows and Linux operating systems which allows administrators to
execute a range of tasks usually performed through the VirtualCenter GUI. The product is still experimental
for all commands but the ones which control the new Storage VMotion feature.

The RCLI is based on the VMware Infrastructure Perl Toolkit and needs an ActivePerl interpreter, which is
installed by the program setup. Once available, administrators can use it to perform a Storage VMotion
migration.
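For reference, a non-interactive Storage VMotion invocation looks roughly like the following. The server name, credentials, datacenter and datastore names are placeholders, and svmotion also offers an --interactive mode that prompts for each value:

```
# Move vm1's virtual disk from its current datastore to "new_datastore"
# (placeholder names throughout; see the RCLI documentation for details)
svmotion --url=https://vcserver.example.com/sdk \
         --username=administrator --password=secret \
         --datacenter=Datacenter1 \
         --vm="[old_datastore] vm1/vm1.vmx:new_datastore"
```

The --vm argument pairs the VM's configuration file path (in datastore notation) with the destination datastore, which is precisely the information administrators have to dig out by hand, as noted below.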

In our tests a running virtual machine with a 16GB virtual hard drive was moved in 15 minutes without
affecting system availability. The entire operation had a negligible impact on network latency and only one
packet was lost.

Despite the great benefits introduced with Storage VMotion, its command-line interface severely
compromises its usability, obliging administrators to check each virtual machine's settings (or browse the
storage facilities) to retrieve virtual disk names and paths. Additionally, the tool is obviously prone to typing
errors and has limited diagnostic capabilities, both of which slow down administration tasks.

10. Access control and auditing

VI 3.5 offers a packet filtering firewall embedded in each ESX Server that can be centrally managed through
the VirtualCenter GUI. This tool is already locked down so administrators cannot add new services or modify
the ports for the existing ones, and most common network services are not enabled by default.

This grants a smaller attack surface, which is mandatory when deploying virtualization hosts in exposed
network areas.

Unfortunately VMware doesn't offer a way to apply the same firewall configuration to multiple ESX Servers
at the same time, slowing down the hardening process in large-scale deployments.

Permissions and Roles
Since VirtualCenter allows performing a wide range of tasks, touching different aspects of computing
management, most companies may find it challenging to enforce separation of duties and grant the right
accountability for each department involved in datacenter administration.

For this reason VMware offers an impressive range of permissions to limit interaction with the virtual
datacenter at multiple levels: datacenter, cluster, host, virtual machine, resource pools, datastores, etc.
Administrators can even set permissions to allow control only of certain modules of VI 3.5, like for example
the new Update Manager.

To grant scalability and simplify the assignment of multiple permissions VMware offers a set of pre-defined
roles, which administrators can further customize or enrich with their own roles.

Once all roles are defined, each one can be assigned to single user accounts or groups. As already noted at
the beginning of this review, VirtualCenter relies on the host operating system where it's installed to obtain
the user database. Thus, if this machine is part of a domain, VirtualCenter will be able to assign roles to the
entire Active Directory user base.

Unfortunately, besides Microsoft Active Directory, VMware doesn't support other LDAP directories.
What makes VI 3.5 access control so powerful is the capability to assign a certain role to a certain user while
specifying the context: it can be the entire infrastructure, a single host, or even a single virtual machine.

Alarm system
A very interesting aspect of VI 3.5 is its alarm system, which allows defining trigger conditions and thresholds
for entire datacenters, for groups or even for single virtual machines.

Each of these objects already has a few pre-defined alarms, but administrators can create their own to react
to specific events.

The alarm system could be much more powerful than it is today: it tracks only a small set of metrics for
both virtual machines and hosts, and allows only a small range of actions to be defined.

Event logging
VI 3.5 also logs all the events happening in the virtual datacenter. The logging facility provides a basic
amount of detail, telling, for example, when an object is modified and which user requested the change,
but it is not yet able to reveal which modification took place.

For further analysis, or for inclusion in a help desk management system, event logs can be exported.
Administrators can decide which types of events to export, which timeframe should be considered and
which user, but beyond that VMware doesn't offer a flexible search tool to filter events in a more granular
way.

VirtualCenter allows exporting events in a wide range of formats, from plain text to Microsoft Excel
spreadsheets, passing through XML and HTML files. These last two options are particularly interesting
because they allow administrators to publish the logs on a corporate intranet at scheduled times.

11. Power management with VMware DPM
With VI3.5 VMware introduces a notable enhancement to its DSR Cluster called Distributed Power
Management. This feature allows limiting the energy wasted by each ESX Server host.

DPM arrives as a DRS cluster option; once enabled, it influences each cluster so that as many virtual
machines as possible are consolidated onto a single virtualization host (obviously without provoking
over-commitment). At this point, if any host remains without VMs, DPM puts it in stand-by.

If, at any time, the running virtual machines demand more resources, DPM can recommend turning the
stand-by hosts back on, or can do so in automatic mode. To complete the task successfully, however, all
physical servers participating in the DRS cluster must have network cards with Wake-on-LAN support.
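Wake-on-LAN itself is a simple protocol: the "magic packet" is six 0xFF bytes followed by the target NIC's MAC address repeated sixteen times, broadcast over UDP. A minimal sketch (the MAC address and broadcast target are placeholders):

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + mac_bytes * 16  # 6 sync bytes + MAC x 16 = 102 bytes

def wake(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet (UDP port 9 is a common choice)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("00:1a:2b:3c:4d:5e")
```

The packet has to reach the NIC while the host sleeps, which is why DPM requires Wake-on-LAN support on every cluster member's network card.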

Once the stand-by virtualization hosts are back online, the DRS cluster follows its rules and redistributes the
running virtual machines through VMotion.

Despite it’s still an experimental feature and VMware discourages the use in production environments, DPM
could have a great impact on energy bills for those companies with a large and very dynamic virtual
datacenter. Anyway customers should carefully consider the adoption of DPM: the time needed to bring
back online a stand-by server obviously depends on the hardware equipment, 15 minutes in our tests, and in
some scenarios this may be just too much time.

12. Conclusions
While virtualization itself highlights the new degrees of flexibility we can reach with computing systems,
VMware Infrastructure 3.5 really demonstrates the endless potential of this technology.

It’s evident that the platform is still at early stages for some aspects and there’s much room for
improvements. Despite that the manipulation capabilities it can offer today are already impressive.

VMotion remains VMware's most innovative feature in this version of VI, but the newest modules like
Storage VMotion, Distributed Power Management and Update Manager have good chances of becoming
compelling selling points over time.

As the foundation of these newest capabilities, VI 3.5 grants customers some great features which alone
justify the value of VirtualCenter in many scenarios. One of them is the resource management provided by
Resource Pools and DRS clusters, although these features are not available in every edition:

                                        Foundation    Standard    Enterprise
                                         Edition*     Edition*     Edition*
Core Features (vSMP + VMFS)                 Yes          Yes          Yes
Consolidated Backup (VCB)                   Yes          Yes          Yes
Update Manager (VUM)                        Yes          Yes          Yes
High Availability (HA)                      No           Yes          Yes
VMotion                                     No           No           Yes
Storage VMotion                             No           No           Yes
Distributed Resource Scheduler (DRS)        No           No           Yes
Distributed Power Management (DPM)          No           No           Yes
* Editions as available in June 2008

The areas where VMware Infrastructure most needs to improve are Guided Consolidation and Consolidated
Backup, two new modules that clearly show their 1.x status. Additionally, the lack of a true high-availability
module is a major omission for such a datacenter-class product, especially considering that the market
doesn't offer many valuable alternatives today.

A last aspect that raised some perplexity is VMware's decision to include features still marked as
experimental. While the company clearly states that it doesn't support them and doesn't recommend their
use in production environments, they are available in the final code, and many customers may fail to
recognize the risk of adopting them at this early stage. An experimental pack to be downloaded and installed
separately would probably be a safer choice for most companies.

Overall, VI 3.5 left us with two impressions. The first is that customers adopting virtualization technologies
can really feel the value of their investment only after adopting and mastering a powerful management
infrastructure like VirtualCenter 2.5. The second is that this is only a preview of the features that
customers will be able to use with VMware Infrastructure.

A final note: a deep analysis of some VI 3.5 features (like VMware HA, VMotion or DRS clusters) to verify
their reliability under heavy load was beyond the scope of this review. Readers are encouraged to perform
extended testing on their own lab infrastructures.
