
Virtualization: Present and the Future

          Garrett Chandler

     University of North Texas

       College of Information

          LTEC 4550.020

              Dr. Jones

         December 6, 2010


Virtualization is a trend that is sweeping the globe. It is not a new concept, but it has recently been updated and upgraded with a flair that has many IT professionals talking. With benefits such as cost savings and simpler management, numerous IT departments are weighing whether to “go virtual” or stick with what they know. Is virtualization the wave of the future, or the wave of the present? This research provides a basic understanding of what virtualization is and how it works, along with questions about where virtualization may lead in the future.

        The virtualization of computers is currently a “hot” topic. It is a technology that is quickly being deployed in companies and institutions large and small. In fact, worldwide thin client sales grew from 2.9 million units in 2008 to 3.4 million in 2009, a 17 percent increase (Wong, 2009). But what does virtualization mean? Virtualization is, at its core, a software implementation of a machine that carries out instructions just as a physical machine would. If it is the latest trend and is being used by so many groups, it must be a new technology, right? Wrong. Truth be told, virtualization is not a new concept; it has been around since the mainframes of the 1960s and 1970s, when end users worked at what were commonly called “dumb terminals.” Today, virtual machines are making a comeback, and the art of virtualizing desktops has been improved and expanded into much broader territory: operating systems, servers, applications, switching, and networks are just some of the things that can now be virtualized. In this paper, I will explain exactly what virtual machines are, illustrate the basic idea of how they work, discuss the benefits and drawbacks of introducing this technology into existing data centers, and consider where virtualization may lead in the future.

        A virtual machine is a software container (an application) that can run an operating system and various applications (Microsoft Word, games, etc.) inside it, just as a physical computer would. It is an environment created within another environment. It contains its own software-based processor, memory, hard drive, and network interface card (NIC), and it allocates these resources from the host machine as needed. It is an isolated environment, however, allowing you to do whatever you want with it without compromising the host machine. For example, if your virtual machine were infected with a virus, your host machine would still be safe. For many people the concept of virtual machines is hard to grasp, but broken down in the right terms it can be understood easily. In a sense, you can compare it to opening a web page. You cannot surf the Internet without a browser on your computer. The web browser (e.g., Internet Explorer or Mozilla Firefox) is a software container that allows web pages to be viewed; it is programmed to know what to do, how to interpret the data, and much more. The same concept applies to virtual machines: the container runs separate operating systems and interprets data just as a “real” computer does. With this beefed-up browser of sorts, you can use a virtual machine like a normal computer, employing it for listening to music, watching videos, word processing, installing applications, and the various other tasks you accomplish on your home computer today.
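As a minimal sketch of the resource-borrowing idea described above (this models no particular hypervisor; every class and attribute name here is hypothetical):

```python
from dataclasses import dataclass


@dataclass
class Host:
    """Physical machine whose resources guests borrow."""
    cpu_cores: int
    ram_mb: int
    allocated_ram_mb: int = 0

    def allocate(self, ram_mb: int) -> bool:
        # Grant memory to a guest only if the host has it to spare.
        if self.allocated_ram_mb + ram_mb > self.ram_mb:
            return False
        self.allocated_ram_mb += ram_mb
        return True


@dataclass
class VirtualMachine:
    """Isolated guest: its 'hardware' is software backed by the host."""
    name: str
    ram_mb: int
    running: bool = False

    def start(self, host: Host) -> None:
        # The guest only boots if the host can spare the memory it asks for.
        if host.allocate(self.ram_mb):
            self.running = True


host = Host(cpu_cores=8, ram_mb=16384)
guest = VirtualMachine(name="guest-01", ram_mb=4096)
guest.start(host)
print(guest.running, host.allocated_ram_mb)  # True 4096
```

The isolation point from the paragraph shows up in the model too: the guest only ever touches resources the host explicitly grants it.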

        Many companies and institutions today have taken this idea and run with it. They are now building architectures that support virtualization. Instead of supporting and managing 1,000 physical end-user computers, network managers support and manage a mere 10 servers on the back end of the network. They do this by integrating virtual machines throughout their systems for end users, most often by replacing old PCs with devices called thin clients. A thin client is a physical machine that looks and acts much like a regular computer. The difference is that a thin client has no hard disk drive of its own on which an operating system lives. Instead, it connects to a specified server via Ethernet to pull down an operating system and become a virtual machine. This is possible only through software known as hypervisors, which provides the virtualization of various operating systems; examples include VMware, Microsoft Virtual PC, and Citrix XenDesktop. Once this software is implemented correctly on the servers and thin clients as designed, the thin client is granted access to a number of dedicated servers. These servers each have a different function depending on the size and design of the network. For example, one server could act as the delivery controller, which delivers the virtual desktop to the user. A second might be the application server, an application delivery system that provides application virtualization. A third might be an operating system delivery manager that deploys various operating systems to the virtual machines. These servers are in turn supported by another group of servers that manage the entire virtual system. The storage area network (SAN), of course, is the heart of the entire operation, pumping data through the system like blood. With this architecture in place, the bulk of the data processing that users demand occurs on the servers instead of locally on the client, and the design scales easily as the organization grows. If installed and configured correctly, the average user may not be able to tell the difference, from a software standpoint, between a virtual client and a physical machine.
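The division of labor among the back-end servers can be sketched as plain data. The role names follow the paragraph above; the host names and the routing function are invented purely for illustration:

```python
# Hypothetical back-end roles in a small VDI deployment (host names invented).
servers = {
    "vdi-ctrl-01": "delivery controller",   # hands the virtual desktop to the user
    "vdi-app-01":  "application server",    # streams virtualized applications
    "vdi-os-01":   "OS delivery manager",   # deploys operating-system images
}


def route(request: str) -> str:
    """Pick the server responsible for a given kind of request."""
    wanted = {
        "desktop": "delivery controller",
        "application": "application server",
        "os-image": "OS delivery manager",
    }[request]
    # Find the first host whose role matches what the request needs.
    return next(name for name, role in servers.items() if role == wanted)


print(route("desktop"))  # vdi-ctrl-01
```

The point of the sketch is that each request type has exactly one responsible role, which is what lets a handful of servers replace a fleet of individually managed PCs.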

        There are many benefits to “going virtual.” The primary reason for creating a virtual architecture is cost, and it is cost effective in several ways. First, by going from managing 1,000 computers to managing 10 servers, the need for support staff becomes less critical. Second, hardware failure on thin clients is less likely, resulting in lower maintenance, upkeep, and upgrade costs than with traditional PCs. Union School District in San Jose, California, spent $15,000 on new thin clients, monitors, and software licensing, which was $10,000 less than purchasing new computers would have cost. Power and cooling costs in the data center also decline: in a study the same district hosted, it found that for every 25 units it is saving just under seven times what it would have spent for 25 PCs. Another benefit of implementing virtual machines is user efficiency. A user's desktop is “saved” on the server and can be accessed from a web browser on almost any device, mobile or not: log in, and your desktop appears. For example, if you are working late and decide to continue at home, there is no need to close your open applications. Simply walk away, log in at home, and as your desktop loads, everything comes up exactly as you left it at the office. Virtualization also provides reliability, responsiveness, and quality of service.
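The district's up-front figures can be checked with simple arithmetic. Only the $15,000 and $10,000 numbers come from the source; the PC total is derived from them:

```python
thin_client_cost = 15_000  # thin clients, monitors, licensing (from the source)
savings = 10_000           # reported savings versus buying new PCs

# The comparable PC purchase is what they paid plus what they saved.
pc_cost = thin_client_cost + savings
print(pc_cost)            # 25000
print(savings / pc_cost)  # 0.4, i.e. 40% cheaper up front
```

This covers only the purchase price; the source's power-and-cooling savings accrue separately, over time.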

        As always, where there are benefits to a new technology, there are disadvantages. Licensing issues, virtual server sprawl, and security seem to be the unholy trinity awaiting battle with network managers. Even though most virtual machines are isolated, a different beast resides on the back end of the system, and security remains a major issue. Because much of the data that passes between virtual machines is unencrypted, many IT professionals are worried. In a November 2010 survey of 214 IT professionals conducted by TheInfoPro, 32.7% stated they were extremely concerned about security in a virtualized environment; only 7.6% stated they were not worried at all, with the rest falling somewhere between “somewhat” and “minimally” concerned (Mitchell, 2010). Creating a virtual architecture and infrastructure is also not as simple as many make it out to be; it is difficult, and problems will arise for which resolutions have yet to be created. Another significant issue is that, depending on how access rights and permissions are designed, users cannot load whatever software they want onto their virtual machines. In certain cases, software must be loaded onto the server for use, so users must now go through their IT department for everything from studio music recording software to the camera drivers needed to upload their latest Facebook profile picture.

        Only time will tell where virtualization will lead us. In my opinion, the next step is finding a way to virtualize more x64-based machines, as most if not all virtual machines today are x86-based. Beyond that, operating systems on the host machine generally do not recognize that a virtual machine is running within them, and guests do not realize they are virtual. What if the operating system becomes aware? What benefits would come from that realization? Can it become aware without crashing? More hardware virtualization is also a safe bet, such as better CPU and I/O virtualization. I can also see virtual appliances making a breakthrough; a virtual firewall or VoIP appliance, for example, could contribute to simpler network manageability. If and when all this occurs, I can imagine the next step being to virtualize an entire data center. Is it possible? Maybe not right now, but in the future one might hope it will be. We'll just have to wait and see.


References

Bost, J. (n.d.). How virtual machines work - What is a virtual machine? Bright Hub. Retrieved December 3, 2010.

Brazol, S. (2009). Analysis of advantages and disadvantages to server virtualization. Retrieved December 5, 2010.

Citrix. (2009). Microsoft and Citrix: Joint Virtual Desktop Infrastructure (VDI) offering architectural guidance. Retrieved December 3, 2010.

Evans, B. (n.d.). Global CIO: VMware CEO on future of virtualization: Exclusive interview. InformationWeek. Retrieved December 4, 2010.

Mitchell, R. L. (n.d.). The scary side of virtualization. Computerworld. Retrieved December 4, 2010.

Radding, A. (n.d.). Virtualization: The future of corporate IT. Big Fat Finance Blog. Retrieved December 4, 2010.

VMware. (n.d.). Virtual infrastructure overview, virtualization, virtual servers. Retrieved December 4, 2010.

VMware. (n.d.). The future of virtualization technology. Retrieved December 4, 2010.

Wong, W. (2009, October). Trimming down. EDTECH, 1, 32-35.
