Cashing the Check on Green IT

How Green IT can save your organization money
The greenest of the Green IT goals: Rethinking e-waste
The link between the green data center and energy consumption
Using power management techniques to green your IT department
Greening the data center: Deploy shared storage with the right features
Greening the data center: Consolidate your servers
Reap the Green IT benefits of thin client computing
Investing in virtualization has Green IT payoffs

“Going green” is a fairly new trend in the business world, and it naturally filters down to the IT department. Implemented correctly, eco-friendly tactics can make your operations more efficient and save you money. The goals of Green IT include minimizing the use of hazardous materials and being smarter with their disposal, maximizing energy efficiency, and encouraging recycling and/or use of biodegradable products — without negatively affecting productivity. In this TechRepublic Cover Story, we will discuss some reasons to implement Green IT and some ways that you can do it in your own organization.

How Green IT can save your organization money
In August 2007, the Environmental Protection Agency, in response to a request from Congress, produced a report that shocked the IT community. Among the stunning findings in the EPA's "Report to Congress on Server and Data Center Energy Efficiency" were the following statistics:

• The data center sector consumed about 61 billion kilowatt-hours (kWh) in 2006 (1.5 percent of total U.S. electricity consumption), for a total electricity cost of about $4.5 billion. This usage equals that of 5.8 million average U.S. households, or about 5% of the U.S. population.
• The EPA projects that data center energy use will double by 2011, to 100 billion kWh and $7.4 billion, requiring the construction of 10 additional power plants nationwide.
• A single fully populated rack of blade servers requires up to 20-25 kW of power to operate, plus an additional 20-25 kW for cooling and power conversion, equivalent to the peak electricity demand of about 30 typical California homes.
• Due to data retention regulations such as Sarbanes-Oxley, the amount of data that must be retained is growing at a 50% CAGR (compound annual growth rate), driving extraordinary growth in storage and its associated energy usage.

The EPA's findings weren't focused solely on the negative implications of this staggering growth in data center energy requirements. The report also included recommendations for reducing energy use and forecasts of the potential savings organizations could achieve by improving practices. The EPA defined three improvement scenarios, each of which could lead to significant savings and Green IT benefits:

• The "improved operational management" scenario offers potential electricity savings of more than 20 percent relative to current trends, assuming that better practices are applied to an existing data center.
• The "best practice" scenario, which assumes the adoption of well-known practices utilized by "green" leaders, could reduce electricity use by up to 45 percent compared to current trends.
• The "state of the art" scenario, utilizing the complete suite of efficiency practices with available technology, could reduce power consumption by 55%. This scenario assumes that only the most efficient equipment and practices are applied across the enterprise.

The magnitude of power and cooling use outlined in this report got a lot of attention from CIOs and IT analysts nationwide. Virtually every IT trade magazine and analyst has commented on these findings, and scores of internal audits and "green" initiatives were launched to try to realize the savings forecast in the EPA's recommendations.

Interestingly, it's not only potential savings that are driving CIOs to explore techniques for reducing power consumption. In a survey performed by CIO magazine, 38% of respondents cited "social responsibility," 1% higher than the 37% citing "reduced costs," and 54% said their company has environmental goals for IT that include energy efficiency. Still, the primary motivator for most CIOs is cost efficiency. Especially in the current climate, the ability to deliver more IT services with less facilities cost, less hardware acquisition investment, and less power and thermal expense is the holy grail.

So how do IT leaders integrate the desire for cost efficiency with user growth expectations and "green" corporate policies? Many organizations, from the EPA in its report to the Green Grid and Google, offer guidance on combining the three imperatives of growth, green, and cost efficiency. In fact, most make the case that if IT expects to keep growing, greening IT is the only sustainable path, and it brings with it the cost efficiencies organizations seek.

It's critical to remember that, while we've been mostly talking about data centers, Green IT encompasses both the data center and the end-user computing domain. Most experts agree that "going green" is more a matter of procedures and practices, and less about a "forklift" upgrade of expensive gear. According to the CIO Executive Board, IT leaders who want to increase their green profile while achieving savings should focus on the following areas:

In the data center, focus on:
• Hardware, such as high-density servers and more efficient storage methods, and
• Facilities, including power, cooling, and real estate.

In end-user computing, concentrate on:
• Using high-efficiency components such as thin clients, laptops, and LCD monitors,
• Usage practices, such as turning PCs off when unused and using power-saving settings such as standby, and
• Proper asset disposal, recycling components responsibly.

Google also has some high-level recommendations. On its new "Efficient Computing" page, Google presents the following tenets of its green philosophy:
1. Minimize electricity used by servers
2. Reduce the energy used by the data center facilities themselves

3. Conserve precious fresh water by using recycled water instead
4. Reuse or recycle all electronic equipment that leaves our data centers
5. Engage with our peers to advance smarter energy practices

How can IT teams take these general principles and turn them into real bottom-line impact? The EPA's three scenarios offer some great guidance. In its "improved operational management" scenario, the EPA advocates such relatively painless actions as virtualization, a 5% reduction in servers running legacy applications, and the enablement of existing power management capabilities on all computing devices. The "best practice" scenario proposes that IT teams perform a more rigorous virtualization program, shrinking the server population by one-third, and that 100% of new equipment purchases be highly efficient devices. This scenario also recommends a 50% reduction in storage devices, requiring some innovative planning to comply with corporate data retention standards. Finally, the "state of the art" scenario encourages a 66% reduction in servers through intensive virtualization, the addition of liquid cooling throughout the data center along with variable-speed cooling fans and pumps, and the eventual migration to renewable energy sources such as solar power.

Reminding us that it's not just about hardware, the Uptime Institute adds some key pointers about facilities design and efficiency. Uptime reports that 90% of the data centers it has surveyed have too much space and too much cooling capacity. It also notes that this cooling capacity is poorly implemented: 72% of cooling capacity is not directed to the computing equipment in the room. Facilities planning is clearly a key driver of savings, requiring IT teams to plan conservatively to avoid overcapacity while being prudent about expected growth trends so they don't box themselves in. Even small things, like energy-efficient lighting and vented floor tiles, make a difference.

On the desktop side, all the experts recommend a disciplined procurement program that continuously seeks the most energy-efficient end-user devices, along with responsible disposal and recycling practices. Simple procedural mandates, such as requiring all users to turn off PCs before leaving, can save dollars and reduce emissions. According to Dell, a company with 10,000 PCs could save up to $100,000 a year with simple power management strategies. While they may not have a big bottom-line impact, responsible disposal policies keep toxins like beryllium and cadmium out of the environment and further organizations' social responsibility agendas.

The movement toward Green IT is the perfect balance of pragmatism and responsibility. By adopting some of the Green IT concepts presented by the experts cited here, IT teams can keep pace with the growth their users expect, keep providing innovative IT solutions, and still be both socially responsible and cost-efficient. In fact, relative greenness has become a competitive differentiator, with everyone from Google to Microsoft to Sun bragging about their green commitments. What organization wouldn't jump at a chance to get recognition, save money, and do the right thing, all at the same time?
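Dell's 10,000-PC figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is illustrative only; the wattages, hours, and electricity rate are my own assumptions, not Dell's inputs:

```python
# Rough estimate of annual savings from powering down idle PCs.
# All inputs are illustrative assumptions, not vendor data.

def annual_pc_savings(num_pcs, idle_watts, off_watts,
                      off_hours_per_year, dollars_per_kwh):
    """Estimated (kWh, dollars) saved per year by switching PCs
    from idle to off during the given hours."""
    kwh_saved = num_pcs * (idle_watts - off_watts) * off_hours_per_year / 1000
    return kwh_saved, kwh_saved * dollars_per_kwh

# 10,000 PCs saving ~40 W each, switched off nights and weekends
# (~4,000 hours per year), at $0.10 per kWh:
kwh, dollars = annual_pc_savings(10_000, 43, 3, 4_000, 0.10)
print(f"{kwh:,.0f} kWh and ${dollars:,.0f} saved per year")
```

Even with conservative inputs, the result lands in the same six-figure range Dell cites, which is why "turn it off" is such popular low-hanging fruit.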


The greenest of the Green IT goals: Rethinking e-waste
In the town of Guiyu, on the Chinese mainland across the straits from Hong Kong, where ten years ago rice grew to feed the nearby city, ancient irrigation canals are now filled with discarded circuit boards and broken glass from pillaged CRTs. Groups of children sit in a room sorting plastic chips made from the remains of computer cases. In the same room is a plastic smelting operation, releasing clouds containing the toxin dioxin, infamous from the Agent Orange tragedy in Vietnam. The plastic that is too impure to recycle is simply dumped into the street, creating huge melted piles of PVC-laden plastic in bizarre shapes and colors. Some of the parents of these children crouch in the street with CRTs piled up in front of them, breaking off the valuable copper assembly and then dumping the remaining CRT glass in the canal or along the riverbank. Certain sections of town focus on recycling toner cartridges, leaving workers covered in black ink from head to toe and seeping chemicals into the canals and ponds that formerly watered the rice paddies. Other areas concentrate on recovering copper from wires by burning off the plastic sheathing, emitting clouds of toxic smoke that villagers and children blithely walk through. Chips and precious metals are stripped from circuit boards through the application of hydrochloric acid, which is then disposed of in the river. The same scenario occurs in scores of villages in China, India, and Pakistan, as documented in the Basel Action Network's report "Exporting Harm." The computer revolution that swept the developed world at the end of the last century left a residue of e-waste, created by accelerating product obsolescence and refresh cycles. As documented in BAN's report, the implications of this huge generation of e-waste have been disastrous for the environment of the receiving countries.
While it's true that this has created opportunities for entrepreneurs in these countries, the ordinary workers are creating their own environmental hell. The very ground they walk on in Guiyu was measured at a pH of 0, the highest level of acidity, which often burned the soles off workers' rubber boots. Water samples revealed lead levels 2,400 times greater than the World Health Organization's drinking water guidelines. Why would countries and communities expose themselves to this level of danger and pollution? Because the potential payoff is tremendous. The EPA estimates that about two and a half million tons of computer gear is disposed of every year, with about 80% of that going directly into landfills. Even the remaining 20%, or about half a million tons, is a latent gold mine; in fact, it's richer than a gold mine. Hewlett-Packard's recycling operation recovers more precious metal per ton than standard extractive mining: a mine produces six ounces of gold per ton of ore, while recycling computer parts returns 8 to 10 ounces of gold or palladium per ton. When one adds the copper recovered from burned wires and CRTs, the chips recovered for resale through acid bathing, and the plastic smelted and resold as chips or beads, the economic value of e-waste to poor countries is an irresistible temptation, regardless of the environmental consequences. Recall, though, that this shipping of e-waste to poor countries affects only the 20% of the waste we produce that escapes the landfill; the real number sent abroad is even lower, because about a fifth of that amount is recycled in the US. The other 80% of the total e-waste produced is dumped in landfills right here at home.
The injection of toxins like lead, cadmium, beryllium, and mercury into our groundwater has become a target of concern in communities nationwide, and organizations like the Silicon Valley Toxics Coalition have begun pressuring computer manufacturers and resellers to respond with more responsible and creative ideas. Many companies, from Best Buy to Dell and HP, have been responsive to these pleas and have


instituted recycling and take-back programs that could significantly reduce the amount of e-waste. HP claims that its recycling program resulted in a reduction of about 250 million pounds of trashed equipment in 2007. Many providers have upped the ante on these programs; Dell contributes recycling revenue to a national charity, and HP has instituted recycling programs in countries like Bulgaria and Turkey, which are under HP's control and comply with international environmental protection practices.

What are the lessons of this situation for IT leaders? Clearly, Green IT is about more than simply applying efficient data center practices. The impact of obsolete gear disposal, and even the reuse of consumables like toner cartridges, must be considered in any Green IT program. The CIO Executive Board, in a report on Green IT, recommends a three-step approach to dealing with e-waste responsibly:

1. Asset selection: Reducing e-waste requires IT teams to think about the energy efficiency and performance characteristics of computer hardware throughout the procurement and disposal cycle. By understanding the energy requirements of the gear they select, IT procurement agents can ensure that their equipment will both save operating costs and have longevity, so it won't be quickly replaced by more efficient gear and end up on the trash heap. Simply cascading computers throughout the organization during upgrade cycles, migrating less-capable machines to users with less complex requirements, can extend PC life and reduce disposal volume. Laptops are more efficient than desktops, but desktops typically have a longer refresh cycle, so the choice is not always obvious; IT leaders need to understand the usage profile of their community and select judiciously, balancing performance and responsibility requirements.

2. Asset usage: It's been estimated that IT contributes 2% of the planet's greenhouse gas emissions. Dell and other manufacturers state that simply turning off PCs at night can cut this number in half or better. While this doesn't directly affect disposal, when looking at Green IT as a holistic change in behavior, simple practices like this can deliver high impact. Manufacturers like Intel, with its vPro chip enhancements, are building automated power-off capabilities in at the chip level.

3. Asset disposal: Responsible disposal, through partnerships with domestic recycling partners who pledge compliance with responsible practices, has become a corporate imperative, as communities and interest groups draw attention to these practices and their global environmental impact. Kaiser Permanente, the vast health care provider, has embraced responsible disposal as a key element of its social responsibility agenda. By partnering with Redemtech, an Ohio-based recycler with a zero-landfill policy, Kaiser reduces the possibility of future liability and fulfills its green goals.

The movement towards responsible disposal has become so prominent that both the United Nations and the Environmental Protection Agency have entered the fray. Through its StEP Secretariat (StEP stands for Stop the E-waste Problem), the UN has initiated a coalition of


universities, governments, and other agencies to study and recommend solutions to the e-waste explosion. StEP has established a four-step program for solving the problem:
• Politics and legislation
• Redesign
• Reuse
• Recycle

By driving for legislation across the globe that mandates responsible, low-impact disposal and recycling of e-waste, StEP hopes to use the prestige of the UN to drive political change. The Redesign initiative focuses on encouraging the use of low-toxin designs and enhancing the capacity of components and complete products to be reused. Through its Reuse program, StEP strives to increase the percentage of obsolete devices that, for instance, are sent to poorer countries to bridge the famous "Digital Divide." Because this concept has been abused to allow the shipment of junk components to poor countries, StEP encourages standards in this arena. Finally, through its Recycling efforts, StEP endeavors to ensure that recycling efforts are standardized, monitored, and enforced.

In response to corporate interest in responsible e-waste handling policies, swift entrepreneurs have jumped into this space. Established firms like Redemtech, Kaiser's partner, offer services from refurbishment and resale through recycling and disposal. Dell and HP's efforts to offer recycling as a standard element of any computer sale have been matched across the industry. IDC, the noted IT analyst firm, has begun monitoring compliance of electronics recyclers by awarding its G.R.A.D.E. certification to recyclers who pass muster.

By applying the three-step program recommended by the CIO Executive Board, and by partnering with recyclers who comply with the "Electronic Recycler's Pledge" and are highly rated by IDC's certification, IT professionals can ensure that their discarded gear will not end up on the streets of Guiyu, melting into a pile of toxic waste and poisoning the environment.


The link between the green data center and energy consumption
This article is based on data obtained through an interview with Etienne Guerou, Vice President of Chloride South East Asia. Guerou, who has 20 years of experience in designing and building data centers, discusses the increasing importance and relevance of the green data center. You may have thought that a typical data center has to be inhumanly cold, since most data centers run at a temperature of 20 degrees Celsius or below. But why? IT professionals are often obsessed with keeping data centers at lower temperatures, yet allowing even a couple of degrees more can profoundly reduce the amount of energy required for cooling, which translates directly into cost savings. An IBM study concludes that 25 degrees is the optimal temperature for a data center. Intel recently conducted an experiment in which it cooled a data center with 900 servers, high-performance blade servers at that, with nothing more than temperate desert air of up to 90 degrees Fahrenheit. Extrapolating from the experiment, Intel estimated annual savings of approximately $2.87 million for a 10-MW data center. So rather than maintaining ever-lower temperatures, IT professionals should focus more attention on achieving better airflow to eliminate hot spots.

Green IT isn’t just a product
According to Guerou, a common mistake IT professionals make is to visualize a data center in terms of the infrastructure and the requisite hardware components and then stop there. Far fewer make the connection to the larger amount of resources that a more stringent level of availability and redundancy will require. For example, the design and equipment needed to lay even a basic power feed are intricately tied to the desired level of redundancy. These requirements could mandate a double-feeder topology, in which each feed originates from a separate power plant. This gives rise to the need for substations, separate sets of generators, and UPSs, all of which contribute to an entirely different set of power efficiency numbers. Rather than specifying an arbitrary number of nines for uptime, IT pros who work for an environmentally responsible organization should think about the kind of uptime they truly need. Where Web applications are concerned, an organization can boast reliability beyond even the magic 99.999% merely by architecting its software to fail over safely between different data centers.
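To put "nines" in concrete terms, availability percentages map directly to allowable downtime per year; here is a minimal sketch of that arithmetic:

```python
# Convert an availability percentage into allowable downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    """Minutes of downtime per year permitted at a given availability."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% uptime allows "
          f"{downtime_minutes_per_year(nines):,.1f} minutes of downtime per year")
```

The jump from 99.9% (roughly 8.8 hours a year) to the magic 99.999% (about 5.3 minutes) is exactly what drives the redundant feeds, generators, and UPSs Guerou describes, so it pays to ask how many nines you truly need.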

The future of green data centers
The greater awareness of Green IT has led to improved efficiencies in individual hardware. Also, data centers increasingly have to validate their green status as consumers become more conscious of the environment.

The obsession over temperature
Remember the first time you visited the data center? You probably brought a sweater on your subsequent trips.


Using power management techniques to green your IT department
Power management is both a discipline and an application. Companies such as Cassatt are eager to sell automated power management products to data center managers, and these products fill an important need. But before applying automated tools to the problem of power management, prudent data center managers should perform an energy use assessment and audit. Not only does this create a baseline for measuring future savings and Green IT benefits, it will also identify areas for immediate improvement. A data center design review, conducted with energy usage in mind, will often reveal simple faults that can waste large amounts of energy.

Most Green IT initiatives, from virtualization and consolidation to power management and data center redesign, have the primary goal of energy efficiency. By conserving energy in IT operations, we achieve all the meaningful benefits of an IT sustainability program: We decrease greenhouse gas emissions and our firm's carbon footprint; we responsibly utilize the resources of our data center by reducing acquisition and maintenance costs; and we save on operating expenses.

Now let's turn our attention to case studies, surveys, and reports to learn how some companies are using power management in their Green IT initiatives. These examples, along with a brief look at why some IT leaders are skeptical about power management, offer insight into how power management can lead to financial success and corporate responsibility.

In 2007, IT consulting company EDS demonstrated it was socially responsible by complying with the U.S. Environmental Protection Agency's (EPA's) ENERGY STAR specifications for reducing computer energy use. In addition to the positive message of reducing its carbon footprint and greenhouse gas emissions, EDS had another powerful incentive: The company could save about $480,000 a year. Its plan didn't involve wholesale replacement of servers or storage devices, or virtualizing and consolidating the data center. It also didn't require the company to pipe chilled water into its facility or install solar panels on the roof.

EDS simply used the existing power management capabilities of its 90,000 desktop PCs to turn off the power when idle. This "low-hanging fruit" seemed like a perfect place to gain quick dividends from an energy-efficiency project. Yet even this basic attempt to reap energy savings ran into complications. Some applications didn't respond well to being turned off and had problems coming back online; backup operations had to be rescheduled; and systems disappeared from management consoles, setting off alarms. EDS was forced to slow down and take a multiphase, conservative approach to this simple project. EDS's experience seeking energy efficiency tells us a couple of things. First, applying the most basic Green IT tactics, such as turning off the lights, can reap significant green rewards and cost savings. And, second, even the simplest tactics have complications.

Cassatt surveyed 215 IT and facilities personnel in late 2007 and early 2008 to assess their practices and opinions. While more than 40% of CIOs surveyed have an energy-efficiency mission in their organization, only 18% follow EDS's path and turn off PCs. Less than a quarter of the respondents said they use power management on their servers. And more than 40% said that under no circumstances would they consider turning off their servers -- no matter what the energy benefits.


In an EPA study of data center power usage, the organization outlined three improvement scenarios for data center managers to follow: In its improved operations scenario, applying power-saving techniques such as turning off idle computers, the EPA estimates a 20-30% reduction in energy growth trends is feasible. In its best practice scenario, the EPA suggests that by adding a few more sophisticated techniques, such as replacing both computer gear and power and thermal equipment with new, energy-efficient models, the potential savings jump to 45-70%. In its state of the art scenario, which includes applying every proven energy-saving component and practice (such as aggressive server consolidation, completely automated power management, and next-generation cooling and power-distribution gear), the EPA notes that organizations could achieve up to 80% savings.
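To see what those percentages imply in absolute terms, here is a back-of-the-envelope sketch against the EPA's projected 2011 baseline of roughly 100 billion kWh and $7.4 billion. The range midpoints and the implied electricity price are my own illustrative assumptions, not EPA figures:

```python
# Illustrative savings under the EPA's three scenarios, measured against
# the projected 2011 baseline (~100 billion kWh, ~$7.4 billion).
# Percentages are midpoints of the ranges quoted above, chosen for illustration.
BASELINE_KWH = 100e9
PRICE_PER_KWH = 7.4e9 / BASELINE_KWH  # ~$0.074, implied by the projections

scenarios = {
    "improved operations": 0.25,   # midpoint of 20-30%
    "best practice": 0.575,        # midpoint of 45-70%
    "state of the art": 0.80,
}

for name, fraction in scenarios.items():
    saved_kwh = BASELINE_KWH * fraction
    saved_dollars = saved_kwh * PRICE_PER_KWH
    print(f"{name}: ~{saved_kwh / 1e9:.0f} billion kWh, "
          f"~${saved_dollars / 1e9:.1f} billion")
```

Even the most modest scenario represents billions of dollars in aggregate savings, which explains the attention these findings received.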

Corporate Executive Board
In the Corporate Executive Board's report Green IT Initiatives, the board suggests the bulk of savings in the data center accrue from fixing obvious flaws, which include the following:
• floor layout, with ventilating cutouts pointed in the wrong direction or blocked by equipment and cables;
• inefficient lighting that wastes power and heats the room; and
• uncoordinated cooling, with cooling vents often pointed in the wrong direction.

This report, by an independent organization serving the interests of CIOs, provides a clear path to energy efficiency both in the data center and on the desktop. By renovating existing facilities to fix the flaws mentioned above, and by introducing new computing and power-thermal gear with high-efficiency ratings as they become available, the report concludes that large gains in energy efficiency can be realized. The power-off techniques adopted by EDS on the desktop are also an important component of the Corporate Executive Board's proposals.

Another interesting report comes from Pacific Gas and Electric Company (PG&E) of California, entitled High Performance Data Centers. It may seem surprising to see an electric utility guiding customers to use less of its product, but anyone who has observed the blackouts in California in recent years, or heard the debates about building coal-fired or nuclear power plants in any jurisdiction, understands why PG&E is motivated to help customers go green. In fact, 24 utilities from around the United States have created a group called the IT Energy Efficiency Coalition and have developed innovative programs to help IT become more efficient. Seattle City Light is offering a rebate to customers who install power management software, and BC Hydro is offering incentives to clients who consolidate servers. PG&E's data center guide offers technical advice and case studies in 10 categories, some of which may seem only tangentially connected to energy efficiency, such as Air Management and Humidification Controls; yet PG&E's document makes clear the connection between these strategies and the greening of IT.

In its chapter on Air Management, PG&E demonstrates that the adoption of better data center design practices, such as racking servers in a hot-aisle, cold-aisle configuration, can save up to 60% on cooling costs. The use of free outside air to cool data centers is a topic of great interest, and PG&E's report indicates that chilled air collected from the outside atmosphere, and water chilled by outside air and circulated through the data center, can reduce cooling costs by 70%. The report notes that, even in warm climates such as San Jose, nighttime chill and cooler days would enable outside "free cooling" about 35% of the time. For IT leaders and data center managers who plan to embark on an energy-efficiency program, the CIO Executive Board report and the PG&E design guidelines are foundation documents.

The Green Grid
The drive towards sustainable IT has encouraged the creation of metrics that claim to quantify energy usage and apply objective math to the measurement of data center efficiency. The Green Grid, a consortium of IT industry experts whose stated mission is to "develop standards to measure data center efficiency," has presented a series of proposals for IT facilities power measurement. The Green Grid proposes two key metrics for data center efficiency: Power Usage Effectiveness (PUE) and Data Center Efficiency (DCE). PUE is defined as:

PUE = Total Facility Power / IT Equipment Power

Total Facility Power measures the energy load of all the facilities and equipment that support the computing gear in the data center. IT Equipment Power measures only the direct load associated with computer equipment, including attached network, storage, and print devices. The formula is designed to guide data center designers and managers towards high-efficiency computing resources that require lighter support equipment. A PUE of 1 would indicate complete energy efficiency, with all power going only to computing equipment, while a PUE above 3 would indicate room for improvement. PUE and DCE, which reports the percentage of power going to computing gear, are asserted by The Green Grid as the key metrics for Green IT initiatives that want to quantify their gains. For example, a facility drawing 1,000 kW in total while delivering 400 kW to the IT load has a PUE of 2.5 and a DCE of 40%.

Even the area of Green IT metrics has generated some controversy. The Uptime Institute, another influential organization focused on efficiency, offers its own set of metrics. The argument over metrics touches on questions such as where the load is measured and how it relates to productivity or computing power. The contributions of both groups help data center managers navigate the drive towards greening the data center. The EPA and the Corporate Executive Board agree that most data centers are still a long way from needing to measure consumption precisely by the kilowatt-hour and are more likely to benefit greatly from incremental improvements.

Skepticism about power management
It's hard to blame IT leaders for being skeptical about power management. The difficulties EDS experienced with desktops are magnified in the data center. Applications and processes are running in the background, and sophisticated data integrity and fault-tolerance procedures are taking place. For most large organizations, especially
transaction-oriented businesses such as banks, there are no off-hours for servers. The idea of an unattended software agent randomly flipping the switch on their critical servers sends chills down some CIOs' spines. In IT, there are still fixed ideas about powering servers off that impede implementation of even basic power management. Many IT managers believe that powering servers off and on reduces server or disk reliability and longevity, even though IBM and Hewlett-Packard rate their servers for 40,000 on-off cycles -- many more than are likely in a five-year duty cycle. Some IT leaders worry about application availability and may not know that modern power-management software is application aware. Departments that "own" their application servers often resist active power management for many of the same reasons these users sometimes resist virtualization: They want to retain traditional control of their servers, and they don't trust automated operations that threaten to disturb their unlimited application access.
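The 40,000-cycle rating can be put in perspective with trivial arithmetic; assuming one full power cycle per day, purely for illustration:

```python
# Compare a 40,000 on-off cycle rating against a realistic duty cycle:
# even one full power cycle per day for five years barely dents it.
rated_cycles = 40_000
cycles_used = 365 * 5          # one off/on cycle per day for five years
headroom = rated_cycles / cycles_used
print(f"{cycles_used} cycles used, about {headroom:.0f}x headroom remaining")
```

In other words, the rating leaves more than twentyfold headroom over a five-year life, which undercuts the reliability objection.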

Follow a disciplined path
Don't let skepticism get in the way of at least exploring power management techniques and Green IT initiatives. Automated power management products, such as Cassatt's Active Response line, can be a key enabler for controlling the power and thermal expenditures in your data center. Before you look into these products, the first step is to understand your current situation. By tackling the fundamental issues, such as data center rack placement, cooling effectiveness, and tile layout, IT managers can gain "quick hits" that create the positive enthusiasm to move to the next level of sustainable IT. The follow-up steps, such as integrating outside air cooling or using alternative sources of energy, require more consensus and investment, and should be undertaken once the "low-hanging fruit" efforts demonstrate results. By following a disciplined path as laid out by PG&E and the Corporate Executive Board, and by measuring results using metrics offered by The Green Grid, IT leaders can make the greening of IT through power management techniques an exercise in both financial success and corporate responsibility.


Greening the data center: Deploy shared storage with the right features
With companies doing everything possible to conserve cash, conserving power has quickly become an important part of the IT portfolio. No longer is physical server sprawl an option; in terms of both hardware acquisition costs and ongoing energy and cooling costs, the "throw hardware at the problem" crowd is being replaced by people who attempt to virtualize everything and do everything possible to keep that energy bill low. The right storage solution in the data center works directly toward the green goal, particularly when the storage solution sports the right feature set. Allow me to explain.

The disk shelves themselves
Shared storage itself in the form of a SAN can help organizations reduce their carbon footprint by using less electricity. Consider this: Historically, before the days of virtualization, organizations often purchased physical servers that were built for long-term use. As such, that initial server configuration was more than likely overkill for the originally intended solution. That overengineering generally included the number of disks housed in the server. After all, even though a server was being purchased for a specific task, who knew exactly what would be required in the future? The result: In general, physical x86-based servers were horribly underutilized, both from a storage and a processing perspective. Even though the server wasn't running at full capacity, it still required power to run all of the processors originally specified, as well as the disk spindles originally included with the unit. Fast forward to today. It's now hard to find a data center that isn't using virtualization in some form in order to consolidate some of these underutilized servers. To support virtual environments, even the smallest organizations make use of SAN technology. Besides the obvious savings that come from simply having fewer servers to energize, consider the direct benefits of a SAN. In many, many cases, organizations performing server consolidation tasks can meet their overall technical needs with fewer disks in the shared storage device that powers the data center. As is the case with servers themselves, simply having fewer disks in the data center means that less power is required to run them. Today's disks are often more energy efficient than older models, too.

Thin provisioning
We're at a point now at which there are fewer disks in the data center; perhaps the disks are larger capacity, but it's likely that the total number is lower than it was when you ran all physical hardware. What could you do to lower that overall number of disks even more? The first shared storage feature that can help accomplish this task is thin provisioning. In short, with thin provisioning, you can assign individual connected servers or virtual machines enough shared space to meet today's needs, and configure these shared volumes to grow to a predetermined amount of space as the need for additional storage arises. How does this help you achieve your green goal? The one thing that hasn't changed in server and storage purchasing is the need to buy more space than you'll probably need. Adding space to an existing server has become a lot easier, but it's still a task that many don't want to have to worry about. With thin provisioning, you're able to achieve better overall storage utilization, and less oversight is necessary to make sure that individual server volumes aren't getting low on space. Better overall storage utilization = less need for additional disks and shelves = less electrical usage.
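As a rough illustration of why thin provisioning trims the physical disk requirement, here's a toy model. The volume sizes and field names are hypothetical, and real arrays allocate in extents or pages rather than whole volumes:

```python
# Toy comparison of thick vs. thin provisioning for three server volumes.
# Sizes are illustrative assumptions, not figures from the article.

def thick_utilization(volumes):
    """Thick: each server consumes its full provisioned size up front."""
    provisioned = sum(v["provisioned_gb"] for v in volumes)
    used = sum(v["used_gb"] for v in volumes)
    return used / provisioned

def thin_allocated(volumes):
    """Thin: physical disk is consumed only as data is actually written."""
    return sum(v["used_gb"] for v in volumes)

volumes = [
    {"provisioned_gb": 500, "used_gb": 120},
    {"provisioned_gb": 500, "used_gb": 80},
    {"provisioned_gb": 1000, "used_gb": 300},
]
print(f"thick utilization: {thick_utilization(volumes):.0%}")      # 25%
print(f"thin physical need: {thin_allocated(volumes)} GB vs 2000 GB thick")
```

In this sketch the thickly provisioned layout ties up 2 TB of spinning disk at 25% utilization, while thin provisioning needs only 500 GB of physical capacity to satisfy the same writes, which is exactly the "fewer disks and shelves" effect described above.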


Data deduplication
The final feature that I'm going to cover in this posting is data deduplication. Data deduplication involves eliminating redundancies in data at the storage level. Also known as single-instance storage, data deduplication can have massive benefits when it comes to the amount of space necessary to store data. Consider this: An e-mail message with a 10 MB attachment is sent to 500 users in your organization. 100 of those users save the attachment to their personal folder, which resides on your SAN. The total hit: 100 people times 10 MB = 1,000 MB, or about 1 GB. While this isn't a ton of space, repeat this process dozens or hundreds of times across the organization, and you can see how quickly space can be eaten away. Enter data deduplication. Now, although that file is stored 100 times on your SAN, your SAN is smart enough to look at the file construction and realize that there is a repeating pattern that can be stored one time, with pointers replacing the file in the other 99 locations. Again, this results in a need for fewer disks, since less disk space is required. Although it might not directly stave off over-provisioning an initial storage purchase, you might be able to avoid adding another power-hungry disk shelf to your data center.
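The single-instance idea can be sketched as a toy content-addressed store. Real SANs deduplicate at the block or file level with far more sophistication than this, but the space arithmetic from the e-mail example works out the same way:

```python
# Minimal single-instance storage sketch: content is keyed by its hash,
# so 100 identical attachments are stored once, with pointers elsewhere.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}    # digest -> content, stored exactly once
        self.pointers = {}  # path -> digest

    def save(self, path: str, content: bytes) -> None:
        digest = hashlib.sha256(content).hexdigest()
        self.blocks.setdefault(digest, content)  # keep first copy only
        self.pointers[path] = digest

    def physical_bytes(self) -> int:
        return sum(len(c) for c in self.blocks.values())

    def logical_bytes(self) -> int:
        return sum(len(self.blocks[d]) for d in self.pointers.values())

store = DedupStore()
attachment = b"x" * 10 * 1024 * 1024      # the 10 MB attachment
for user in range(100):                    # 100 users save their own copy
    store.save(f"/home/user{user}/mail/attachment.pdf", attachment)

print(store.logical_bytes() // 2**20)      # 1000 MB as users see it
print(store.physical_bytes() // 2**20)     # 10 MB actually on disk
```

The paths here are hypothetical; the point is the ratio, where 1 GB of logical data collapses to a single 10 MB instance on disk.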

If you wanted to get to the least power-hungry disk solution, you'd just buy a bunch of 1.5 TB SATA disks and stick them in a RAID 5. Obviously, this wouldn't be considered the best overall option, since disk capacity is just as important as overall disk system performance. However, getting close to that disk spindle count/capacity balance will get you a long way toward greening your data center by managing storage alone.
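For a sense of the capacity side of that spindle-count/capacity balance, here's the back-of-the-envelope math for such a SATA-heavy RAID 5. The shelf size and per-disk wattage are illustrative assumptions:

```python
# Capacity and idle-power sketch for a RAID 5 shelf of large SATA disks.
# Spindle count and wattage are assumptions chosen for illustration.

def raid5_usable_tb(disks: int, disk_tb: float) -> float:
    """RAID 5 sacrifices one disk's worth of capacity to parity."""
    return (disks - 1) * disk_tb

disks, disk_tb, watts_per_disk = 8, 1.5, 8
print(f"usable: {raid5_usable_tb(disks, disk_tb)} TB")   # 10.5 TB
print(f"spinning load: {disks * watts_per_disk} W")      # 64 W
```

Eight large, slow spindles deliver a lot of capacity per watt, but spread the same capacity across more, faster disks and both the wattage and the performance go up, which is the tradeoff the paragraph above describes.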


Greening the data center: Consolidate your servers
Server consolidation projects are being undertaken in many organizations for a variety of reasons. These kinds of projects generally have a number of aims, including: Replacing older hardware with new equipment. Achieving better overall utilization of equipment in the data center. Lowering total costs related to purchasing equipment. Consider this: Today's multicore, multiprocessor systems are a far cry from yesterday's single-core behemoths. Modern servers accomplish their workload goals using less power than their older counterparts, even when running at full bore. Further, consider the usage pattern: These days, migrating those old, single-application servers to virtual machines running on new hardware is far from uncommon. The result: A load that would have required 10, 20, or even 30 servers can now be effectively run on just two or three machines in many cases. With a ton of hypervisor solutions available out there, and with many of them being free, virtualization is the quickest way to achieve server consolidation goals. In many cases, even a one-for-one replacement of old hardware with new can reduce overall energy consumption. However, by combining the workload from so many servers onto a single unit, a massive energy savings can be realized. Obviously, it's not quite as simple as throwing in a new server, moving a bunch of workloads, and heading home for the weekend. In order to adequately support so many workloads on a single virtual host, significant storage space is often necessary. But even with the added power requirements of the SAN, most large server consolidation projects still realize major power savings. Power savings alone is a great reason to undertake server consolidation projects, but there are other energy factors at work. Take cooling, for example. Is it cheaper to cool 30 old, inefficient servers or two or three new servers and a SAN?
Unless you bought a SAN that takes up an entire room, I'm willing to bet that cooling needs can be dramatically reduced. Consolidation projects don't have to stop at the data center. In fact, a case can be made for increasing energy usage in the data center. Think virtual desktop infrastructure (VDI). By deploying low-power terminals throughout the organization and deploying a few more energy-efficient servers in the data center, organizations can realize similar green gains in the desktop infrastructure. In short, total cost of ownership for the desktop infrastructure can be reduced, which includes significant energy savings. Suppose, for example, that you deploy 200 thin client terminals on user desktops, replacing 200 energy-inefficient thick PCs. Suppose it takes five servers to support these 200 clients. Doing the math, you'll find that the total energy consumption of the five servers and 200 thin clients is much, much less than the energy used by 200 regular PCs. You also get some of the other benefits of VDI, such as quick desktop deployment, thus lowering overall management costs, too. Server consolidation projects are becoming more and more common. If your organization has yet to jump feet-first into server virtualization and consolidation, it's time to start looking. Further, consider the possible benefits of VDI. While not a fit for every organization, VDI can significantly lower costs, both direct and ongoing.
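Here's what "doing the math" for that VDI scenario might look like. The wattage figures are illustrative assumptions, not numbers from any study, and real savings depend on duty cycles and idle power:

```python
# Back-of-the-envelope energy comparison for the 200-seat VDI scenario.
# Assumed loads: 150 W per thick PC, 15 W per thin client, 400 W per server.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, assuming always-on equipment

def annual_kwh(watts: float, count: int) -> float:
    return watts * count * HOURS_PER_YEAR / 1000

thick = annual_kwh(150, 200)                    # 200 conventional PCs
vdi = annual_kwh(15, 200) + annual_kwh(400, 5)  # 200 thin clients + 5 servers

print(f"thick PCs: {thick:,.0f} kWh/yr")
print(f"VDI:       {vdi:,.0f} kWh/yr")
print(f"savings:   {1 - vdi / thick:.0%}")
```

Even with the five supporting servers drawing far more than any single desktop, the fleet of low-wattage terminals dominates the total, and under these assumptions the VDI configuration uses roughly a sixth of the energy of the thick-PC fleet.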


Reap the Green IT benefits of thin client computing
In large, IT-intensive enterprises, the value of desktop and network computing (in the form of enhanced productivity and innovative capabilities) is unquestioned, but that doesn't negate the difficulties of managing a huge, diverse fleet of PCs. The constant upgrades to PC hardware, applications, and operating systems make the governance and decision process more complex. For instance, how do IT leaders know which upgrades are critical to user service and competitiveness and which can be skipped or selectively applied? With the current focus on global warming and Green IT, the question of PC deployment becomes even more challenging. Many users never turn off their PCs or monitors or apply the power-saving capabilities of their desktops, which complicates the drive towards IT sustainability. The bandwidth required to serve applications to all these PCs forces IT departments to buy more servers, network gear, and power and cooling equipment to support it. Security challenges, such as spyware, virus definitions, and firewall signature maintenance, add to the burden as well.

In an attempt to deal with the desktop computing model's shortcomings, many manufacturers offer the thin client computing model. In this model, applications are deployed, managed, and supported at the server level, and the user attaches to the network with a specialized thin client device. These thin client devices are typically specially designed, sealed "black boxes," containing only the firmware and I/O ports required to connect to the monitor, mouse, keyboard, and network. The boxes are designed to exchange only keystrokes, screen refreshes, and mouse clicks with the application residing on the server; the application runs on the server and feeds screen refreshes back to the thin client. For veterans of the mainframe era of computing, this model should be familiar, as it's an update of the time-sharing concept common in the green screen days.

Benefits of thin client computing
The benefits of thin client computing are typically categorized in the following three areas:
Cost efficiency: According to a study by IDC, users of thin clients (when compared to full PC users) saw a decline in hardware and software costs of 40% and a reduction in IT operations costs of 29%. IDC found that annual hardware procurement costs dropped from $475 for a PC-based desktop to $285 per thin client, and operating expense for support and maintenance dropped from $498 per PC to $354 per thin client. IDC also found that IT worker productivity rose by 56%, due to fewer trouble calls and hardware repairs for thin clients vs. PCs. The Green IT benefits of the thin client model follow directly: The tiny firmware boxes have no moving parts and draw significantly less power than a PC. In a study by Thin Client Computing, thin clients drew an average of 10 watts, while PCs drew an average of 69 watts. According to this study, an enterprise with 100 PCs could save $6,000 in electricity costs annually by migrating from PCs to thin clients.
Security: The thin client computing model is inherently more secure, since the applications and the computing power are all housed in the data center, with its strict rules and disciplines for change control and application installation and revision. With no disk access to install applications, transfer data, or introduce malware, thin clients are a perfect fit for many organizations' strict security requirements.
Manageability: The migration to thin clients (which don't require the constant patching, configuring, and updating of PCs) can significantly reduce support and maintenance requirements. When virus and malware protection and hardware maintenance are considered as well, the enhanced manageability of thin clients becomes undeniable.

As noted, the thin client computing model (which is sometimes referred to as server-based computing since the application and all resources are accessed from the server) resembles a modern version of the classic green screen time-sharing systems, which were popular in the mainframe days. Major manufacturers' current thin client offerings are much advanced from the text-only, monochrome displays typical of mainframe systems and have evolved significantly from the early Citrix and Microsoft offerings (i.e., Citrix's early MetaFrame server and Microsoft's Terminal Server). Many thin clients now come with embedded Windows or Linux operating systems that allow them to display full desktop graphics and offer the familiar UI users expect. Manufacturers such as Hewlett-Packard (through its acquisition of thin-client veteran Neoware) offer thin clients in a laptop configuration that lacks hard drives and USB ports and uses wireless connections to access applications from the company network.

Virtualization has also swept the thin client market. By partitioning a single server into multiple virtual machines, IT teams can give thin client users the experience of a full PC, with their own virtual drives and devices, in a virtual instance. Hewlett-Packard and NEC have partnered with VMware to offer specialized systems to help manage virtualized desktop environments based on thin clients. NEC's Virtual PC Center is a virtualized server appliance with pre-configured VMware and a proprietary management system aimed at thin-client implementations.

Potential pitfalls of a thin client migration
Migrating to thin clients seems appealing based on the benefits outlined above, but there are risks and pitfalls to consider, which include the following:
No backup version: While a PC can have a backup version of the application installed locally so users can work in case of a network outage, thin clients don't have this capability. When the network is down in a thin client environment, work comes to a screeching halt.
Network bandwidth and server capacity issues: While network bandwidth demands may be reduced because thin clients are sending only keystrokes, mouse clicks, and screen refreshes, it's critical to remember that many concurrent application loads and screen refreshes can lead to spikes. Also, contention over network resources can still occur. Many IT organizations find that their requirement for network capacity is not diminished by thin client migration and, in fact, may go up. In virtualized thin client environments, in which each thin client is granted a "slice" of a virtual machine in order to replicate a complete PC experience, additional server capacity to service all these virtual machines must be considered. If the user's virtual machine goes down in this one-slice-per-thin-client scenario, the user is down until an IT pro can locate and repair "their" virtual machine.
Directly attached peripheral device issues: Directly attached peripheral devices (printers and USB devices) can be problematic in a thin-client environment. Many thin clients lack USB ports (or allow administrators to shut them off) and often lack the capacity to install specific device drivers, especially for unique devices.
Psychological hurdles: One of your biggest challenges may be changing users' mindsets. Some users might vehemently resist losing "their" PC, on which they've installed their favorite games or screensavers and which has become as personal to them as the photos on their office wall.

Planning and implementing a thin client migration
For organizations that are driving towards Green IT and sustainability, or that desire the manageability and security benefits of thin client computing, experts recommend that a thin client migration include the following activities:
A thorough assessment: Thin client advocates recommend that implementations begin with a complete understanding of the application portfolio running on the desktops under consideration for migration. It's key to inventory printers and peripherals and to understand how much disk storage each user would typically require. And, it's important to understand the network protocols in use across the infrastructure, as many thin clients utilize specific protocols (e.g., Microsoft's Remote Desktop Protocol) to exchange data with the client. It's also critical to understand the user community's characteristics; for instance, are they high-bandwidth "knowledge workers" who use their computers all day every day or low-intensity occasional users? Many thin client planners go a step further and perform individual performance monitoring to assess the usage characteristics of each thin client recipient.
A meticulous plan: Once the assessment is complete, planners must design a server-based computing scenario that incorporates the disk storage, printer access, and server or virtual machine architecture required to support the population's thin clients. Some vendors offer tools, such as HP's Sizer for VMware, that can assist in server or virtual machine sizing. Planners must also consider redundancy, as thin clients are dependent on the server environment.
A Proof of Concept (POC): Thin client providers universally agree that a thin client POC is essential. All the sizing and planning described above are by nature estimations, and an actual, controlled implementation on a selected population is required to test assumptions and learn how server-based computing works in your environment. Experienced technicians recommend analyzing POCs to look for server bottlenecks, network bottlenecks, and virtual machine configuration problems. By using built-in Microsoft tools such as perfmon or third-party tools, IT pros can tune the server, virtual machine, or network to assure a successful migration.
Selective migration: Most analysts agree that thin client computing is not suitable for every desktop in the organization. Selective migration to a carefully evaluated group of users -- usually in a staggered manner to ensure that issues can be resolved group by group -- is also suggested.

Enhanced Green IT and ROI
For organizations wishing to save acquisition and operating costs, to enhance security and manageability, and to promote Green IT and sustainability, thin client computing is a key component of the puzzle. Applying a selective and rigorous methodology to the migration to thin clients will position IT teams to reap the enhanced Green IT and ROI benefits that this computing model offers.


Investing in virtualization has Green IT payoffs
Analysts agree that one of the key enablers of Green IT is virtualization. In order to save money on facilities, power, cooling, and hardware, analysts say moving to a virtual data center is a fundamental first step. By utilizing the untapped processing power of today's high-power servers and storage devices, IT teams can deliver the same, or improved, performance with reduced operating expenses, a smaller data center footprint, and significantly curtailed greenhouse gas emissions. Even in these fragile economic times, CIOs are investing in virtualization. A 2008 survey from CIO Research indicates that 85% of CIOs surveyed have implemented virtualization in the data center, and 81% believe that their virtualization efforts have resulted in significant savings. Apart from the savings, and the corporate responsibility benefits from a greener IT profile, CIOs cite other important benefits, which include simplified maintenance, improved disaster recovery plans, and the ability to provision systems and new applications more quickly. While the green benefits of virtualization are acknowledged, the survey also revealed some surprising pitfalls amidst the virtualization euphoria. 42% of CIOs surveyed said that political and organizational challenges are as big a problem as technical issues. Many CIOs also noted that IT teams are still operating in silos, creating integration difficulties that can impede virtualization success.

Benefits and challenges of virtualization
So how can organizations achieve the stellar returns associated with virtualization, both in cost efficiency and in terms of Green IT benefits, while avoiding the pitfalls? Let's analyze the key benefits and challenges of virtualization.

The most obvious benefit is cost savings in hardware acquisition. Because virtualization allows enterprises to buy and support fewer physical servers, hardware costs in the virtualized IT organization decrease substantially -- up to 50% according to some analysts. Maintenance costs can decline in parallel. Facilities savings are also significant, as reduced data center footprints result in savings in real estate, power, and cooling expenses. Many organizations have utilized virtualization to avoid building new data centers and to shrink the footprint of their existing data centers by up to 60%.

Virtualized IT environments enable enhanced backup and data recovery operations by facilitating automated failover. These environments can also help speed development efforts by making it easier to bring up development servers through the use of template-driven provisioning.

Every silver lining has a cloud, and virtualization is no exception. As previously mentioned, there are often numerous political and organizational challenges related to virtualization. For instance, users often become accustomed to having their own dedicated servers that they can control and modify according to their schedule and whim. These "server huggers," as they are often known in the virtualization community, can be resistant to the idea of migrating their applications to shared servers and often have significant political clout.


As anyone who has undertaken a virtualization project quickly discovers, many applications and hardware devices are not amenable to easy virtualization; therefore, a detailed assessment of applications and hardware in use is a key prerequisite to successful virtualization. Even if virtualization is technically feasible, many vendors offer limited support for virtual instances of their software, and some even refuse to honor warranties and maintenance contracts if their apps are running in a virtual environment. Savings can be jeopardized by application vendor licensing policies, which in many cases have not caught up with the surge in virtualization. As popular as virtualization has become, it’s still a relatively new technology, and finding qualified technicians and support organizations can be difficult. Finally, the virtualization landscape is still in flux. While VMware is the recognized leader in virtualization software at the moment, new products from Microsoft, Xen, and some upstart providers can drive the market to evolve quickly and unpredictably, threatening investments with obsolescence.

Going down the virtualization path
With all these factors to consider, how can IT organizations proceed down the virtualization road while avoiding the potholes? I suggest following this path:
• Develop a Green IT business case: Because more IT leaders replied to the CIO survey that they were motivated by green initiatives than by cost efficiency, putting a Green IT spin on your virtualization project is a key success factor. The business case is critical not only because it outlines the size, scope, and ROI of the virtualization effort, but also because it sets the stage for the critical consensus building required. It's a lot harder for server huggers to hang onto their individual machines when the company has a strong social responsibility initiative and the ROI is compelling to the CFO and the rest of the organization. In these constrained economic conditions, persuading your organization to invest in a new approach to IT requires a set of clearly stated and supportable expectations, both green and economic. By estimating the number of servers to be virtualized, the percentage of consolidation expected, the ultimate potential savings, and the Green IT impact, and by demonstrating that you've thought through the implementation details and pitfalls, you become much more likely to build the executive support that a complex project such as virtualization requires.
• Perform a virtualization assessment: As noted, not all applications and devices are subject to being virtualized. Many organizations lack a complete hardware and software inventory, and few have performed the in-depth utilization and criticality analysis that virtualization requires. Even fewer have analyzed their current power and thermal status, which is critical for setting green goals. Don't underestimate the complexity and resource-intensive nature of this exercise; a deep understanding of the organization's hardware, software, and operating expenses is a critical success factor and often requires significant digging and analysis to deliver. Ask yourself these questions:


- Is your organization delivering overcapacity, overpowering, and overcooling to the data center, leading to a significantly larger carbon footprint than necessary?
- Which applications are amenable to virtualization?
- How many multi-core machines are available to become part of the virtual environment?
- Will the virtualization project affect vendor maintenance contracts or application warranties?
The more completely your team can answer these questions, the more likely your chances for success.
• Select the appropriate hardware and software: Once you understand the current situation, you must determine how you'll approach your virtualization project. For instance:
- Will you utilize only existing servers or purchase new energy-efficient gear?
- Will you virtualize storage at the same time or save that for a later effort?
- Will you select VMware due to its market leadership, Xen because of its open source heritage, or Microsoft because of its existing relationships and applications?
As in every software and hardware selection effort, a structured approach to decision making, with a disciplined analysis of pros and cons, will lead to a better outcome.
• Define your virtualization migration path: Will you follow an incremental path to virtualization, starting small and spreading your program out over months or years, or will you attempt to achieve the cost savings and Green IT benefits quickly by migrating at a rapid rate? Will you migrate critical applications first or test your theories and plans on low-impact apps to be sure you understand the consequences before completely jumping in? Defining your plans before you begin the migration activities ensures that you think about the risks and implications before you plunge in and helps persuade your executive sponsors that you're taking a prudent approach to virtualization.

• Perform a Proof of Concept (POC) project: As with any major IT project, you can't know the hidden difficulties and gotchas before you attempt implementation. By learning your lessons in a POC project, you can protect the actual migration from unforeseen risks and difficulties, and learn how to deal with your complex new virtual environment before you migrate production systems. Demonstrating some "quick hit" cost savings and Green IT benefits can go a long way toward persuading doubters to jump on the bandwagon.
• Develop operational and maintenance procedures: Often left to last, operational and maintenance procedures are critical to the success of your virtual environment; these procedures should be designed before the implementation, not afterwards. As previously noted, many vendors and outsourced service providers lack familiarity with virtual environments, or have restrictive guidelines for honoring contracts and SLAs when applications are virtualized. In addition, qualified internal resources can be rare, and existing staff members often require specialized training to get up to speed. Think through these issues up front. Remember that one major disruption in your new virtual environment can cause a massive loss of faith among your user community.
• Migrate to a virtual environment: According to industry research group Info-Tech, the typical virtualization implementation takes about seven months, including the POC. Create a detailed project plan that defines the tasks and steps required to perform your migration. Think through your back-out plan in case there are unforeseen difficulties, and test every element of your consolidation as you go to ensure no unpleasant surprises for your users.
• Measure your Green IT benefits and savings: Be sure to perform an "after-action report" to compare your projected benefits, both economic and Green IT, to your actual achievements. These lessons are essential, as virtualization is an ongoing program that will be continually applied as new applications come online, more energy-efficient hardware becomes available, and new developments in the virtualization market compel additional efforts.
• Continue to virtualize and consolidate: Once you perform a successful virtualization, codify your learnings and processes so that subsequent efforts can continuously gain in efficiency. One of the key observations of a Green IT migration is that staying green is as challenging as getting green, so IT teams need to learn their lessons, stay engaged, and not allow their green investment to dissipate. Create processes that enable user groups to request virtualization, build templates to speed provisioning, and create "run books" that document your operational policies so they're not dependent on the expertise of individuals.
The savings and Green IT benefits of virtualization are achievable, as evidenced by the great majority of CIOs who express high satisfaction in their virtualization programs. By following a structured approach, such as the one recommended above, organizations can integrate virtualization into the fabric of their IT practices and gain efficiencies on a continual basis.

Copyright ©2009 CNET Networks, Inc., a CBS Company. All rights reserved. TechRepublic is a registered trademark of CNET Networks, Inc. CNET Networks, Inc., 235 Second Street, San Francisco, CA 94105 U.S.A.
