Incentives for Energy Efficient Data Centers
March 1, 2007
William Tschudi, email@example.com, 510-495-2417

LBNL Project Staff: Bill Tschudi, Dale Sartor, Steve Greenberg, Tim Xu, Evan Mills, Bruce Nordman, Jon Koomey, Paul Mathew, Arman Shehabi
Subcontractors: Ecos Consulting, EPRI Solutions, EYP Mission Critical Facilities, Rumsey Engineers, Syska & Hennessy

Data Center Research Roadmap
A "research roadmap" was developed for the California Energy Commission, outlining key areas for energy efficiency research, development, and demonstration.

Data Center research activities
– Benchmarking and 22 data center case studies
– Self-benchmarking protocol
– Power supply efficiency study
– UPS systems efficiency study
– Standby generation losses
– Performance metrics (computations/watt)
– Market analysis

LBNL Data Center demonstrations
– "Air management" demonstration (PG&E)
– Outside air economizer demonstration (PG&E): contamination concerns, humidity control concerns
– DC powering demonstrations (CEC): facility level, rack level

Case studies/benchmarks
– Banks/financial institutions
– Web hosting
– Internet service provider
– Scientific computing
– Recovery center
– Tax processing
– Storage and router manufacturers
– Others

IT equipment load density
Figure: IT equipment load intensity (watts/sq. ft.) for the 23 benchmarked data centers; the 2005 benchmarks averaged ~52 W/sq. ft., versus ~25 W/sq. ft. for the 2003 benchmarks.

Benchmarking energy end use
Figure: Electricity flows in data centers. Utility power (480 V) feeds the HVAC system, lights and office space, and the UPS; the UPS feeds PDUs serving the computer racks, with backup diesel generators behind the supply. (UPS = Uninterruptible Power Supply; PDU = Power Distribution Unit)

Overall power use in Data Centers
Figure courtesy of Michael Patterson, Intel Corporation.

Data Center performance differences
Figure: Variation in data center energy end uses; each facility's total energy is broken down into servers, DC equipment, total HVAC, UPS losses, lighting, and other, shown as a percentage of total energy use for each benchmarked facility and for the average.

Performance varies
The relative percentage of energy actually doing computing varied considerably.
Figure: End-use breakdowns for two example centers. In one, computer loads were 67% of total energy, with HVAC air movement 7%, HVAC chiller and pumps 24%, and lighting 2%. In the other, the server load was 51%, data center CRAC units 25%, cooling tower plant 4%, electrical room cooling 4%, office space conditioning 1%, data center lighting 2%, and other 13%.

Percentage of power delivered to IT equipment
Figure: IT equipment load index for the benchmarked centers. All values are shown as a fraction of the respective data center's total power consumption; values ranged from 0.33 to 0.75, with an average of 0.49.

HVAC system effectiveness
We observed a wide variation in HVAC performance.
Figure: HVAC effectiveness index (ratio of IT equipment power to HVAC power) for the benchmarked centers.

Benchmark results helped to find best practices
The ratio of IT equipment power to total power is an indicator of relative overall efficiency. Examination of individual systems and components in the centers that performed well helped to identify best practices.
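To make the two benchmark indices above concrete, here is a minimal Python sketch of how they are computed. The kW figures are hypothetical placeholders, not measured values from the benchmarking study.

```python
# Hypothetical end-use measurements for one data center, in kW.
# Illustrative values only; not benchmark data from the LBNL study.
it_equipment = 500.0    # servers, storage, and network gear
hvac = 350.0            # chiller plant, pumps, cooling towers, CRAC units
ups_losses = 90.0       # conversion losses in the UPS/PDU chain
lighting_other = 60.0   # lighting, office space, miscellaneous

total = it_equipment + hvac + ups_losses + lighting_other

# IT equipment load index: fraction of total facility power that reaches
# the IT equipment (the benchmarked centers averaged roughly 0.49).
it_load_index = it_equipment / total

# HVAC effectiveness index: ratio of IT equipment power to HVAC power
# (higher is better; the benchmarked centers varied widely).
hvac_effectiveness = it_equipment / hvac

print(f"IT equipment load index: {it_load_index:.2f}")       # 0.50
print(f"HVAC effectiveness index: {hvac_effectiveness:.2f}")  # 1.43
```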
Best practices topics identified through benchmarking

HVAC: air delivery
– Air management
– Air economizers
– Humidification controls alternatives
– Centralized air handlers
– Low pressure drop air distribution
– Fan efficiency

HVAC: water systems
– Cooling plant optimization
– Free cooling
– Variable speed pumping
– Variable speed chillers
– Direct liquid cooling
– Heat recovery

Facility electrical systems
– UPS systems
– Self generation
– AC-DC distribution
– Standby generation

IT equipment issues
– Power supply efficiency
– Sleep/standby loads
– IT equipment fans

Cross-cutting / misc.
– Motor efficiency
– Right sizing
– Variable speed drives
– Lighting
– Maintenance
– Commissioning/continuous benchmarking
– Redundancies
– Method of charging for space and power
– Building envelope

Design guidelines
– Design guidelines for 10 "Best Practices" were developed in collaboration with PG&E
– Guides available through PG&E's Energy Design Resources website

Design guidance: a web-based training resource
– A web-based training resource is available at http://hightech.lbl.gov/dctraining/TOP.html

Performance metrics
– Couple existing computing benchmark programs with energy use
– Computations/watt (similar to mpg)
– Energy Star interest
– Jon Koomey led the effort to establish a first protocol

Encourage improved "air management"
Goal: obtain better cooling and energy savings through improvements in air distribution.
Concepts:
– Better isolation of hot and cold aisles will improve temperature control and allow air system optimization
– Airflow (fan energy) can be reduced if air is delivered without mixing
– Air systems and chilled water systems operate more efficiently at higher temperature differences
– Temperatures in the entire center can be raised if mixing is eliminated; it may be possible to cool with tower water rather than chillers

Isolation scheme: cold aisle isolation
Isolation scheme: hot aisle isolation

Fan energy savings: 75% (a fan-law sketch appears after the measurement slides below)
– If there is no mixing of cold supply air with hot return air, fan speed can be reduced
– Temperature control can be improved
Figure: Cold aisle temperatures (NW, PGE12813), low/medium/high sensors, June 13-16, 2006, across the baseline, setup, alternate 1, and alternate 2 configurations.

Better temperature control allows raising the temperature in the entire data center
Figure: The same cold aisle temperature data shown against the ASHRAE recommended range; with the aisles isolated, the temperature ranges fall within the recommended band.

Incentives for outside air economizers
Issue:
– Many data centers are reluctant to use economizers
– Concerns are outdoor pollutants and humidity control
Incentive strategy:
– Encourage use of outside air economizers where the climate is appropriate
– Address concerns: contamination/humidity control
– Quantify energy savings benefits

Overcoming barriers
– Identify potential failure mechanisms
– Address contamination in data centers
– Address humidity control
– Illustrate that contamination is within guidelines or easily controlled
– Case studies of economizers in data centers

Particle bridging
– The only documented pollutant problem
– Deposited particles bridge isolated conductors
– Increased RH causes particles to absorb moisture
– Particles dissociate and become electrically conductive
– Causes current leakage
– Can damage equipment

LBNL measurements
Features of the study:
– Measurements taken at eight locations
– Approximately week-long measurements
– Before-and-after capability at three centers
– Continuous monitoring equipment in place at one center (data collection over several months)
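The 75% fan-energy savings cited in the air-management slides follows from the fan affinity laws. A minimal sketch, assuming fan power varies with the cube of airflow and ignoring changes in static pressure and fan/motor/drive efficiency; the 0.63 flow fraction is an illustrative assumption, not a measured value.

```python
# Fan affinity law sketch: fan power scales roughly with the cube of airflow.
# Assumed numbers for illustration; real savings depend on the system curve
# and on fan, motor, and drive efficiencies.
baseline_flow = 1.00   # normalized airflow with hot/cold air mixing
reduced_flow = 0.63    # hypothetical airflow once the aisles are isolated

power_ratio = (reduced_flow / baseline_flow) ** 3   # ~0.25
savings = 1.0 - power_ratio                         # ~0.75

print(f"Fan power falls to {power_ratio:.0%} of baseline")
print(f"Fan energy savings: {savings:.0%}")
```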
Findings
– Water-soluble salts in combination with high humidity can cause failures
– Static electricity can occur with very low humidity
– The new ASHRAE particle limits are drastically lower than manufacturer standards
– Particle concentrations (with no economizer) are typically an order of magnitude lower than the new ASHRAE limits
– Economizers, without other mitigation, cause particle concentrations to reach the new ASHRAE limits
– Current filters are only Class II, 40% efficiency

DC powering data centers
Goal: show that a DC system could be assembled with commercially available components and measure actual energy savings, as a proof-of-concept demonstration.

Data Center power conversions
Figure: AC voltage conversions. Incoming AC passes through the UPS (rectifier, battery/charger, inverter, with bypass) and then through a multi-output AC/DC power supply (PWM/PFC switcher) producing regulated and unregulated 12 V, 5 V, and 3.3 V rails; DC/DC voltage regulator modules then feed the processor (1.1-1.85 V), memory controller, SDRAM, graphics controller, internal and external drives, and I/O.

Prior research illustrated large losses in power conversion
Figure: Factory measurements of UPS efficiency (tested using linear loads) for flywheel, double-conversion, and delta-conversion UPS units versus percent of rated active power load, alongside measured efficiency of power supplies in IT equipment versus percent of nameplate power output.

UPS draft labeling standard
– Based upon a proposed European standard
– Possible use in incentive programs

Included in the demonstration
– Side-by-side comparison of a traditional AC system with the new DC system: facility-level distribution and rack-level distribution
– Power measurements at conversion points
– Servers modified to accept 380 V DC
– Artificial loads to more fully simulate a data center

Additional items included
– Racks distributing 48 V to illustrate that other DC solutions are available (no energy monitoring was provided for this configuration)
– DC lighting

Typical AC distribution today
Figure: 480 VAC feeds the UPS (AC/DC and DC/AC conversion) and PDU; each server's power supply unit converts AC to 12 V bulk DC, and voltage regulator modules (VRMs) step that down to the legacy voltages (12 V, 5 V, 3.3 V) and silicon voltages (1.8 V, 1.2 V, 0.8 V).

Facility-level DC distribution
Figure: 480 VAC is rectified once to 380 VDC by a DC UPS or rectifier; the server power supply converts 380 VDC directly to 12 V bulk DC for the VRMs, eliminating the UPS inverter stage and the server power supply's AC/DC front end.

Rack-level DC distribution
Figure: The conventional 480 VAC UPS and PDU are retained, but an AC/DC converter at the rack produces 380 VDC that is distributed within the rack to DC/DC server power supplies feeding the VRMs.

AC system loss compared to DC
Figure: Side-by-side comparison of the conventional AC path (UPS, PDU, server AC/DC power supply) and the 380 VDC path (rectifier or DC UPS, server DC/DC power supply); measured improvements of 7-7.3% and 2-5% were observed at the compared conversion points.

Implications could be even better for a typical data center
– Redundant UPS and server power supplies operate at reduced efficiency
– Cooling loads would be reduced
– Both UPS systems used in the AC base case were "best in class" systems and performed better than benchmarked systems; efficiency gains compared to typical systems could be higher
– Further optimization of conversion devices/voltages is possible
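As a back-of-envelope illustration of why removing conversion stages helps, the sketch below chains stage efficiencies along the AC and facility-level 380 VDC paths shown above. All stage efficiencies are assumed round numbers chosen for illustration, not values measured in the demonstration.

```python
# Cascaded conversion efficiency comparison (illustrative numbers only).
from math import prod

# Conventional AC path: double-conversion UPS, PDU/transformer, and the
# server power supply's AC/DC front end (downstream VRMs are common to
# both paths and omitted).
ac_path = {
    "UPS (double conversion)": 0.92,
    "PDU / transformer": 0.98,
    "Server power supply (AC to 12 V DC)": 0.90,
}

# Facility-level 380 VDC path: a single rectifier stage plus the server
# power supply's DC/DC front end.
dc_path = {
    "Rectifier / DC UPS (AC to 380 VDC)": 0.96,
    "Server power supply (380 VDC to 12 V DC)": 0.92,
}

ac_eff = prod(ac_path.values())   # ~0.81
dc_eff = prod(dc_path.values())   # ~0.88

print(f"AC path end-to-end efficiency: {ac_eff:.1%}")
print(f"DC path end-to-end efficiency: {dc_eff:.1%}")
# Comparable in magnitude to the 7-7.3% improvement reported above.
print(f"Difference: {dc_eff - ac_eff:.1%}")
```

The point is structural: the DC path has fewer lossy stages, so even modest per-stage efficiency gains compound into a measurable end-to-end improvement.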
Industry Partners in the Demonstration
Equipment and services contributors:
– Alindeska Electrical Contractors
– APC
– Baldwin Technologies
– Cisco Systems
– Cupertino Electric
– Dranetz-BMI
– Emerson Network Power
– Industrial Network Manufacturing (IEM)
– Intel
– Nextek Power Systems
– Pentadyne
– Rosendin Electric
– SatCon Power Systems
– Square D/Schneider Electric
– Sun Microsystems
– UNIVERSAL Electric Corp.

Other firms collaborated as stakeholders:
– 380voltsdc.com
– CCG Facility Integration
– Cingular Wireless
– Dupont Fabros
– EDG2, Inc.
– EYP Mission Critical
– Gannett
– Hewlett Packard
– Morrison Hershfield Corporation
– NTT Facilities
– RTKL
– SBC Global
– TDI Power
– Verizon Wireless

Picture of demonstration set-up (see video for more detail)

DC power: next steps
– DC power pilot installation(s)
– Standardize the distribution voltage
– Standardize DC connectors and power strips
– Server manufacturers develop a power supply specification
– Power supply manufacturers develop prototypes
– UL and communications certification

Website: http://hightech.lbl.gov/datacenters/

Discussion/Questions?