Providing QoS with Virtual Private Machines
Kyle J. Nesbit, James Laudon, and James E. Smith

Motivation for QoS

- Multithreaded chips
  - Resource sharing → higher utilization
    - E.g., Niagara
  - Inter-thread interference → hurts soft real-time applications
- Applications
  - Cell-phones and game consoles
  - Scheduling and synchronization
  - Hosting services
  - Fine-grain parallel applications
  - Server consolidation

QoS Objectives

- Isolation
- Priority
- Fairness
- Performance
- Objectives are combined
  - E.g., isolation and performance

QoS Framework

- Separation of objectives, policies, and mechanisms (see the sketch below)
  - Well-structured solutions

[Figure: Four processors (Proc. 1 to Proc. 4), each with a private L1 cache, share an interconnect (bandwidth K), an L2 cache (capacity C), a memory controller (bandwidth L), and main memory (capacity M). The objectives drive a global policy, which directs a local policy and the mechanisms at each shared resource.]
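
As a rough illustration of this separation (the interfaces and names below are assumptions made for the example, not anything defined in the papers), the C sketch models the three layers: hardware mechanisms expose per-thread allocation registers on each shared resource, a local policy programs one resource, and a global policy applies one assignment, derived from the objectives, across all shared resources.

/* Illustrative sketch only: objectives/policies/mechanisms as three C layers. */
#include <stdio.h>

#define NUM_THREADS 4

/* Mechanism layer: a shared resource exposes per-thread allocation registers,
 * expressed as fractions of its bandwidth or capacity (K, C, L, or M). */
typedef struct {
    const char *name;
    double alloc[NUM_THREADS];   /* fraction of the resource per thread */
} resource_mechanism;

/* Local policy layer: program the allocation registers of one resource. */
static void local_policy_program(resource_mechanism *r,
                                 const double share[NUM_THREADS]) {
    for (int t = 0; t < NUM_THREADS; t++)
        r->alloc[t] = share[t];
}

/* Global policy layer: apply a single per-thread assignment, chosen to meet
 * the objectives, to every shared resource in the machine. */
static void global_policy_apply(resource_mechanism *rs, int n,
                                const double share[NUM_THREADS]) {
    for (int i = 0; i < n; i++)
        local_policy_program(&rs[i], share);
}

int main(void) {
    resource_mechanism rs[] = {
        { "interconnect BW (K)",      { 0 } },
        { "L2 cache capacity (C)",    { 0 } },
        { "memory controller BW (L)", { 0 } },
        { "main memory capacity (M)", { 0 } },
    };
    double share[NUM_THREADS] = { 0.25, 0.25, 0.25, 0.25 };  /* arbitrary demo */

    global_policy_apply(rs, 4, share);
    for (int i = 0; i < 4; i++)
        printf("%s: thread 0 gets %.0f%%\n", rs[i].name, 100.0 * rs[i].alloc[0]);
    return 0;
}

Because the objective-to-allocation translation lives only in the global policy, local policies and mechanisms do not need to change when the objectives do.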

Local Policy: Resource-Directed

- Isolation: service allocations guarantee a minimum service
- Priority: a priority vector assigns relative service
- Work-conserving policies: unallocated or unused service is redistributed rather than wasted (see the sketch below)
  [Diagram: per-thread service bars distinguishing allocated vs. unallocated and consumed vs. unused service; service unused at the same priority is consumed by higher-priority threads]
- Fairness: multiple definitions?
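
The C sketch below is a deliberately simplified, credit-style picture of such a local policy: allocations give each thread a minimum service share, a priority vector governs excess service, and the arbiter is work conserving. It illustrates the idea only, not the fair-queuing style mechanisms of the ISCA '07 / MICRO '06 papers, and every name in it is made up for the example.

/* Simplified, credit-based sketch of a resource-directed local policy:
 * minimum-service allocations plus a priority vector, work conserving. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_THREADS 4

typedef struct {
    double min_share;  /* guaranteed fraction of the resource (isolation)   */
    int    priority;   /* relative priority for excess service (priority)   */
    long   served;     /* service slots granted so far                      */
} thread_state;

/* Pick the thread to serve in slot 'slot' among those with pending requests. */
static int pick_thread(thread_state th[NUM_THREADS],
                       const bool ready[NUM_THREADS], long slot) {
    int winner = -1;

    /* 1. Honor allocations: a ready thread running behind its minimum share
     *    is served first (smallest served/min_share ratio wins).            */
    for (int t = 0; t < NUM_THREADS; t++) {
        if (!ready[t] || th[t].min_share <= 0.0) continue;
        if (th[t].served < th[t].min_share * (double)(slot + 1)) {
            if (winner < 0 || th[t].served * th[winner].min_share <
                              th[winner].served * th[t].min_share)
                winner = t;
        }
    }

    /* 2. Work conserving: otherwise give the slot to the highest-priority
     *    ready thread, so unallocated or unused service is never wasted.    */
    if (winner < 0) {
        for (int t = 0; t < NUM_THREADS; t++)
            if (ready[t] && (winner < 0 || th[t].priority > th[winner].priority))
                winner = t;
    }

    if (winner >= 0) th[winner].served++;
    return winner;  /* -1 means the resource idles this slot */
}

int main(void) {
    thread_state th[NUM_THREADS] = {
        { 0.5, 3, 0 }, { 0.1, 0, 0 }, { 0.1, 0, 0 }, { 0.1, 0, 0 } };
    bool ready[NUM_THREADS] = { true, true, true, true };
    for (long slot = 0; slot < 10; slot++)
        printf("slot %ld -> thread %d\n", slot, pick_thread(th, ready, slot));
    return 0;
}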

Global Policy: Virtual Private Machines

[Figure: each thread is bound to its own VPM, a slice of the real machine; service left unallocated is distributed by a fairness policy.]

  VPM    Thread             Priority   Resources
  VPM 1  real-time thread   3          Proc. 1, private L1 cache, .5K interconnect BW, .5C L2 capacity, .5L memory controller BW, main memory
  VPM 2  background thread  0          Proc. 2, private L1 cache, .1K interconnect BW, .1C L2 capacity, .1L memory controller BW, main memory
  VPM 3  background thread  0          Proc. 3, private L1 cache, .1K interconnect BW, .1C L2 capacity, .1L memory controller BW, main memory
  VPM 4  background thread  0          Proc. 4, private L1 cache, .1K interconnect BW, .1C L2 capacity, .1L memory controller BW, main memory
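
The assignment in the figure can be written down as a small configuration, as in the C sketch below. The struct and field names are assumptions for illustration; the fractions refer to the real machine's interconnect bandwidth K, L2 capacity C, and memory controller bandwidth L, and the check at the end reflects the constraint that the VPMs together must not claim more than the real machine provides (what is left over is the unallocated service the fairness policy hands out).

/* Sketch of the example VPM assignment above; names are illustrative only. */
#include <stdio.h>

typedef struct {
    const char *thread;       /* thread bound to this VPM             */
    double interconnect_bw;   /* fraction of K                        */
    double l2_capacity;       /* fraction of C                        */
    double mem_ctl_bw;        /* fraction of L                        */
    int    priority;          /* relative priority for excess service */
} vpm;

int main(void) {
    vpm vpms[] = {
        { "real-time thread",  0.5, 0.5, 0.5, 3 },
        { "background thread", 0.1, 0.1, 0.1, 0 },
        { "background thread", 0.1, 0.1, 0.1, 0 },
        { "background thread", 0.1, 0.1, 0.1, 0 },
    };
    int n = (int)(sizeof vpms / sizeof vpms[0]);
    double k = 0.0, c = 0.0, l = 0.0;

    for (int i = 0; i < n; i++) {
        k += vpms[i].interconnect_bw;
        c += vpms[i].l2_capacity;
        l += vpms[i].mem_ctl_bw;
    }

    /* A VPM assignment is feasible only if no resource is oversubscribed.
     * Whatever is left unallocated is distributed by the fairness policy. */
    if (k > 1.0 || c > 1.0 || l > 1.0) {
        printf("infeasible VPM assignment\n");
        return 1;
    }
    printf("unallocated: %.0f%% of K, %.0f%% of C, %.0f%% of L\n",
           100.0 * (1.0 - k), 100.0 * (1.0 - c), 100.0 * (1.0 - l));
    return 0;
}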

Global Policy: Performance-Directed

- Global optimization problem
  - Use local policies to control resources
  - Optimize one bottleneck and the bottleneck appears somewhere else
- Performance-directed policies need to fit into the VPM policy
  - E.g., optimize aggregate performance within a priority level (see the sketch below)
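
As a hedged sketch of what "optimize aggregate performance within a priority level" might involve, the C loop below hands small increments of spare service to whichever shared resource currently looks most contended, then re-evaluates. The algorithm and the utilization numbers are invented for illustration (this is not the authors' policy); it is only meant to show why relieving one bottleneck makes the bottleneck appear somewhere else.

/* Hypothetical hill-climbing sketch of a performance-directed global policy. */
#include <stdio.h>

#define NUM_RESOURCES 3
static const char *names[NUM_RESOURCES] = { "interconnect", "L2 cache", "memory ctl." };

/* Stand-in for performance counters: demand on a resource relative to the
 * service currently allocated to this priority level. */
static double utilization(double demand, double allocated) {
    return demand / allocated;
}

int main(void) {
    double demand[NUM_RESOURCES]    = { 0.6, 0.9, 0.8 };  /* measured (made up) */
    double allocated[NUM_RESOURCES] = { 0.8, 0.8, 0.8 };  /* current shares     */
    double spare = 0.3;                                   /* unallocated service */
    const double step = 0.05;

    while (spare >= step) {
        /* Find the bottleneck: the resource with the highest utilization. */
        int b = 0;
        for (int r = 1; r < NUM_RESOURCES; r++)
            if (utilization(demand[r], allocated[r]) >
                utilization(demand[b], allocated[b]))
                b = r;

        /* Grant it a little more service and re-evaluate; after a few steps
         * the bottleneck typically moves to another resource. */
        allocated[b] += step;
        spare        -= step;
        printf("gave %.2f to %-12s (utilization now %.2f)\n",
               step, names[b], utilization(demand[b], allocated[b]));
    }
    return 0;
}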

Status

- Completed
  - Secondary cache [ISCA '07] and SDRAM memory system [MICRO '06] mechanisms
    - Bandwidth mechanisms
    - Cache capacity mechanisms
- Ongoing and future work
  - Multithreaded processors
  - Work-conserving cache capacity policy
  - Priority policy
  - Aggregate performance policy

Conclusion

- Objectives: isolation, priority, fairness, performance
- Implementation: separation of policies and mechanisms
- Abstraction: Virtual Private Machines
  - Composable global policies that coexist on a per-application basis

Questions and Comments

Do Virtual Private Machines meet all of the requirements of software-controlled microarchitecture resource management?




				