    Modeling Server Consolidation ‘What-if’ to Solaris Zones
                  Debbie Sheetz – 30 June 2010


Sun virtualization options were first offered with the introduction of containers/zones in Solaris
10 and were later enhanced by the introduction of CPU capping in a subsequent release of Solaris
10 (http://dlc.sun.com/pdf/817-0547/817-0547.pdf and
http://docs.sun.com/app/docs/doc/817-0547/gghqo?a=view). CPU shares are used by Solaris to
manage competition for CPU resources between active zones; when CPU shares with caps are
specified, the share is interpreted as a hard limit on CPU consumption.
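
As a rough illustration of the distinction (plain Python arithmetic, not Predict or Solaris syntax;
the zone names and numbers below are hypothetical), share values only set relative entitlements
under contention, while a cap is a hard ceiling even when the rest of the machine is idle:

    # A minimal sketch of CPU shares vs. shares with caps (hypothetical values).
    def entitlements(shares):
        """Relative CPU entitlement of each zone when all zones are busy."""
        total = sum(shares.values())
        return {zone: s / total for zone, s in shares.items()}

    def capped_usage(demand, cap=None):
        """With a cap, usage never exceeds the cap even if other zones are idle;
        without one, only total machine capacity (1.0) limits it."""
        return min(demand, 1.0 if cap is None else cap)

    print(entitlements({"zoneA": 4, "zoneB": 3, "zoneC": 1}))
    # {'zoneA': 0.5, 'zoneB': 0.375, 'zoneC': 0.125}
    print(capped_usage(0.6))         # 0.6 -> an uncapped zone can borrow idle CPU
    print(capped_usage(0.6, 0.5))    # 0.5 -> a capped zone is held to its limit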

Predict has specific features available to represent zones, CPU shares, and shares with caps.

Modeling (‘what-if’) to Zones/Containers

If you want to model the change in performance between multiple standalone systems and the
same work configured as zones, there are three modeling activities required.

    A. Build baseline model(s) containing virtualization candidates
    B. Combine the virtualization candidates onto a single system within one model1
    C. Configure the zones as desired

Note that only C. is unique to Solaris zones, while the other activities are general to any server
consolidation using Predict.

Note also that CPU sizing techniques (using either CME Sizer/BPA Virtualization Planning or
Visualizer) can be used as a preliminary activity prior to modeling – this streamlines the
modeling process considerably and ensures proper selection of the time interval(s) for
modeling.2

The steps for performing the entire modeling study are as follows:

1) Build and baseline a model containing all the systems which you will be combining. If this
   isn’t physically possible, you can build multiple baseline models, exporting workloads as
   needed (see step 2).

    When defining workloads for this type of modeling, the simplest workload structure is the
    easiest to work with; i.e., a single workload such as “zzz” should be used if feasible. If not
    all of the work is being moved to a virtual system, two workloads are enough, e.g.
    “work_to_be_moved” and “zzz”. A more complex workload structure can also be used, but it
    requires a little more work later in the modeling process.

1
  Another possible capacity planning project would be to include one (or more) system(s) which already have zones
configured as the target(s) for consolidation.
2
  Customer Support can recommend additional documentation showing these techniques on request.

2) Select the workloads to be moved to a virtualized system.

   a) Workloads, right-click a workload, Workload Export (Windows console)
   b) Workloads, select a workload, File -> Save to Library (UNIX console)

This step should be repeated for each baseline model built in Step 1.

3) Upgrade an existing Computer to represent the new Solaris zone system in an existing
   baseline model1.

   a) Computers, right-click computer, Properties -> Configuration, Browse (Windows
      console)
   b) Nodes, select node, Edit -> Upgrade CPU Type (UNIX console)




                                           An UltraSPARC T2 Plus configured with 32
                                           cores has been specified


Note that processors using CMT (i.e. UltraSPARC T1 and T2) can be optionally represented “by
thread” instead of “by core” – this option is detailed in the Understanding How to Represent
Sun T1 and T2 Processors paper.

Optionally rename the computer to represent its new purpose, e.g. New_Zones_T2.

4) Define the desired processor sets/zones, using the Allocation to represent the number of
   processors, e.g. 25% of 32 processors is 8 processors3.

    a) Computers, right-click a computer, select Properties -> Advanced, check Use shares,
       then select Shares (Windows console)
    b) Nodes, Edit -> Edit a computer -> Advanced, check Use shares, then select Shares
       (UNIX console)




3
  Allocation is always used to determine a relative percentage. In the example, the Allocations are given to total 32,
the same as the physical processor configuration, but you could just as well use 4, 3, 1, 1 to achieve identical results.
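
As the footnote notes, only the ratio of the Allocation values matters. A minimal sketch of that
arithmetic (hypothetical pset names and values, not taken from the screenshot above):

    # Allocation values are relative: 16/12/4 and 4/3/1 produce the same
    # processor split on a 32-processor configuration.
    def processors(allocations, total_cpus=32):
        total = sum(allocations.values())
        return {pset: total_cpus * a / total for pset, a in allocations.items()}

    print(processors({"pset1": 16, "pset2": 12, "pset3": 4}))  # {'pset1': 16.0, 'pset2': 12.0, 'pset3': 4.0}
    print(processors({"pset1": 4, "pset2": 3, "pset3": 1}))    # identical result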


5) Move the workloads onto the new computer4.

If the baseline model already contains the workloads:

    a) Workloads, right click, Properties -> Users. Replace the baseline system name with the
       new system name (Windows console)
    b) Workloads, select a workload, Edit -> Users. Replace the baseline system name with
       the new system name (UNIX console)

If the baseline model doesn’t already contain the workload:

    c) Workloads, right click, Import Workloads (Windows console)
    d) Workloads, Edit -> Add from Library (UNIX console)

6) Specify the pset/zone for each workload5.

    a) Workloads, right-click a workload, Properties -> Users, Shares -> Parent share group
       and Allocation (Windows console)
    b) Workloads, select a workload, Edit -> CPU Share, specify CPU Share Value and
       Parent share group (UNIX console)




4
  Note that Windows console 7.4 has a new Move Workloads feature which simplifies step 5. It is highly
recommended to obtain this console if multiple consolidation studies are expected. The feature supports VMware
and AIX/HP consolidations, as well as “regular” non-virtualized consolidations.
5
  Allocation is always used to determine a relative percentage. In the example, the Allocation is given as 1, and
since there are no other workloads assigned to the same pset, the workload gets 100% of the pset. If you added a
second workload and specified a value of 1 for it, the two workloads would each get 50% of the pset. Use the
computer/node view to see all allocations together:




7) Optionally specify CPU caps for the share allocations6.

8) Evaluate the model.



6
 GUI support for specifying capped CPU allocations is currently unavailable. This can be implemented via a Predict
command file, for example capped_zones.cmd:

MODIFY NODE New_Zones_T2
       USE-SHARES CAPS

Additionally, a patched version of Predict 7.4.10 is required (30 March 2010 or later). Any version of Predict 7.5
can be used.

When a capped allocation is exceeded, traditional CPU cutback/saturation messages are issued and all modeling
reports show the effect of inadequate CPU resources:

W-CUTBACK: CPU on node New_Zones_T2 with share CAP on is saturated and the model is
      evaluated with a cutback on throughput.
W-WKLSAT:     Throughput for transaction sample@Sample_Solaris running on node
      New_Zones_T2 with share CAP on is less than specified arrival rate.
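
Conceptually (this is only an illustration of the reported behavior, not the tool’s internal
algorithm; the numbers are hypothetical), when demand exceeds a capped allocation the evaluated
throughput is scaled back to what the cap can sustain:

    # Sketch of the cutback idea: if a workload needs more CPU than its capped
    # allocation provides, achievable throughput drops roughly by cap/demand.
    def cutback_throughput(arrival_rate, cpu_demand, cpu_cap):
        """arrival_rate in transactions/sec; cpu_demand and cpu_cap in CPU-equivalents."""
        if cpu_demand <= cpu_cap:
            return arrival_rate                       # cap not saturated
        return arrival_rate * cpu_cap / cpu_demand    # W-CUTBACK situation

    print(cutback_throughput(100.0, 10.0, 8.0))       # 80.0 instead of the specified 100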




Overall results are shown on the Computer Summary report (UNIX console, html format
shown above). Specific results by zone are shown on the Computer CPU Share Statistics by
Workload report (UNIX console, html format shown below):




Of particular interest is Degradation, which indicates how much additional CPU waiting is
occurring due to contention amongst the zones. A degradation value of 1 means there is no
additional waiting, 2 means that it takes twice as long to obtain the required CPU as it would
without competition at the CPU, etc.
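
For example (hypothetical numbers), if a request needs 10 ms of CPU service but spends an
additional 10 ms waiting for CPU because of competing zones, its Degradation is 2:

    # Sketch of the Degradation metric: time to obtain the needed CPU with
    # zone contention divided by the time without contention.
    def degradation(cpu_service_ms, cpu_wait_ms):
        return (cpu_service_ms + cpu_wait_ms) / cpu_service_ms

    print(degradation(10.0, 0.0))    # 1.0 -> no additional waiting
    print(degradation(10.0, 10.0))   # 2.0 -> twice as long to get the CPU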

And finally, you can view the change in workload response time compared with the baseline
using the Workload Relative Response Time report or GUI display of relative response time
(UNIX console shown below):




As detailed in the Understanding How to Represent Sun T1 and T2 Processors paper, the per-core
representation is optimistic and the per-thread representation is pessimistic in terms of relative
response time changes. When comparing the two alternative representations for this modeling
scenario:

          (1) Predicted CPU Utilizations are identical: 16.64 / 32.0 = 52% vs. 13.31 / 25.6 =
              52%
          (2) Relative response times are reduced by 47% when evaluating by core, and
              increased by 258% when evaluating by thread.
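
A quick check of the utilization arithmetic quoted above:

    # Utilization = busy CPU-equivalents / configured CPU-equivalents
    print(round(16.64 / 32.0, 2))   # 0.52 -> 52% for the by-core model
    print(round(13.31 / 25.6, 2))   # 0.52 -> 52% for the by-thread model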