The Best-Kept Insider Secret - Hitachi Data Systems


THE BEST-KEPT INSIDER SECRET:
VMWARE VSPHERE 5 CLOUD DEPLOYMENT
MICHAEL HEFFERNAN,
VMWARE SOLUTIONS PRODUCT MANAGER,
HITACHI DATA SYSTEMS

PATRICK ALLAIRE,
SENIOR PRODUCT MARKETING MANAGER,
HITACHI DATA SYSTEMS
WEBTECH EDUCATIONAL SERIES

STORAGE IN THE CLOUD SERIES

 The Best-kept Insider Secret: VMware vSphere 5 Cloud Deployment
    September 21, 9am PT, 12pm ET
    ‒ Learn why the industry’s most demanding customers are deploying clouds with the
      storage virtualization leader. Hear Michael Heffernan, Hitachi VMware Solutions
      Product Manager, and Patrick Allaire, Senior Product Marketing Manager, give the
      inside information you need to understand why VMware vSphere 5 cloud deployment
      on Hitachi infrastructure is the way to go.
   Storage Virtualization: Delivering Storage as a Utility for the Cloud
     September 28, 9am PT, 12pm ET
    ‒ Attend this informative session to learn how the Hitachi Command Suite can help you
       meet the demanding storage requirements of private cloud computing.

MAINFRAME SERIES
 Advances in Mainframe Storage, October 19, 9am PT, 12pm ET

 Replication in a Mainframe Storage Environment, October 26, 9am PT, 12pm ET

 Hitachi VSP Performance in a Mainframe Environment, November 2, 9am PT, 12pm ET
AGENDA


 Top VMworld 2011 myths

 Storage Design and Architecture for vSphere
  ‒ VMware and Hitachi integration
  ‒ Hitachi AMS 2000 and Virtual Storage Platform formula
  ‒   VMware storage APIs
      ‒ vStorage APIs for Storage Awareness (VASA)
      ‒ vStorage APIs for Array Integration (VAAI)
MYTH #1
It makes more sense to deploy vSphere 5 over
Network File System
WHY NETWORK ATTACHED STORAGE?
THE ADVANTAGES


 Flexibility and cost savings
  ‒ vSphere supports iSCSI, FC, FCoE and NFS (IP)
  ‒ All functionality of vSphere can be exploited over NFS with a few
    exceptions
       ‒ Cannot cluster virtual machines (VMs) using Microsoft Cluster Server
       ‒ Cannot boot the physical host directly from NFS (requires an internal disk)
       ‒ No true multipath I/O engine (although network can provide fault tolerance)

  ‒ NFS/IP Ethernet perceived as less costly, less complex and more
    flexible in deployment
  ‒ NFS provides a level of virtualization that abstracts away some
    physical-level constraints (LUN queue management, VMFS SCSI
    reserves) and simplifies provisioning
  ‒ NFS provides the ability to dynamically re-size the virtual machine
    datastores
WHY NOT NAS?
THE DISADVANTAGES



 Reliability
  ‒ Failover is rapid and clean over Fibre Channel (FC); NFS
    implementations have higher timeouts
  ‒ IP/Ethernet networks, while redundant, are not generally as robust

 Performance
  ‒ While you can make NFS perform equal to FC for a given
    workload with the right resources, it will consume more host CPU:
    15% and upwards (substantial in sequential I/O)
  ‒ FC offers native multipathing and load balancing (NFS cannot load
    balance I/O within a datastore)
  ‒ VAAI-enabled subsystems have addressed SCSI reserve issues
WHY HITACHI NETWORK ATTACHED STORAGE
(NAS) FOR VSPHERE?

 Hitachi NAS offers a highly scalable platform
  ‒ Up to 8 high-performance nodes in a single cluster (almost 4X
    more scalable than the leading vendor)
  ‒ File systems can be extended to 256TB (up to 16X)

 Hitachi NAS provides a tiered file system to maximize vSphere
  performance.
  ‒ Accelerates metadata look-ups when processing snapshots
    across many large VMDK files

 Hitachi NAS provides JetClone for space efficient copies of VMs

 Hitachi NAS is VMware certified

 JetMirror provides object-based replication over WAN
HITACHI AMS AND VSP ARE STRONGER

PURE NAS PLATFORMS HAVE WEAK BLOCK IMPLEMENTATION

 Despite having many software features and deduplication, pure NAS
  platforms do not have robust Fibre Channel capabilities:
  ‒ Dual-node failover time (15 to 45sec outage) vs. VSP fault
      tolerant architecture with a 100% data availability warranty
  ‒ Does not have active-active symmetric controllers load balancing
      like Hitachi AMS 2000
  ‒ LUNs are files on a file system rather than native block devices,
      so they are liable to fragment over time
  ‒ Operating system (OS) and parity checksum overheads result in
      lower usable capacity
  ‒ No integrated encryption offering
  ‒ Limited virtualization capabilities
  ‒ VMware View Composer 5 supports intrinsic inline dedup, while
      current primary storage dedup implementations are post-process
      and affect host I/O response time
DEPLOYMENT RECOMMENDATION



 Evaluate both options
 It’s not about block vs. file
  ‒ Both are valid options that have strengths and weaknesses

 Think of your storage as a service
  ‒ NFS is a valid option when layered on top of our enterprise level block
    platform
  ‒ NAS scalability is a must in large or high-growth environments
  ‒ Storage is the bottleneck – use automated tiering to balance performance
    and cost
MYTH #2
VMware Storage Distributed Resource Scheduler (DRS)
and profile-driven storage support
Tier 1 application requirements
STORAGE DRS AND PROFILE-DRIVEN STORAGE


 Overview
  ‒ Tier storage based on performance characteristics (i.e., datastore
    cluster)
  ‒ Simplify initial storage placement
  ‒ Load balance based on I/O

 Benefits
  ‒ Eliminate VM downtime for storage maintenance
  ‒ Reduce time for storage planning and configuration
  ‒ Reduce errors in the selection and management of VM storage
  ‒ Increase storage utilization by optimizing placement

[Figure: datastores grouped into Tier 1, Tier 2 and Tier 3 by I/O
throughput]
WHAT DOES STORAGE DRS PROVIDE?


 Storage DRS provides the following:
  1.   Initial placement of VMs and VMDKs based on available space and I/O
       capacity
  2.   Load balancing between datastores in a datastore cluster via Storage
       vMotion based on storage space utilization
  3.   Load balancing via Storage vMotion based on latency

 Storage DRS also includes affinity and anti-affinity rules for
  VMs and VMDKs
   - VMDK affinity – Keep a VM's VMDKs together on the same
     datastore (this is the default affinity rule)
   - VMDK anti-affinity – Keep a VM's VMDKs separate on different
     datastores
   - VM anti-affinity – Keep VMs separate on different datastores

 Affinity rules cannot be violated during normal operations
STORAGE DRS (SDRS) OPERATIONS –
INITIAL PLACEMENT

Initial Placement – VM or VMDK create, clone, relocate

 When creating a VM select a datastore cluster rather than an
  individual datastore and let SDRS choose the appropriate datastore

 DRS will select a datastore based on space utilization and I/O load
  trend

 By default, all the VMDKs of a VM will be placed on the same
  datastore within a datastore cluster (VMDK affinity rule), but you can
  choose to have VMDKs assigned to different datastore clusters

[Figure: a 2TB datastore cluster composed of four 500GB datastores
with 300GB, 260GB, 265GB and 275GB available]
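The initial-placement behavior described above can be sketched as a toy greedy picker. This is purely illustrative, not VMware's actual algorithm: it keeps all of a VM's VMDKs on one datastore (the default VMDK affinity rule) and chooses the datastore with the most free space. The datastore names and sizes are the sample values from this slide.

```python
# Hypothetical sketch of Storage DRS-style initial placement (not
# VMware's real algorithm): keep all of a VM's VMDKs together and pick
# the datastore in the cluster with the most free space.

def place_vm(vmdk_sizes_gb, datastores):
    """datastores: dict of name -> free space in GB. Debits the winner."""
    total = sum(vmdk_sizes_gb)  # affinity: all VMDKs land on one datastore
    # Candidates that can hold the whole VM
    candidates = {name: free for name, free in datastores.items() if free >= total}
    if not candidates:
        raise ValueError("no datastore in the cluster can hold this VM")
    # Greedy choice: most free space, keeping utilization balanced
    best = max(candidates, key=candidates.get)
    datastores[best] -= total
    return best

# Four 500GB datastores with the free space shown on the slide
cluster = {"ds1": 300, "ds2": 260, "ds3": 265, "ds4": 275}
print(place_vm([40, 20], cluster))  # → ds1 (most free space)
```

A real SDRS decision also weighs the I/O load trend mentioned above; this sketch covers only the space dimension.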
STORAGE DRS OPERATIONS – LOAD BALANCING


Load balancing – DRS triggers on space usage and latency
 threshold
 Algorithm makes migration recommendations when I/O response time
  or space utilization thresholds have been exceeded

 Space utilization statistics are constantly gathered by vCenter, default
  threshold 80%

 Load balancing is based on I/O workload and space which ensures
  that no datastore exceeds the configured thresholds

 Storage DRS will do a cost/benefit analysis

 For I/O load balancing Storage DRS uses storage I/O control
  functionality
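The two triggers described above can be sketched as a simple check. The thresholds are the defaults this slide names (80% space utilization, 15ms latency); the function itself is an illustrative assumption, not VMware's implementation.

```python
# Illustrative sketch of the two Storage DRS load-balancing triggers:
# a space-utilization threshold (default 80%) and an I/O latency
# threshold (default 15ms). Not VMware's actual code.

SPACE_THRESHOLD = 0.80        # default space utilization threshold
LATENCY_THRESHOLD_MS = 15.0   # default I/O latency threshold

def needs_rebalance(used_gb, capacity_gb, avg_latency_ms):
    reasons = []
    if used_gb / capacity_gb > SPACE_THRESHOLD:
        reasons.append("space")
    if avg_latency_ms > LATENCY_THRESHOLD_MS:
        reasons.append("latency")
    return reasons  # empty list -> no migration recommendation

print(needs_rebalance(430, 500, 9.0))   # → ['space']
print(needs_rebalance(200, 500, 22.5))  # → ['latency']
```

In the real product a trigger only produces a recommendation after the cost/benefit analysis mentioned above; this sketch stops at the trigger.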
STORAGE DRS WORKFLOW


 I/O load trend is evaluated every 8 hours based on the past day's
  history

 Default threshold: 15ms

[Figure: the evaluation cycle, showing the points at which DRS triggers]
DRS WITH A TIER 1 APPLICATION


 Customer processing sample for SAP with an Oracle Database
  ‒ Average usage of 21%, peak size 3 to 5x average

[Figure: utilization (0–100%) by time of day across 7 days, with
DRS OFF and SAMPLING windows marked]
MYTH #3
Storage features like automated sub-LUN tiering
no longer make sense with
vSphere 5 Storage DRS
STORAGE DRS VS. AUTOMATED SUB-LUN TIERING

1. DRS
    Eliminate VM downtime for storage maintenance
    Reduce time for storage planning and configuration
    Reduce errors in the selection and management of VM storage
    Increase storage utilization by optimizing placement

   [Figure: DRS trigger points across a datastore cluster]

2. SUB-LUN TIERING
    Virtualize devices into a pool of storage and allocate by pages
    Eliminate waste by allocating only the pages that are used
    Optimize storage performance by spreading the I/O across more arms
    Simplify management tasks
    Further reduce OPEX
    Further improve return on assets

   [Figure: a cycle of monitoring physical I/O to pages, applying page
   I/O weights and tier ranges, and relocating pages within the
   datastore cluster]
It’s Time to Rethink Storage Design
and Architecture for vSphere 5
HITACHI AND VMWARE INTEGRATION
VMWARE STORAGE APIS

[Figure: VMware ESXi 5.0 vStorage APIs for Array Integration offload
Storage vMotion, provisioning VMs from template, thin provisioning
disk performance, VMFS shared storage pool scalability, and dead
space reclamation]

It is all about the ecosystem
 Standardized and open for all vendors
 Operating system is API-driven, which eliminates custom plug-ins
 APIs leverage each other behind the scenes
VSTORAGE API FOR ARRAY INTEGRATION


Write Same Zero (Block Zeroing)
 Eliminates redundant and repetitive write commands, which means
  less I/O used for common tasks
 Benefit: speeds provisioning of new VMs; key to supporting large-scale
  VMware or VDI deployments

Full Copy (XCOPY)
 Leverages the storage array's ability to mass copy, snapshot and
  move blocks via SCSI commands
 Benefit: speeds up cloning and Storage vMotion; allows for faster
  copies of VMs

Hardware-Assisted Locking
 Stop locking LUNs; start locking blocks only. Offloads SCSI
  commands to the storage array
 Benefit: removes SCSI reservation conflicts; enables faster locking;
  improves VM density performance

Thin Provisioning (vSphere 5.0)
 Thin Provisioning Stun – error code to report "out of space" for a
  thin volume
 Unmap – zero page reclaim for virtual disks, in conjunction with the
  "write same" command on a thin volume

*Note: VAAI is currently supported on the Hitachi Adaptable Modular
Storage 2000 family, VSP, USP V and USP VM. The Thin Provisioning
API will be supported with ESXi 5.0.
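The hardware-assisted locking primitive above amounts to an atomic test-and-set performed by the array: a host locks one block rather than reserving the whole LUN. A toy model of that idea (purely conceptual; `ToyArray` and its method are invented for illustration and are not SCSI commands):

```python
# Conceptual sketch of hardware-assisted locking (atomic test-and-set):
# the array compare-and-swaps a per-block lock value, so other blocks
# and hosts are never stalled by a LUN-wide SCSI reservation.
# ToyArray is a made-up illustration, not a real SCSI interface.

class ToyArray:
    def __init__(self, blocks):
        self.locks = [None] * blocks  # None = unlocked

    def atomic_test_and_set(self, block, expected, new_owner):
        # The array performs the compare-and-swap atomically on one block
        if self.locks[block] == expected:
            self.locks[block] = new_owner
            return True
        return False

array = ToyArray(8)
print(array.atomic_test_and_set(3, None, "host-a"))  # → True  (lock acquired)
print(array.atomic_test_and_set(3, None, "host-b"))  # → False (block held)
```

Because host-b's failed attempt only concerns block 3, it can still lock any other block immediately, which is why the slide credits ATS with higher VM density.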
       FULL COPY – HITACHI VIRTUAL STORAGE
       PLATFORM TEST RESULT

[Figure: two charts of ESX host IOPS (0–6,000), sampled at 5-second
intervals, during the Full Copy test]
BLOCK ZEROING – VSP TEST RESULT


 Block Zeroing
  ‒ Write-same functionality – the storage array writes the content of
    a logical block to a range of logical blocks, including external
    virtualized storage

 Benefits
  ‒ Eliminates redundant and repetitive write commands
  ‒ 96 to 98% improvement

PROVISIONING A 160GB EAGERZEROEDTHICK VMDK IN HDP VOLUMES
(LUN – internal or virtualized storage)

VSP Storage          VAAI Status   HDP Pool Usage   Time
Internal             OFF           ~160GB           00:06:05
Internal             ON            0.6GB            00:00:12
Virtualized Storage  OFF           ~160GB           00:15:15
Virtualized Storage  ON            0.6GB            00:00:23
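The "96 to 98% improvement" figure can be checked directly from the provisioning times in the table:

```python
# Verify the improvement claim from the Block Zeroing test table:
# compare provisioning times with VAAI off vs. on.

def improvement(off_seconds, on_seconds):
    return (off_seconds - on_seconds) / off_seconds * 100

internal = improvement(6 * 60 + 5, 12)       # 00:06:05 vs 00:00:12
virtualized = improvement(15 * 60 + 15, 23)  # 00:15:15 vs 00:00:23
print(f"internal: {internal:.1f}%")        # → internal: 96.7%
print(f"virtualized: {virtualized:.1f}%")  # → virtualized: 97.5%
```

Both results fall inside the 96–98% range the slide claims.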
   VSPHERE 5 INTRODUCES VMFS 5
   WITH MASSIVE IMPROVEMENTS

FEATURE                                              VMFS 3                  VMFS 5


2TB+ VMFS volumes (up to 64TB)                         Yes                        Yes
                                                 (using extents)
Support for 2TB+ single VMFS                            No                        Yes

Unified block size (1MB)                                No                        Yes

Atomic test and set enhancements                        No                        Yes
(part of VAAI, locking mechanism)

Sub-blocks for space efficiency                  64KB (max ~3k)         8KB (max ~30k)

Small file support                                      No                    1KB



              VMFS 5 will further leverage Hitachi thin provisioning technology
REMOVE LAYERS OF COMPLEXITY


    A Single 1PB Liquid Pool of Storage Capacity
           for All Your Virtualized Storage




               UP TO 60TB
              SINGLE VMFS
                VOLUME


       Let the storage hardware do all the work
      CLOSER INTEGRATION OF APPLICATIONS AND STORAGE
      IS NEEDED FOR DATA CENTER TRANSFORMATION

       The need for integration
               - Applications have a software view and have no visibility into infrastructure
               - Storage has an infrastructure view and no visibility into applications

[Figure: the software view shows VMs moving via vMotion across many
ESX hosts and 2TB VMFS volumes; the storage view shows the same VMs
on a single 64TB ESXi 5.0 VMFS datastore backed by an HDP volume
(virtual LUN) carved from an HDP and HDT pool of LDEVs]
HITACHI DYNAMIC PROVISIONING (HDP)
INTERNAL AND EXTERNAL VIRTUALIZED STORAGE


                                              Thin provisioning:
                        A powerful form of storage virtualization
                An example of thin provisioning and VAAI:
                 A 60TB VMFS volume is created in a 1PB HDP* pool
                 Many VMDKs are created consuming only 5.3TB
                 The other 54.7TB is available for other applications
                Additions for space efficiency plus performance:
                 Single virtual disk of 31GB consumes only 1GB capacity
                 vSphere 5.0 reclaims dead space automatically when a
                  virtual disk is deleted or processed by vMotion



[Figure: a 60TB VMFS volume consuming only 5.3TB, leaving 54.7TB
available; a 31GB virtual disk consuming only 1GB]
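The capacity arithmetic in this example works out as follows, using only the numbers stated on the slide:

```python
# Thin-provisioning numbers from the slide: a 60TB VMFS volume whose
# VMDKs consume only 5.3TB, and a 31GB virtual disk consuming 1GB.

vmfs_tb, consumed_tb = 60.0, 5.3
print(f"free for other applications: {vmfs_tb - consumed_tb:.1f}TB")  # → 54.7TB

disk_gb, used_gb = 31, 1
print(f"thin virtual disk savings: {(1 - used_gb / disk_gb) * 100:.0f}%")  # ≈ 97%
```

The pool only ever backs pages that have actually been written, which is also what lets vSphere 5.0's dead-space reclamation return capacity when a virtual disk is deleted or moved.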
THE HITACHI AMS 2000 FORMULA – VSPHERE 5.0

[Figure: Hitachi AMS 2000 family stack – a VMware ESXi cluster using
Native Multipathing (NMP) Round Robin over active-active symmetric
controllers; vStorage API for Array Integration (T10 – 5 x primitives)
and Hitachi Dynamic Provisioning present up to 60TB VMFS 5 datastores;
VMware vCenter Server delivers profile-driven storage and Storage DRS
through the vStorage API for Storage Awareness (VASA)]
HITACHI VIRTUAL STORAGE PLATFORM FORMULA
– VSPHERE 5.0


 256 VMFS volumes per ESXi host cluster

 256 x 60TB = 15.36PB of VMFS datastores

 Externalize up to 255PB

[Figure: a VMware ESXi cluster using Native Multipathing (NMP) Round
Robin; the vStorage API for Array Integration plus Hitachi Dynamic
Provisioning and the vStorage API for Storage Awareness (VASA) present
60TB VMFS-5 volumes that virtualize external arrays, including EMC DMX,
Thunder 9585V, Lightning 9980V, AMS 2000, CLARiiON and IBM DS]
THE BOTTOM LINE

HITACHI DATA SYSTEMS AND VMWARE TOGETHER
 Lower your costs
 Accelerate your time to value
 Transform your data center
QUESTION
AND ANSWER
ROUNDTABLE
UPCOMING WEBTECH SESSIONS

September Cloud Series

 Storage Virtualization: Delivering Storage as a Utility for the Cloud,
  September 28, 9am PT, 12pm ET

Mainframe Series

 Advances in Mainframe Storage, October 19, 9am PT, 12pm ET

 Replication in a Mainframe Storage Environment, October 26, 9am PT, 12pm
  ET

 Hitachi VSP Performance in a Mainframe Environment, November 2, 9am
  PT, 12pm ET

Please check www.hds.com/webtech next week for more information and:

 Links to the recording, the presentation and Q&A (available next week)

 The schedule and registration for upcoming WebTech sessions
THANK YOU
