
          STATE OF TEXAS




Texas Department of Information Resources


         Data Center Services
       REQUEST FOR OFFER
               APPENDIX B


MAINFRAME ENVIRONMENT SERVICES


            MARCH 31, 2006








Service Provider Guidelines
     This Appendix to the RFO contains specific information supplied by DIR for the Service
     Provider’s use when responding to the RFO.

Service Provider Instructions
        1.    The Service Provider will not modify or change anything contained within this
              Appendix.
        2.    The Service Provider’s response to the RFO should reflect the Service Provider’s
              complete understanding of the information contained in this Appendix. If the Service
              Provider has any questions regarding the content of this Appendix, the Service Provider
              should use the question and answer process described in the document titled
              “Instructions – Part 2 (Scope, Contact Information, and General Requirements)” of this
              RFO.








                                               TABLE OF CONTENTS
1.0   ENVIRONMENT OVERVIEW
  1.1   General
  1.2   Texas State Data Center (TxSDC - Mainframe)
    1.2.1   TxSDC Disaster Recovery
  1.3   Agency Data Centers (Mainframe)
    1.3.1   Agency Data Center Disaster Recovery (Mainframes)
2.0   MAINFRAME STORAGE
    2.1.1   TxSDC Storage Subsystems
    2.1.2   Agency Data Center Mainframe Storage Subsystems
3.0   MAINFRAME UTILIZATION AND SERVICE LEVEL STATISTICS
  3.1   TxSDC Utilization Summary
  3.2   Agency Mainframe Utilization Summary
  3.3   TxSDC Service Levels
4.0   MAINFRAME HIGH VOLUME PRINT ENVIRONMENTS








1.0    ENVIRONMENT OVERVIEW

1.1    General
The State of Texas currently operates 16 mainframes supporting 10 agencies plus ASU within 6
data centers, including the Texas State Data Center (TxSDC).

1.2    Texas State Data Center (Mainframe)
The Texas State Data Center (TxSDC) was established by the 75th Legislature to provide
computer operations and disaster recovery services to tax supported organizations and to reduce
information technology costs to the State of Texas. The state-owned facility is housed at Angelo
State University, in San Angelo, Texas, and is operated by Northrop Grumman Technical
Services, Inc. (NGTSI) under DIR's oversight. This environment includes systems that are
entirely managed and supported by NGTSI, and systems that receive operational support from
NGTSI. DASD, Virtual Tape, and Tape Silo environments are shared among the systems. (See
Exhibit 10 for a detailed list of equipment, Exhibit 11 for detailed software assets, Attachment
12A for Third Party Software Licenses, and Appendix I for Agency-specific information.) A
summary of this environment follows:


      Mainframe                       MIPS Rating   Operating System   Agency Supported       Level of Support
      IBM 2066-0B1 zSeries 800        115           OS/390             HHSC-E (MHMR)          Managed
      IBM eServer zSeries 990         1,215         z/OS               OAG-CS                 Operated
      IBM 9672-RA6                    88            OS/390             TEA                    Managed
      IBM 2064-102                    449           z/OS               TDCJ                   Operated
      IBM 2064-103 zSeries 900        645           z/OS               TDCJ                   Operated
      IBM 9672-R16                    117           OS/390             TDI (TWCC)             Managed
      IBM eServer zSeries 990         855           z/OS               TxDOT                  Operated
      IBM 9672-R36 Coupling Frame     N/A           N/A                Supports the complex   Managed
      IBM 9672                        80            VM                 ASU                    Managed



All of the mainframe environments, with the exception of the TxDOT and ASU mainframes,
participate in a minimally configured common Sysplex where the Sysplex timer and syslogs are
shared.







TEA is in the process of transitioning the last of the agency’s legacy systems hosted on the
mainframe to a server environment; this transition is expected to be complete by September 2007.
ASU is similarly transitioning its legacy systems hosted on the mainframe to a server
environment, with completion expected by December 2006.

1.2.1      TxSDC Disaster Recovery
All of the mainframe environments operating at the TxSDC have disaster recovery managed by
NGTSI. The environments currently operate under a DR contract with SunGard and are tested
annually.

1.3     Agency Data Centers (Mainframe)
In addition to TxSDC, there are 7 mainframes located within agency data centers in Austin,
Texas. A summary of those mainframes follows:
      Mainframe                        MIPS Rating   Operating System   Managing Agency
      Unisys IX6802-7 ClearPath        734           OS 2200            HHSC-E
      IBM Enterprise 3000 7060 H30     60            VSE 2.6.2          OAG
      IBM 2003-116                     35            OS/390             RRC
      IBM 2064-102                     450           z/OS               TWC
      IBM 2064-103                     647           z/OS               TWC
      IBM 2064-104                     835           z/OS               TWC
      IBM 2003-215                     30            VSE                TYC
The TWC mainframes operate in an IBM Parallel Sysplex environment. TWC takes advantage of
all sysplex structures, including full DB2 data sharing, XCF, WLM, GRS star, JES2 checkpoint,
RACF, ARM (Automatic Restart Manager), SFM (Sysplex Failure Manager), shared DASD, shared
tape, shared catalogs, a shared timer, and the system logger across all seven LPARs.
The HHSC-E Unisys mainframe environment is managed by Northrop Grumman. HHSC-E has
identified a major project, Unisys Cost Optimization and System Migration, that is migrating
legacy applications from the HHSC Unisys mainframe to the HHSC Service Oriented
Architecture/Utility Computing Environment (SOA/UCE). This project is scheduled for
completion in Fiscal Year 2008.

1.3.1      Agency Data Center Disaster Recovery (Mainframes)
OAG, RRC, and TYC contract for disaster recovery services with Northrop Grumman under the
state’s Master Services Agreement for Data Center and Disaster Recovery Services. The
services utilize a DR contract with SunGard. These agreements specify an amount of test time
per year for each agency, which varies depending upon configuration. Each agency can then use
the time in blocks that meet its individual needs.
TWC contracts for disaster recovery services with IBM Business Continuity and Recovery
Services (BCRS). This agreement specifies 100% recovery of the production environment within
72 hours of an event declaration. The contract provides for 72 hours of test time plus an
additional 8 hours of network testing, and includes two test periods each year. Print and Mail
Services are recovered at the MailGard hot-site facilities.
HHSC-E disaster recovery is included in the Northrop Grumman contract for the Unisys
mainframe. The service utilizes a DR contract with SunGard, which includes two annual
eight-hour test periods and recovery of 70% of the production capacity.


2.0     Mainframe Storage

2.1.1      TxSDC Storage Subsystems
The Texas State Data Center is a multi-vendor environment, with three major DASD storage
vendors and two major tape vendors supporting the mainframe environments. The DASD storage
systems are as follows:

        DASD Storage System   Vendor   Agencies Supported
        Shark 2105-800        IBM      Shared by all agencies within SYSPLEX
        Shark 2105-800        IBM      TDCJ
        Hitachi 9980          HDS      OAG-CS
        DS8100                IBM      TxDOT

The tape storage systems are as follows:

        Tape Storage System                Vendor   Agencies Supported
        Magstar 3590 Virtual Tape System   IBM      Shared by all agencies within SYSPLEX
        Magstar 3590 Virtual Tape System   IBM      TxDOT
        (3) STK 9310 Tape SILO             STK      Shared by all agencies within SYSPLEX
        STK 9310 Tape SILO                 STK      OAG-CS

Note: The STK SILOS are also shared with the distributed server environment.



2.1.2      Agency Data Center Mainframe Storage Subsystems
Storage subsystems, both DASD and tape, are an integral part of the agency data centers. The
agency data centers deploy various DASD and tape storage subsystems and features,







depending on the size and complexity of the individual center. A summary of the Agency
subsystems follows:

       Agency                      DASD Subsystem       Tape Subsystem
       HHSC-E Unisys Environment   EMC 8730             STK 9310/9311
       OAG                         Integrated w/CPU     Hitachi 7480
       RRC                         IBM 9393-002         IBM 3480
       TWC                         IBM 2105-800 Shark   IBM 3590 Magstar VTS
       TYC                         Integrated w/CPU     IBM 3490







3.0     Mainframe Utilization and Service Level Statistics
The resource information summaries below represent the information available at the release of
the RFO. It is expected that this information will be updated and refreshed during the due
diligence period.




        3.1      TxSDC Utilization Summary



                                       Assessment Base Year - Monthly Average

  Resource Description                       Units         HHSC-E      TEA      TDI    OAG-CS    TxDOT          TDCJ     Total
                                                            (MHMR)           (TWCC)                      (2 Machines)

  Mainframe Services
  CPU/DASD Utilization
    IBM - Application Hours                  CPU Hours          13       89       97      838     1,326        1,026     3,389
    IBM - Installed MIPS                     MIPS              115       88      113    1,215       885        1,094     3,510
    IBM - Average Monthly Utilized MIPS      MIPS               60       56       66    1,004                    831     2,017
    IBM - Monthly Peak Allocated DASD        GB                216      563      405       34     1,713        3,950     6,881
    IBM - Installed DASD                     GB                355      798      520   13,387     2,120        6,300    23,479
  Tape Information
    Application Tapes in Storage             Reel/Cart.      2,212    2,412    1,992   12,826     1,200        1,079    21,721
    Application Tapes in Storage -
    Virtual Tape Storage                     GB              5,540        2        4   11,733     1,104        6,800    25,183
    Occupied Automated Application
    Tape Slots                               Cartridge       1,982    2,291    1,877   10,296     1,595        4,000    22,041
    Manual Application Tape Mounts           Mount              75        9       26    6,500        77            0     6,687
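
For illustration only (the figures are simply restated from the summary above, and the calculation is not part of
the RFO data), the ratio of average utilized MIPS to installed MIPS for each environment can be derived directly
from the table. The short Python sketch below performs that arithmetic; TxDOT is omitted because no average
utilized MIPS value is reported for it in this summary.

    # Illustrative only: derive average MIPS utilization from the TxSDC summary above.
    # Figures are copied from the table; they are monthly averages for the base year.
    installed_mips = {"HHSC-E (MHMR)": 115, "TEA": 88, "TDI (TWCC)": 113,
                      "OAG-CS": 1215, "TDCJ (2 machines)": 1094}
    utilized_mips = {"HHSC-E (MHMR)": 60, "TEA": 56, "TDI (TWCC)": 66,
                     "OAG-CS": 1004, "TDCJ (2 machines)": 831}

    for env, installed in installed_mips.items():
        pct = 100.0 * utilized_mips[env] / installed
        print(f"{env}: {pct:.0f}% of installed MIPS utilized on average")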








3.2      Agency Mainframe Utilization Summary



                                       Assessment Base Year - Monthly Average

  Resource Description                           Units          HHSC-E       OAG       RRC          TWC      TYC
                                                                (Unisys)                     (3 Machines)

  Mainframe Services
  CPU/DASD Utilization
    IBM - Application Hours                      CPU Hours                     64        23        1,718       17
    IBM - Installed MIPS                         MIPS                          60        35        1,923       30
    IBM - Average Monthly Utilized MIPS          MIPS                          40                  1,748       30
    IBM - Monthly Peak Allocated DASD            GB                           110       328        1,641       75
    IBM - Installed DASD                         GB                           140       438        4,625      108

    Unisys - Application Hours                   CPU Hours        1,010
    Unisys - Installed MIPS                      MIPS               734
    Unisys - Average Monthly Utilized            MIPS               734
    Unisys - Average Monthly Peak
    Allocated DASD                               GB               1,495
    Unisys - Installed DASD                      GB               1,792
  Tape Information
    Application Tapes in Storage                 Reel/Cart.     125,288       130     3,654       64,472      540
    Application Tapes in Storage -
    Virtual Tape Storage                         GB                            NA        NA       55,999       NA
    Occupied Automated Application
    Tape Slots                                   Cartridge       50,834        NA        NA        2,630       12
    Manual Application Tape Mounts               Mount            2,516       600     8,694       88,547       16








3.3       TxSDC Service Levels
The customers of the TxSDC share a common set of service level metrics. In addition, individual
agencies may have additional service levels. Historical service statistics through December 2005
may be found at http://www.txsdc.com by following the Service Statistics site links.
The common service levels are as follows:


                    Category/Severity                           Criteria                Financial      Operational
                                                                                        Penalty        Goal
                                                                                        Threshold



 Contact Time                                        (Call back to customer)
 Sev1                                                Within 15 minutes                         95.0%          100.0%
 Sev2                                                Within 2 hours                                 95.0%          100.0%
 Sev3                                                Within 12 hours                                90.0%          100.0%
 Sev4                                                Within 24 hours                                90.0%          100.0%
 Sev5                                                Within 8 working days                          90.0%          100.0%


 Hold Time
 Call Hold Time                                      Answered within 60 secs                   90.0%           95.0%
 Call Hold Time                                      Answered within 180 secs                  95.0%          100.0%


 Problem Resolution
 Severity 1                                          Closed within 2 hrs                       50.0%          100.0%
 Severity 2                                          Closed within 4 hrs                       60.0%          100.0%
 Severity 3                                          Closed within 24 hrs                      70.0%          100.0%
 Severity 4                                          Closed within 168 hrs                     85.0%          100.0%
 Severity 5                                          Next Routine Maintenance                  97.0%          100.0%


 System Availability
 Operating System LPAR                                 7 days per week 24 hours                99.0%          100.0%
                                                           per day except for
                                                        scheduled maintenance
 Operating System LPAR                                      Monday - Friday                    99.5%          100.0%
 Prime Hours                                              6:00 a.m. - 8:00 p.m.
                                                             except for any
                                                        scheduled maintenance


 Network Availability
 Network Availability                                  7 days per week 24 hours                99.0%          100.0%
 (NOTE 3)                                                per day except for any
 (NOTE 4)                                               scheduled maintenance







Network Availability Prime Hours                                         Monday - Friday                   99.5%          100.0%
(NOTE 3)                                                               7:00 a.m. - 7:00 p.m.
(NOTE 4)                                                                  except for any
                                                                  scheduled maintenance


Host On-Line Response Time (NOTE 2)
Mainframe Online Applications                                 1 Sec or Less                                                  99%
TSO (NOTE 1)                                                  3 Sec or Less                                                  97%


Mid-range Online Applications (NOTE 5)                        3 Sec or Less                                90.0%


Batch Service
 All Scheduled Batch Application Processing                    All jobs as scheduled in the                98.0%          100.0%
                                                               scheduling package


Online Application Availability
Production                                                       7 days per week/24 hours                  99.0%          100.0%
(NOTE 6)                                                                per day except for
                                                                  scheduled maintenance


Production Prime Hours                                                   Monday - Friday                   99.5%          100.0%
(NOTE 6)                                                               7:00 a.m. - 7:00 p.m.
                                                                          except for any
                                                                  scheduled maintenance


Maximum Outage Frequency                                         7 days per week/24 hours                        4                 0
(NOTE 7)                                                                per day except for
                                                                  scheduled maintenance


Maximum Outage Frequency Prime Hours                                     Monday - Friday                         2                 0
(NOTE 7)                                                               7:00 a.m. - 7:00 p.m.
                                                                          except for any
                                                                  scheduled maintenance


Development, Test, Special                                       5 days per week/16 hours                  98.0%          100.0%
(NOTE 6)                                                                per day; except for
                                                                  scheduled maintenance


NOTE 1 - Period 1 transactions only, as measured by RMF; host internal measurements only.


NOTE 2 - Time measured is only host internal time.


NOTE 3 - As stated in the contracts, problems that are not within NGTSI's direct control are not taken
         into the calculation for determining financial penalties.




NOTE 4 - Network Availability includes all network components within NGTSI responsibility.


NOTE 5 - Response times are measured at a device at the border of NGTSI responsibility. The measurement
         is inclusive of the network segments within NGTSI responsibility and represents the response time
         seen by a client at the edge of NGTSI's area of responsibility; it shall not include client network
         segments outside of NGTSI's area of responsibility.


NOTE 6 - Downtime will be measured until the application is returned to production.


NOTE 7 - Frequency penalty will be enforced based on the number of problem occurrences in the monthly reporting period.
         Penalties will accrue for each incident over the penalty threshold.




Backup/Restore Operations (NOTE 8)

 Quarterly file recovery test                        Customer selected file               Corrected to 100%          100%
                                                     from regular backup                  within ten days

 Disaster recovery test                              Full system recovery                 Successfully tested        100%
                                                                                          per plan

NOTE 8 - Individual customers should address their backup/restore and disaster recovery requirements
         including testing and penalties within their contract.
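
For illustration only, the sketch below shows one way a month's measured availability could be compared against
the financial penalty threshold and operational goal above. The calculation method (scheduled prime minutes less
unplanned outage minutes, divided by scheduled prime minutes) and the example figures are assumptions; this RFO
does not prescribe how availability is to be computed.

    # Illustrative sketch (assumed method, not defined by this RFO): compare a month's
    # prime-hours LPAR availability against the 99.5% financial penalty threshold and
    # the 100% operational goal shown in the service level table above.
    def availability_pct(scheduled_minutes: float, outage_minutes: float) -> float:
        """Availability over the reporting period, excluding scheduled maintenance."""
        return 100.0 * (scheduled_minutes - outage_minutes) / scheduled_minutes

    # Prime hours for the Operating System LPAR are Monday - Friday, 6:00 a.m. - 8:00 p.m.
    # (14 hours per day). Assume 22 business days and 45 minutes of unplanned outage.
    prime_minutes = 22 * 14 * 60                      # 18,480 scheduled prime minutes
    measured = availability_pct(prime_minutes, outage_minutes=45)

    print(f"Measured prime-hours availability: {measured:.2f}%")
    print("Meets penalty threshold" if measured >= 99.5 else "Below financial penalty threshold")
    print("Meets operational goal" if measured >= 100.0 else "Below operational goal (100%)")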




4.0      Mainframe High Volume Print Environments
         Several agencies currently support high volume print environments. These print operations
         support a wide breadth of output and distribution business needs using a variety of equipment to
         create high-speed production print. (See Exhibit 10 for detailed list of equipment, Exhibit 11 for
         detailed Software Assets, Attachment 12A for detailed Third Party Software Licenses, and
         Appendix I for Agency specific information.)
         Some of the unique business requirements from several of the Agencies follow:
- HHSC runs the production Print/Insert shop for several eligibility determination
  applications. Throughout the month, the HHSC print shop prints between 7 and 8
  million pages with 2 to 3 million insertions. An estimated 50% of these processes
  occur during the 5-7 day monthly processing cut-off; the remaining 50% of the print
  processes occur fairly steadily throughout the month.
- HHSC supports special print requirements in March for the Federal Poverty Income
  Limits (FPIL) runs, in September for the Cost of Living Adjustment (COLA) and some
  of the RSDI runs, and in December for the RSDI runs. During the months of September
  and December, HHSC increases its print operations by approximately 200,000 pages.
- HHSC contracts with an external vendor to perform sort functions; the vendor is
  responsible for delivering HHSC’s mail to the post office. On average, mail-out
  volume increases on the first three Tuesdays of the month, with an additional
  200,000 letters on the first Tuesday and an additional 100,000 letters on the
  second and third Tuesdays of the month.
- OAG-CS produces over 30,000 mail pieces consisting of over 200 different letters,
  notices, writs, applications and other forms which are printed, folded, inserted, and
  mailed each day. Approximately 60,000 Custodial Parent Monthly Notices are generated
  once per month and approximately 250,000 billing statements are printed and mailed on a
  single monthly cycle. OAG-CS currently contracts with an Automated Mail Center
  (AMC) vendor.
- TDCJ production print supports the generation of documents for the administration
  and management of approximately 40,000 employees, 150,000 incarcerated offenders,
  and 78,000 parolees, as well as the generation of letters for crime victims and
  trial officials as required by statute. The documents are printed and distributed
  from centralized sites located throughout the State in support of the specific
  business process.
- The TDCJ print operation focuses on the generation of documents to support on-time
  needs (e.g., W-2 forms, employee time reports, Notice to Trial Officials (NTO) and
  Victims Letters [required by Statute], Inmate Bank / Trust Fund statements, Release
  Certificates, etc) at the designated business unit, which in turn effects distribution of the
  print product. In addition, approximately 98,550 monthly inmate bank statements are
  printed and distributed.
- TWC prints high volumes of output related to core services within critical
  timeframes. One example is the Unemployment Insurance Appeals packet, which
  contains 25 to 30 duplex sheets per packet. Depending on unemployment levels,
  20,000 to 40,000 pages are printed daily. Currently, TWC averages 28,600 sheets
  per day, which equates to 1,500 to 2,500 packets and requires 9 to 10 clock hours
  for printing. During times of high unemployment, clock hours for printing have
  increased to 15 hours daily.
- TWC prints more than 64 million images annually. In addition, TWC processes and
  delivers over 14 million pieces of mail per year, which includes printing and inserting
  more than 2.7 million Unemployment Insurance warrants per year. Volumes increase at
  cyclical intervals according to statutory requirements (e.g., quarterly Tax runs generate
  1,000,000 images in a 2-week period.)
- TxDOT prints 440,000 original motor vehicle titles and 1.4 million vehicle
  registration renewal forms per month. The titles are stored in a secure area, with
  stringent requirements for handling and storage; 1,300 square feet of floor space
  is required for storage. The mail insert function is performed by an outside
  contractor, and the mail insert vendor also maintains the titles in a secure area
  with limited access.



