					BackBridge Decommission
Post Implementation Overview

   May 4th, 2006

   Project Background
   Technical Overview
   Scope and Objectives
   Technical Challenges
   Overcoming Technical Challenges
   Solution Architecture
   Implementation Approach
   Performance Summary

BackBridge Decommission (BBD)
   When COMET was implemented, its underlying database, CDB, was
    intended to be the system of record for member and employer data
   Legacy mainframe applications needing these data could not access CDB
   The enterprise devised a solution that would apply CDB changes to the MBR,
    EPR, and MBR Address databases through a nightly batch process called
    BackBridge
   MBR, EPR, and MBR Address are VSAM files (key-sequenced, indexed data sets)
   The BackBridge Decommission (BBD) enables legacy programs to retrieve
    data directly from CDB
   This effort entailed a proof-of-concept (POC) that validated the approach,
    followed by the full implementation
   Programs that called MBR, EPR, and MBR ADDR now call CDB

Technical Overview
[Diagram: the load process feeds the EPR and MBR files; legacy programs call the
common module (DBA001), unchanged, which now routes through new middleware and
new PL/SQL processes to CDB. RIBS and CRS are also shown.]

BBD Scope and Objectives
   Establish a sound, tested infrastructure that supports applications
    that rely on member and employer data
   Enable legacy applications to access CDB in real-time
   Decommission the BackBridge infrastructure
   Decommission MBR, EPR, and MBR Address databases
   Meet the November-December 2005 deployment timeframe
   Deploy in a manner that is “transparent” to the business areas
   Provide a “safety-net” strategy that will allow for seamless rollback
   Implement a maintenance plan for ongoing support

Technical Challenges

   More than 200 batch and online COBOL and
    Natural programs access VSAM databases
   Performance
      Network vs. resident data source
      Relational database vs. native flat files
      Sequential and skip-sequential reads (see the sketch below)
      Online requests need < 1 second response times
      Existing batch programs require multiple hours to run
   Ensuring 24/7 access to the database and middleware
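
The sequential and skip-sequential patterns are the hardest to carry from VSAM to a
relational source. As a rough sketch of the idea only (not the project's actual code;
the member table and ssn column names are hypothetical), a VSAM STARTBR/READNEXT
browse can be emulated over JDBC with a keyed range scan:

    import java.sql.*;

    // Sketch: emulate a VSAM keyed browse (STARTBR + READNEXT) with a
    // keyed range scan. Table and column names are hypothetical.
    public class KeyedBrowse {
        static void browse(Connection conn, String startKey, int maxRows)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT ssn, last_name FROM member WHERE ssn >= ? ORDER BY ssn");
            ps.setString(1, startKey);   // position at the first key >= startKey
            ps.setFetchSize(100);        // let the driver pre-fetch rows in blocks
            ps.setMaxRows(maxRows);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {          // each next() plays the role of READNEXT
                System.out.println(rs.getString(1) + " " + rs.getString(2));
            }
            rs.close();
            ps.close();
        }
    }

Restarting the scan from a later key gives the skip-sequential behavior, and the
driver-side fetch size is one of the pre-fetch/read-ahead levers listed later.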

Solution Architecture

[Diagram: client programs (COBOL batch, Natural batch, Natural online, and CICS
online Assembler) call interface modules (DBA001, CDB001, INQ970, CDB970) and a
VSAM Address module. Each module uses an EntireX RPC client to reach the EntireX
Broker, which forwards to a Java RPC Server; the server calls PL/SQL packages in
CDB on HP-UX through JDBC.]
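
The last hop of this path is an ordinary JDBC call from the Java RPC server into a
PL/SQL packaged procedure. A minimal sketch, assuming a hypothetical package
CDB_PKG with a GET_MEMBER procedure (the real package and parameter names are not
shown in this document):

    import java.sql.*;

    // Sketch: Java RPC server invoking a PL/SQL packaged procedure over JDBC.
    // CDB_PKG.GET_MEMBER and its parameters are hypothetical.
    public class CdbCall {
        static String getMemberName(Connection conn, String ssn) throws SQLException {
            CallableStatement cs = conn.prepareCall("{call CDB_PKG.GET_MEMBER(?, ?)}");
            cs.setString(1, ssn);                      // IN: member key
            cs.registerOutParameter(2, Types.VARCHAR); // OUT: member name
            cs.execute();
            String name = cs.getString(2);
            cs.close();
            return name;
        }
    }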

Physical Architecture
   z/OS 1.5
        COBOL and Natural
        Common Modules (Assembler)
        Common Modules (COBOL)
        Software AG EntireX Broker and RPC Client
   Windows 2000 SP4 on VMware, 2.2 GHz CPU x 4, 1 GB RAM
        Software AG EntireX RPC Server implemented in Java
             4 Online Servers
             1 Batch Server
        JDBC thin driver (see the sketch below)
   HP-UX 11i, 800 MHz CPU x 8, 12 GB RAM
        Oracle
        PL/SQL Packaged Procedures
   1 Gigabit Network (TCP/IP)
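
The thin driver plus a configurable pool of database connections is what each RPC
server holds open to CDB. As a hand-rolled illustration only (the URL, credentials,
and pool size are placeholders; production would rely on the vendor's own pooling):

    import java.sql.*;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Sketch: a fixed-size JDBC connection pool over the Oracle thin driver.
    // Host, SID, credentials, and pool size are placeholders.
    public class SimplePool {
        private final BlockingQueue<Connection> pool;

        SimplePool(int size) throws SQLException {
            pool = new ArrayBlockingQueue<Connection>(size);
            for (int i = 0; i < size; i++) {
                pool.add(DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:CDB", "app_user", "app_pw"));
            }
        }

        Connection borrow() throws InterruptedException { return pool.take(); }
        void release(Connection c)                      { pool.offer(c); }
    }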

Overcoming Technical Challenges

   Leverage common modules
   Re-factor existing PL/SQL
   Connection pooling (configurable)
   Pre-fetch and multithreading
   Read ahead
   Data caching (sketched below)
   Splitting large programs
   Monitoring
   Ping program (sketched below)
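
As a rough sketch of two of these tactics (the cache size and probe query are
arbitrary choices for illustration, not the project's actual values):

    import java.sql.*;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch: an LRU data cache for hot records plus a "ping" health check.
    public class Tactics {
        static final int CACHE_SIZE = 10000;  // configurable cap, placeholder value
        static final Map<String, String> cache =
            new LinkedHashMap<String, String>(CACHE_SIZE, 0.75f, true) {
                protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                    return size() > CACHE_SIZE;  // evict least-recently-used entry
                }
            };

        // Ping: prove the full path to the database is alive end to end.
        static boolean ping(Connection conn) {
            try {
                Statement st = conn.createStatement();
                ResultSet rs = st.executeQuery("SELECT 1 FROM DUAL"); // Oracle no-op
                boolean ok = rs.next();
                rs.close();
                st.close();
                return ok;
            } catch (SQLException e) {
                return false;
            }
        }
    }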

High-Level Schedule

[Timeline, 6/05 through 1/06: Development/Unit Test, then System and Performance
Test, then IST/CAT Testing leading to production cutover (END PROD) near the end
of 2005, followed by Maintenance and ongoing support.]

Status and Accomplishments
   Proof-of-Concept (two to three months)
        Built and validated middleware solution for Legacy online and batch programs
        Implemented error management strategy
        Demonstrated load balancing
        Accommodated problematic programs (skip/sequential)
        Proved the solution can handle production volumes
             Less than 30% increase in completion times for production batch processing
             Negligible difference in production online response times
   Implementation (six months)
      Acquired team commitments and management support
      Established development and test environments
      Conducted comprehensive testing
      Refined infrastructure
      Minimized modifications to existing programs
      Deployed ahead of schedule
      Delivered under budget

Accomplishments (Cont.)
 “On November 11th, the Legacy Enrollment Database and its
 associated "backbridge" process were decommissioned. For those
 of you who have lived the pain of reconciling data discrepancies
 between COMET and the legacy systems, the significance of this
 achievement is huge! The solution went into production seven
 weeks ahead of schedule and well under budget (the budget was
 originally estimated at $4-5 million, but delivered for only $1.2
 million) and the ongoing annual savings to CalPERS is
 estimated at $500,000 - $700,000. However, the greatest benefit to
 this project is that we've eliminated a major source of our data
 integrity and redundancy problems.”

 - Fred Buenrostro, CEO CalPERS

Other Challenges and Risks
   Ongoing Enterprise Obligations
      COMET October Release
      R Street Project
      Forte Migration Project
      Year End Processing
      Ongoing Production Support
   Performance
   Sufficient Testing
   Annual Programs

Performance Test Results
   Summary
        Batch run times were 1.2 to 3 times longer in the new infrastructure
              Although selected batch programs ran up to 3 times longer, the overall impact did
               not translate into a similar increase in total batch processing time
              Programs that access MBR/EPR data account for only roughly 30% of total batch
               processing time (see the arithmetic note below the list)
        Selected batch production job processes, consisting of multiple job steps and
         programs, ran only 1 to 1.4 times longer, well within the acceptable threshold of
         2 times
        The change in online performance was unnoticeable
        All legacy programs, including “problematic” programs that perform sequential and
         skip-sequential reads, can be accommodated in the new infrastructure
        No noticeable degradation in SmartDesk performance after the change to access CDB
         for MBR/EPR data
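
A rough back-of-the-envelope check makes these numbers consistent: if a fraction f
of total batch time is spent in MBR/EPR programs and those programs slow by a
factor k, total batch time grows by (1 - f) + f x k. With f = 0.3 and k between
1.2 and 3, the overall factor lands between about 1.06 and 1.6, which brackets the
observed 1.0 to 1.4.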

   Conclusions
        The solution is technically feasible and suitable for production use
        It accommodates legacy programs that perform both direct and skip-sequential
         database accesses with minimal or no changes
        It scales to handle large volumes of database calls under actual production
         conditions
        It can be configured and tuned to achieve performance in production similar to
         that observed during testing

Implementation Results
 Deployed on time and on budget
 Stable environment
     No production downtime
     Supports online and batch without issue

 Decommissioned old infrastructure
 Enhancements included 11 annual programs


Appendix: POC Test Results
 Program | Type | Database | Time (CDB) | Time (VSAM) | Perf. Factor* (CDB vs VSAM) | Volume | Comments
 CRS230 | Batch | MBR-ADDR | 49:15 min | 39:17 min | 1.2 | 955,947 VSAM reads of the Member Address database, resulting in 10,622 CDB calls | Each call returns 90 addresses, so the per-SSN rate is about 3 ms, or 19,350 SSNs/min.
 CRI100 | Batch | MBR | 17 sec | 10 sec | 1.7 | 1,093 records |
 CRW120 | Batch | MBR-ADDR | 3:11 min | 1:38 min | 2.0 | 13,092 VSAM reads of Member Address, resulting in 13,066 CDB calls |
 CRS560 | Batch | MBR | 9 sec | 3.5 sec | 2.6 | 952 I/Os, resulting in 533 CDB calls |
 CRI250 | Batch | MBR | 5:00 min | 1:37 min | 3.1 | 15,922 I/Os, resulting in 3,297 CDB calls |

 Process | Type | Database | Time (CDB) | Time (VSAM) | Perf. Factor* (CDB vs VSAM) | Volume | Comments
 Estimates Slip Daily (Batch) | Batch | MBR, EPR | 4:48 min | 3:37 min | 1.3 | 408 estimates: 7,516 calls to CDB; 35,000 I/Os to MBR and EPR | Times are averages.
 CRS Nightly Batch Snapshot | Batch | MBR | ~2 hr | ~2 hr | 1.0 | Representative of an average nightly batch run for CRS | Represents: Adjusts, Daily Refunds, Pre-Proc, Corrections, Daily and Weekly Refunds
 Estimates | Online | MBR, EPR | < 1 sec | < 1 sec | Negligible | Selected test cases, under low volume | Online Natural application
 CIS / PF4 | Online | MBR, EPR | < 1 sec | < 1 sec | Negligible | Selected test cases, under low volume | Online Assembler application
 SmartDesk | Online | EPR | N/A | N/A | N/A | Selected test cases | Changed in production to access CDB data
 MemberCalc | Online | CRS, MBR, EPR | N/A | N/A | N/A | Selected test cases | Online PowerBuilder application demonstrating adequate performance

 *The Performance Factor compares the new infrastructure (CDB source) against the current
 BackBridge infrastructure (mainframe MBR/EPR databases). For example, a factor of 1.2
 means the new infrastructure takes 1.2 times as long.
