Agency Name: Department of Licensing
Project Name: HP3000 Replatforming
Project Level: Level 3
Date Presented to ISB: March 6, 2008
Project Description: The Department of Licensing (DOL) successfully completed the
second of three projects that move agency computer applications off mainframe
computers and onto DOL’s State-compliant Windows server-based platform.
The HP 3000 Replatforming project migrated the Vehicle Field System (VFS) and
Heating Oil (HOAP) applications onto DOL servers as web-based systems during the
2005-2007 biennium. This was a time-critical change because Hewlett-Packard had
announced a December 2006 end-of-life for the HP3000 line of computers and software.
Without HP support, a computer or software failure would cause a state-wide shutdown
of all title and registration transactions for vehicles and vessels and halt the daily
collection of $2-3 million of State revenue. A failure would also directly impact
the 39 County Auditor offices and 140+ subagent offices that get income from these
transactions. Fortunately during the project, Hewlett-Packard extended the end-of-life
date from December 2006 to December 2008.
A secondary goal of this project was to remove the overnight delay in moving title and
registration information from the HP computer to the agency’s data repository.
“Immediate Updates” of the repository provide value to law enforcement, businesses, and
government agencies by delivering up-to-the-minute information on vehicle and vessel
ownership and registration status. Immediate Updates also allow the public to complete
their business in one visit to a licensing office instead of two when they need to make
more than one transaction per vehicle or vessel.
Other project goals included improving agency responsiveness to mandated licensing rule
changes, reducing the number of different computer platforms being supported and
positioning the agency to improve customer services via expansion of E-Government.
The project was completed $132,000 below the original ISB-approved $7.85 million
budget, with the bulk of the work completed on-schedule. The deployment of the
“Immediate Update” enhancement was delayed three months, partly because of the delay
in stabilizing the replatformed VFS application and partly at the request of the users who
needed DOL to avoid making this change during the busy summer months. The project
provided innovative user training via an instruction booklet illustrated with copies of
computer screens, plus a PowerPoint-based presentation with voice-over. Users
appreciated this approach because it allowed them to review and repeat the training at
times convenient to them. DOL’s training cost was $47,000 out of $259,000 provided in
a separate training budget.
Original Project Scope: Re-Host DOL’s HP3000 Vehicle Field System to Microsoft
Did the project deliver the functionality that was intended? Yes / No
If no, please describe.
Original Budget: $7.85 million Original Schedule: June 2007
Final Cost: $7.72 million Completed: September 2007
Executive Support: Does the Executive Sponsor have a global view of the project, set the
agenda, arrange the funding, and articulate the project’s overall objectives? Is the
Executive Sponsor actively involved, an ardent supporter, responsive, and accountable
for the project’s success?
User Involvement: Do the primary users have good communications skills allowing them
to clearly explain business processes in detail to the IT organization? Are the primary
users trained to follow project management protocols? Are the users realists and aware
of the limitations of the project?
Experienced Project Manager: Does the Project Manager possess technology and
business knowledge, judgment, negotiation, good communication, and organization
skills? Does the Project Manager possess soft skills, such as diplomacy and time
management? Is the Project Manager able to communicate with executives as well as
business representatives? Is the Project Manager able to say “no”?
• This project started out with two project sponsors: one from the business side and
the CIO. The business sponsor did not actively own the project’s direction or
progress, so all sponsorship responsibilities soon defaulted to the CIO and the
project became “a computer project.” It is recommended that each project have only
one sponsor, and that this person be from the agency function that benefits from or
needs the output of the project.
• The CIO should remain a key member of the Project Steering Committee and
continue to support the Project Management and IS side of the project work.
• The CIO provided strong and effective support of the Project Manager. This was
particularly important when dealing with the Fujitsu relationship and contract.
• The CIO was also most helpful in encouraging the project team during difficult periods.
• In addition to the “normal” planning, tracking, controlling, reporting and leadership
responsibilities, the Project Manager must work with IS management and staff so that
groups which now provide services as isolated functional teams instead join the
project team as collaborative partners. Groups that are especially critical to
include in this collaborative project team culture are the Configuration Management
group and the Network Support Services (LAN Support / Infrastructure) group (NSS).
• Clear documentation of project team roles and responsibilities helped the project team
smoothly and effectively work together and perform the required work. This also
helped smooth transitions when there were changes in the staff filling team roles.
• It might have helped if Work Group Leads had been reminded more often of their
personal accountability for Work Group deliverables.
• The Project’s Business Liaison should be physically located with the project team
and assigned to report directly to the Project Manager. This person is responsible for
establishing and strengthening two-way communications between the business and
the project. Each day this individual will be getting information to or from the correct
person on the business side. The role also is responsible for setup and leadership of
activities involving business-side management or end-users, such as User Training,
and User Acceptance. This person also creates project newsletters and high-level
status updates for the business managers, user support staff and application users.
• Involve Business Liaison in approval and tracking of deliverables.
• A Project Test Lead is the second-most critical member of the project, next to the
Project Manager. This person must have intimate hands-on experience with the
business area and the applications into which the project’s results will be delivered.
As with the Project Business Liaison, this person must be located on-site
and report directly to the Project Manager.
• Each project must have one person assigned as the Technical Lead / Technical
Expert. This person will ensure (1) the architecture of the new application is sound,
(2) the Dev, QA (and Production) environments are built just in time for use by
project staff, (3) communications and coordination of work between the Project and
the LAN Support team run smoothly and follow the associated Configuration
Management and Network Design / Documentation procedures, and (4) the
Deployment Work Group produces the deployment checklist, collects the detailed
installation instructions and scripts, and then leads the deployment.
• The Technical Lead should also be responsible for setting up, executing, and
reporting on Stress and Application Performance testing.
• If possible, the project team should be located physically near to the impacted users,
and / or the business staff who support those users. (Travel time between locations
can be a significant loss of work time.)
• For larger projects, break the team into Work Groups who are responsible for
specific deliverables. Each Work Group is to have a responsible Lead who is a
member of the Project Core Team.
• Each member of the Core Team is directly responsible for keeping their work on
schedule. That will likely include leading the clearing of all hurdles (technical and
people issues) that would otherwise impede progress on the Work Group’s tasks.
• When any Work Group task is stopped, that fact is to be immediately communicated
to the Project Manager, but responsibility to fix that work stoppage remains with the
Work Group Lead. The Project Manager may choose to assist, but will not take over
this responsibility.
• Projects that include batch jobs need to provide staff (Developers, NOT the Technical
Expert) to set up and load these jobs and their calendaring in the development
environment (DEV) so that it complies with Agency standards. The knowledge
to do this cannot continue to reside just with one person within NSS. Once correctly
operating in DEV, the configuration management services group (CMSG) can easily
and safely promote it to QA, and NSS can promote it from QA to Production.
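The promotion path in the bullet above (developers set up batch jobs in DEV, CMSG promotes them to QA, and NSS promotes them to Production) can be sketched as a simple ownership check. This is an illustration only; the group names mirror the text, but the code and function names are hypothetical, not actual DOL tooling.

```python
# Hypothetical sketch of the DEV -> QA -> Production promotion path.
# Each hop is owned by exactly one group, as the report describes.
PROMOTIONS = {
    ("DEV", "QA"): "CMSG",   # configuration management services group
    ("QA", "PROD"): "NSS",   # network support services
}

def promote(current_env: str, target_env: str, requesting_group: str) -> str:
    """Return the new environment if the requesting group owns this hop."""
    owner = PROMOTIONS.get((current_env, target_env))
    if owner is None:
        raise ValueError(f"no promotion path {current_env} -> {target_env}")
    if requesting_group != owner:
        raise PermissionError(
            f"{requesting_group} cannot promote {current_env} -> {target_env}; "
            f"{owner} owns this step")
    return target_env

env = promote("DEV", "QA", "CMSG")   # CMSG promotes DEV to QA
env = promote(env, "PROD", "NSS")    # NSS promotes QA to Production
```

The point of the guard is the report's lesson: once a batch setup works in DEV, promotion should be routine and owned by one accountable group per step.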
• Make sure Work Group Leads understand they are responsible for saving all records
of their Work Group’s activities into the electronic and paper repositories. Every
decision must be logged into SPLAT or another electronic repository. Records must
be kept up to date (no more than five days old).
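The decision-logging discipline above (every decision dated, with participants, entered within five days) can be sketched as a minimal record structure. SPLAT is DOL's actual tracker; this stand-in and its field names are assumptions made only to illustrate the rule.

```python
from datetime import date, timedelta

# Minimal sketch of the decision-logging rule described above.
# Field names are illustrative, not SPLAT's actual schema.
class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, decision: str, made_on: date, participants: list[str]):
        """Log a decision with its date and who was involved."""
        self.entries.append({"decision": decision, "date": made_on,
                             "participants": participants})

    def stale_entries(self, today: date, max_age_days: int = 5):
        """Entries that violate the five-day freshness rule from the text."""
        cutoff = today - timedelta(days=max_age_days)
        return [e for e in self.entries if e["date"] < cutoff]
```

A periodic check on `stale_entries` would flag Work Groups whose records are falling behind the five-day rule.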
• Having the business’s most-experienced user-support staff as the Test Team produced
comprehensive and innovative test scenarios and scripts. Having an
experienced Testing Lead from the business ensured both high quality leadership of
testing work, and that test planning and test data preparation would reflect all the
important business scenarios.
• The IS Liaison for the business and the Project’s Business Liaison need to stay in
close communication and support each other during the project. Both must stay
equally informed, and because they work at different levels and in different ways with
the business, they form a powerful team in representing the project while being very
inclusive of business needs and issues. Both are required and neither has the
bandwidth to do the work of the other.
• The Project Core Team should consist of the Project Manager, Business Liaison,
External QA, a Technical Lead, and Work Group Leads. The Project Manager uses
the weekly Core Team meeting to provide information and to collect information that
will drive decision making. It is recommended each Core Team attendee report
weekly on (a) accomplishments during the past week, (b) hurdles to current progress,
and (c) worries about future issues.
Clear Business Objectives: Are the project objectives clearly defined and understood
throughout the organization? Is the project measured against these objectives regularly
to provide an opportunity for early recognition and correction of problems, justification
for resources and funding, and preventive planning on future projects? Does the agency
understand the impact of the proposed business process changes?
Minimized Scope: Is project scope realistic and able to be accomplished within the
identified project duration? Is it measured and managed regularly to eliminate scope creep?
See Formal Methodology.
Responsive Business Requirements Process: Does the project employ a responsive
requirements process that manages requirements issues quickly and without major
conflicts? Requirements management is the process of identifying, documenting,
communicating, tracking, and managing project requirements as well as changes to those
requirements. This is an ongoing process and must stay in lockstep with the development process.
Standard Infrastructure: Has the project established a standard technology infrastructure
that includes operational and organizational protocols? Is the infrastructure commonly
understood and regularly assessed?
Formal Methodology: Does the project follow a formal methodology that provides a
realistic picture of the project and the resource commitments? Are steps and procedures
reproducible and reusable thereby maximizing project-wide consistency?
• A clear escalation path and contact information should be provided for each work
area and technical topic. Typically this will be the Work Group Lead.
• One of the most important communication links within the project is between the
Business Liaison and the Test Team Lead.
• Including an experienced person from the business as Business Liaison, responsible
for two-way communication with the Project, in addition to IS Managers to provide
two-way communication between the Project and IS developers, saved time and effort.
• Provide regular, high-level status updates (accomplished, next steps) to all business
stakeholders and users by a one-page newsletter with an eye-catching graphic
associated with each topic. If it’s short and attractive, it will get read. If just text, it
won’t get read.
• Engage the user and user support staff in reviews of the user interface and in User
Acceptance, and even in testing if they are interested. We ran a Tester-for-a-Week
volunteer program that was very successful in spreading accurate information about
the changes that were coming.
• Good involvement of the Vehicles Services staff in the project activities during the
immediate update and personalized specialty plate phase (IU+PSP) made things go
more smoothly than they did for the Replatforming pre- and post-deployment activities.
• The Project Manager and Business Liaison should take every opportunity to attend
meetings and user conferences and to present the project’s objectives and timeline.
Building relationships with individual users, user support staff, and the DOL
managers was key to having everyone pulling together when we deployed.
• Make sure to use SPLAT (issue tracker) to record EVERY decision made, the date,
and who was involved. They MUST be recorded on the date they are made or you will
lose track of them. The Work Group Lead, if not otherwise assigned, is responsible
for recording decisions made in his/her Workgroup. The Project Manager or assistant
is responsible for recording decisions made in general meetings and discussions. The
Business Liaison is responsible for recording decisions made by the business. That
way all decisions are recorded in one easily-accessible place. Depending on email for
that repository does NOT work as there are too many places to look.
• It is important to have a solid post-deployment communication plan for reporting
status of post-deployment defects and activities taking place to fix those problems.
This is especially important when money is involved and users will wonder what
actions they need to take and what documentation they will have to pass an audit.
The communication plan must address what and how information will be reported
both inside and outside the Agency. We did not do well with this initially following
the VFS deployment, but quickly fixed those problems.
• Having communications about the post-deployment status sent to field offices, not
only by the Project via normal vehicles mailboxes, but also by electronic letters from
the Agency Director, was important in calming external fears and concerns.
• Daily post-deployment status reports to the Project Steering Committee also helped
disseminate accurate information in quantitative terms. This was required to combat
reports that made problems appear much bigger than they actually were.
• It is critical to provide everyone with up-to-date status on application defects so those
who reported defects know their issue has been logged and some perspective on when
it may be fixed. They also need to see the relative priority given to their request
compared to other defects and can understand why it may take time to get their issue
fixed. Use of an intranet site was suggested as a good way to make that information available.
Project Information Management
• SPLAT was DOL IS’s tracking and reporting application. This function is now
migrating to SharePoint, so staff are now learning to use this new tool.
• Assignment of prioritized Action Items was an effective way to guide busy
developers through work needed by the Project.
• Testers, Developers and Project Managers need a way to view EVERY change to
the infrastructure made by NSS. At least four times during the project, three to five
days were lost troubleshooting why a program that had been working suddenly
stopped working. In all cases, NSS reported they had not made any
change, and each time it was proven that someone in NSS did make the change that
broke the application.
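The visibility the bullet above calls for amounts to a shared, queryable journal of infrastructure changes. The sketch below is an illustration of that idea only; the field names and functions are assumptions, not an actual DOL or NSS system.

```python
from datetime import datetime

# Illustrative change journal: every infrastructure change is logged
# where testers and developers can see it, so "what changed on this
# server?" is answerable without days of troubleshooting.
infra_changes: list[dict] = []

def log_change(who: str, server: str, description: str) -> None:
    """Record an infrastructure change as it is made."""
    infra_changes.append({"when": datetime.now(), "who": who,
                          "server": server, "what": description})

def changes_for(server: str) -> list[dict]:
    """List all recorded changes affecting one server."""
    return [c for c in infra_changes if c["server"] == server]
```

With a journal like this, the four incidents the report describes would have started from a list of recent changes rather than a denial that any were made.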
• The structure of project repositories on drive P: needs improvement to better align
with the information collected during the project. The structure should allow faster
retrieval of the needed information.
• A much better approach is needed to store email messages and their attachments into
the project repository. The current Save-As process is extremely time-consuming.
• The Project Manager’s leadership was critical in forcing a very clear vision and scope
statement for the Immediate Update enhancement. Without these constraints, the risk
of scope creep would have been high.
• Task estimates need a larger buffer against external impacts on the project, such as
other projects’ demands for resources and legislatively mandated changes. Because
legislative impacts could not be restricted, these changes added 30 percent to the
time required just to do the testing.
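The 30 percent figure reported above suggests a simple buffered-estimate calculation. This is a worked illustration only; the function name and the idea of applying a flat percentage buffer are assumptions drawn from the report's observation, not a DOL estimating standard.

```python
def buffered_estimate(base_hours: float, buffer_pct: float = 30.0) -> float:
    """Pad a task estimate against external impacts (resource contention,
    legislative mandates). The 30 percent default reflects the overhead
    this project observed on testing; the right buffer varies by project."""
    return base_hours * (1 + buffer_pct / 100.0)

# 200 planned testing hours, buffered at the observed 30 percent -> 260 hours
padded = buffered_estimate(200)
```

The calculation is trivial; the lesson is that the buffer must be in the schedule from the start, not absorbed as overruns.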
• The business’s Personalized Specialty Plate (PSP) design team did an excellent job
preparing the SRS (specifications). This was a VERY complex change that required
a huge number of design decisions. The PSP specs were very stable due to this
team’s involvement of all stakeholders in the design and review process.
• If specification documents are found to be unclear or ambiguous, work should stop
and the specifications should be fixed immediately via collaborative reviews. The
scope impact of this work needs to be reported, but it will be smaller than the cost
of spreading the clarification over many weeks.
• The RFP must inform the vendor that no work on a Change Request may be done
until it receives the required sign-offs by DOL.
• The vendor is to provide a not-to-exceed bid for the work described in each change
request. If the work is completed in less time, the vendor is to charge DOL only the
actual hours or actual cost of the change.
• Changes requiring up to 12 hours of effort from the contracted Change Request
Buffer may be approved by the Project Manager.
• Changes above 12 hours of effort, or with a direct cost to the project, require
signature approval by the DOL Chief Information Officer.
• DOL may request the change be completed by a certain date, in which case the
vendor is to respond with the impact on other project tasks assigned to them in order
to meet that date. Within 5 business days, DOL will either select one of those options
or negotiate a different approach.
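The change-request rules above combine not-to-exceed billing with a 12-hour approval threshold. The sketch below restates both rules as code for clarity; the function names are invented, and it deliberately simplifies the CIO rule (which also covers any change with a direct project cost, regardless of hours).

```python
# Illustrative restatement of the change-request rules in the report.
def billable_hours(actual_hours: float, bid_hours: float) -> float:
    """Vendor charges the lesser of actual effort and the not-to-exceed bid."""
    return min(actual_hours, bid_hours)

def required_approver(effort_hours: float) -> str:
    """Changes up to 12 hours: Project Manager; above that: the CIO.
    (The report also routes any change with direct project cost to the CIO,
    which this simplified sketch does not model.)"""
    if effort_hours <= 12:
        return "Project Manager"
    return "Chief Information Officer"
```

Writing thresholds like these into the contract, as the report recommends, removes ambiguity about who signs off and what the vendor may invoice.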
Configuration Management (Version Control)
• The Project Manager must ensure the Dev and QA environments are set up at the
beginning of the project and that the associated areas are established within the
Configuration Management tool (VSS/TFS). All developers are to be reminded that use
of DOL’s configuration management and code promotion processes is required.
• Areas within the configuration management tool must also be set up at the beginning
of the project to hold the deployment checklist, instructions, and scripts.
• There continues to be significant staff resistance to using centralized configuration
management (version control) for software. This is caused in part by the
relinquishing of code control to a third party that feels to developers like a hurdle
rather than a helpful safeguard. The configuration management procedures appear to
be still evolving. They do not yet fill the needs for the dynamic multi-concurrent-
project work environments of DOL and are just gaining procedures to support
development interruptions from Legislative-mandated and changing business needs.
• Currently, code held by the Configuration Management Services Group (CMSG) is
not routinely nor reliably updated to reflect changes made via emergency releases.
Similarly, when emergency releases don’t work correctly, fixes to these releases are
applied on the fly and are an additional risk to having the checked-in code reflect the
executables in Production. Additional procedures, education and commitment are
needed to make this a stable process.
• Configuration Management processes were evolving during this project. Many code
revision management problems would have been avoided with the procedures that are
now in place.
• Having one member of the Configuration Management group dedicated to processing
project requests was effective and important. This CMSG representative became
familiar with how the project was managing code and this allowed her to be pro-
active and helpful.
Design & Development Process
• All architectural and functional designs must go through a peer review by DOL staff.
No exceptions! Time must be put in the schedule and Quarterly Plan for this activity.
• DOL staff are responsible for taking the time and asking questions such that they
fully understand what they are approving. This may require perseverance when
dealing with aggressive vendors and when staff do not understand the technology
being applied. DOL staff’s signatures on peer reviews MUST indicate that staff fully
understand the design being presented and approve it as being an approach for which
they are personally willing to take ownership.
• Peer reviews of Fujitsu’s replatforming designs took place too late (they evolved
some bad solutions that were not reviewed with DOL staff). The problems were not
evident until code was being delivered. The vendor was most reluctant to have DOL
staff review their decisions early on, and the DOL Project Manager should have been
more aggressive in not allowing that.
• DOL IS needs written instructions on how to set up a batch environment. Currently
there is only one person who really understands how batch needs to be setup and how
calendaring of jobs must be defined.
• All requests for database changes must include a statement to “Restore Permissions”
following the change. Without this, permissions for application programs and users
will be lost.
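The rule above (every database change request must end by restoring permissions) can be enforced with a simple completeness check on the change-request steps. This is a hypothetical sketch; the checklist wording and function name are assumptions, not DOL's actual request format.

```python
# Illustrative guard for the rule above: a database change request is
# complete only if its final step restores application and user
# permissions, which the change would otherwise drop.
def validate_db_change_request(steps: list[str]) -> bool:
    """Return True only when the last step is 'Restore Permissions'."""
    return bool(steps) and steps[-1].lower() == "restore permissions"

validate_db_change_request(["alter table", "reload data"])          # incomplete
validate_db_change_request(["alter table", "Restore Permissions"])  # complete
```

A check like this, run when the request is submitted, catches the omission before the permissions are actually lost.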
• In making change requests to NSS, simply requesting replacement of an .exe or .dll
on the server is frequently not enough; you must also request the server be
“refreshed” to activate the change.
• Developers should have the knowledge to set up their own batch environments in Dev,
and then have the setup approved by LAN Support, so batch setup does not become a
stressful, critical-path, late-project activity.
The Test Lead and Test Team for this project were exceptional.
• To do adequate testing, the testers MUST come from the business area using the
application, and have hands-on experience using the application. This is required
for identification of the test scripts that must be run. Developing test scripts from
design specifications can be done, but will risk missing tests that can only be
known when the changes being tested are put into the context of the business
workflow. For replatforming projects, there are no application design
specifications from which to establish a test plan.
• The Test Team did a GREAT job at identifying test cases. This was possible
because of their in-depth knowledge of the VFS application from years of hands-
on experience in the field and in field user support.
• The Test Team used a highly-effective peer review process for Test Plans and
Test Scripts. The reviews went step-by-step through each script, and often also
considered what was happening to the data during the test.
• The Testing Lead’s experience allowed her to provide very accurate estimates of the
testing effort required.
• The Test Lead from DOL and the Test Lead from Fujitsu established an excellent
system for tracking test scripts. This system is also being used on the Border
Crossing project.
• To get the testing work completed on time, as many as six concurrent testing
environments were active at a time. While there was considerable push-back
from CMSG and LAN Support to having parallel testing environments, and it
required careful management to reconfigure environments and update the data in
each environment, the HP Project staff did this well and the needed productivity
from the parallel environments was achieved.
• It was critical to have one person within the Test Team track code deliveries from
the vendors and track what version of code and what test data were active in each
of the testing environments. This person also initiated and tracked all
configuration and pointer changes that needed to be made with changes installed
to each environment. CMSG did not consider this tracking and management
within their scope of responsibilities and LAN Support has no procedure for
tracking the current state of each environment.
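The per-environment tracking described above (which code build and which test data set are active in each testing environment) can be sketched as a small state table. The structure below is an assumption about what that one tracker maintained, with invented names throughout.

```python
# Sketch of environment-state tracking: one record per test environment,
# updated whenever code or data is installed, so anyone can answer
# "what is running in QA3 right now?" without asking around.
env_state: dict[str, dict] = {}

def install(env: str, code_version: str, data_set: str) -> None:
    """Record a code delivery and its companion test data for an environment."""
    env_state[env] = {"code": code_version, "data": data_set}

def current_state(env: str) -> dict:
    """Return what is active in an environment (empty if nothing recorded)."""
    return env_state.get(env, {})
```

The report notes neither CMSG nor LAN Support tracked this, so even a lightweight table of this shape, kept by one accountable person, closed a real gap.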
• If Test Scripts are to be re-usable they must (1) be kept up-to-date with all
application changes (scheduled and emergency releases), and (2) have scripts to
automatically establish the initial data conditions required to run the test.
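A reusable test script of the kind described above pairs the test itself with an automated setup step that establishes its initial data conditions. The sketch below shows that shape only; the vehicle record, fee values, and function names are all invented for illustration.

```python
# Minimal shape for a reusable test script per the bullet above:
# automated data conditioning (requirement 2) means the script can be
# rerun after any scheduled or emergency release without manual prep.
def condition_data(db: dict) -> None:
    """Automatically establish the data conditions this script requires."""
    db["vehicle_1234"] = {"status": "registered", "fees_due": 30.00}

def run_renewal_test(db: dict) -> bool:
    """Condition the data, exercise the transaction, check the post-condition."""
    condition_data(db)                 # no manual data preparation needed
    record = db["vehicle_1234"]
    record["fees_due"] = 0.00          # stand-in for the renewal transaction
    return record["fees_due"] == 0.00  # expected post-condition
```

Keeping the conditioning step inside the script is what makes requirement (1), staying current with application changes, practical: only one artifact has to be updated per release.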
• To prepare test scripts to test changes (Legislative-mandates or internal requests),
the Test Team must be provided with a narrative of what each change modifies in
the workflow, calculations, and user interaction with the screen. A description of
the program change is not sufficient.
• The Test Team needs to be promptly informed when Change Requests are
submitted to developers to modify behavior of programs because test scripts and
test data frequently must be modified to test the change.
• On an application like VFS with many data-controlled paths, at least 100 percent
more time needs to be scheduled for running through more of the different
business scenarios. During the UAR Project, it was sufficient to create test
scenarios to verify all the programs at least once. Scenarios were designed like
strings threading through beads. When all the beads were included on one or
more strings, the needed tests were defined. For the HP Project, data changed at
one node actually changes the behavior of the programs that followed, so each
program needs to be tested through many paths, not just one. Testing must include
all values the navigation-controlling variables may assume.
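The coverage requirement above (test every value each navigation-controlling variable may assume, not one path per program) amounts to taking the cross product of those values. The variables and values below are invented examples, not VFS's actual control data.

```python
from itertools import product

# Sketch of data-controlled path coverage: when data changes behavior
# downstream, test cases come from the cross product of controlling
# values, not from touching each program once.
controlling_vars = {
    "owner_type":   ["individual", "business"],
    "record_state": ["clear", "lien", "stolen-flag"],
    "vessel":       [False, True],
}

test_cases = [dict(zip(controlling_vars, combo))
              for combo in product(*controlling_vars.values())]
# 2 * 3 * 2 = 12 path combinations instead of a single string of "beads"
```

The combinatorial growth is exactly why the report says at least 100 percent more testing time was needed on VFS than on a program-coverage project like UAR.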
• Contention for testing environments and staff for batch and end-to-end testing
required the original end-to-end test plan be “chopped up” such that the workflow
was more complicated and took longer than originally planned. This was made
worse by the amount of retesting imposed on the testers due to bugs in the
delivered code. The schedule needed a bigger contingency buffer at this point so that
testing activities did not have to be stacked on top of each other.
• During the stabilization of the replatformed VFS, many of the bugs reported
couldn’t be reproduced by the Test Team. We were never sure if a prior bug fix
took care of the problem or if Fujitsu had fixed more than the bug but only
reported fixing the original defect. Lack of open, candid information exchange
from Fujitsu to the Test Team was one of the factors that caused the protracted
stabilization period.
Batch Job Setup & Testing
• Batch Testing setup and execution took four times as long as originally estimated
because (a) setup of all the environments the batch jobs needed to interface with
was more complex than anticipated, (b) only one environment at a time can be
pointed to the QA environment with interfaces to other applications and externals,
so batch testing and online testing had to be scheduled at different times of the day.
• Establishing bug-free batch job calendaring (daily job schedule) took much longer
than planned. The current job flow diagram is a critical and required tool to do
this, but it needs improvement because mistakes are still common and the process
remains somewhat trial-and-error.
• Batch and End-to-end testing took between four hours and all day because of the
old, slow servers that were provided for Testing (both batch and on-line). The
Project Manager should have purchased new, fast servers for this activity as soon as
the impact on testing surfaced. Unfortunately, the slowness of the test
servers was not raised early on as a problem.
• Batch testing will need to have on-line scripts (activities) executed to prepare data
for the batch runs. This must be considered when putting batch testing on the
project schedule because it may interrupt and add to other time-sensitive work
being performed by the Test Team. Likewise the schedule must reflect that the
Test Team will be called upon to evaluate reports and screen values generated by
the batch jobs to verify the batch job performed the expected work.
• Be sure to make data backups of conditions at the start of batch tests and at many
mid-points during each batch test. That way, if the test needs to be restarted, a
minimum of time is required to do data conditioning for that restart.
• Plan testing of batch jobs to begin with daily jobs, then move to weekly,
monthly, quarterly, and yearly so data is being built for the next round of testing
as tests are run.
• When doing batch testing, make what might be considered an excessive number of
data backups so that, if data needs to be reconstructed for this testing, the
amount of work to do so is minimized.
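The checkpointing practice described above (snapshot test data at the start and at mid-points of a batch test so a restart needs minimal re-conditioning) can be sketched as a named-snapshot store. This is an in-memory stand-in for real database backups; the names are illustrative.

```python
import copy

# Sketch of mid-test checkpointing: named snapshots of test data taken
# at the start and at mid-points of a batch run, so a failed test can
# restart from the nearest checkpoint instead of reconditioning from scratch.
checkpoints: dict[str, dict] = {}

def snapshot(name: str, data: dict) -> None:
    """Save a deep copy of the current test data under a checkpoint name."""
    checkpoints[name] = copy.deepcopy(data)

def restore(name: str) -> dict:
    """Return a fresh copy of the data as it was at the named checkpoint."""
    return copy.deepcopy(checkpoints[name])
```

The deep copies matter: later test steps can mutate the working data freely without corrupting the saved checkpoints.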
Application Performance & Stress Testing
• Fujitsu claimed to have experts in stress testing, but working with them proved
otherwise. They hired Microsoft (as specified in the Contract) to set up and
perform the stress testing, but the work was so tightly constrained by cost and
time, the initial attempts were inadequate. It was not until DOL took over
leadership of this task and hired Microsoft (rather than have Fujitsu continue to
constrain the work) that reasonable results were obtained.
• Stress Testing will require test scenarios / scripts and test data from the Test
Team, but must be planned and executed by the Technical Lead as the Stress
Testing requires technical knowledge of the server environments to set up, as well
as technical knowledge to understand the technical metrics reported by the testing tools.
• When preparing to perform a Stress Test, make sure to reboot the servers to clear
any problems (such as memory) that may remain from a prior test, and thus affect
the new test results.
• DOL had a hard time getting folks from the field to participate in the User
Acceptance process. This was made more difficult each time the dates for this
work were changed.
• User Acceptance made effective use of the HLB PC Training Room to provide
reviewers hands-on experience with the on-line screens.
• Batch processes were not included in the User Acceptance, but copies of all
reports were provided in book form along with a description of the data and
situation that led to the printing of each report shown in the book.
• User acceptance reviews by DOL staff and field staff were held on different days,
knowing in advance that the questions would likely be different.
User Shakedown Testing
• Every new application and significant change to an application should go through
a User Shakedown such that the associated deployment requires little more than a
replacement of test data with production data in the databases.
• User Shakedown (a) allows the users to verify usability of the change about to
be deployed, (b) provides a chance to break the application by intentionally doing
things incorrectly to ensure the system traps and reports mistakes, and (c) verifies
the application continues to perform well under load. THESE TESTS ARE
CRITICAL TO HAVING POST-DEPLOYMENT STABILITY.
• The minimal field-user involvement in the VFS Replatforming User Shakedown was
directly one cause of the extensive and protracted post-deployment stabilization.
Authorizing deployment to Production without a thorough User Shakedown creates a
HIGH risk to the project.
• All user organizations should have been required to do extensive User Shakedown
testing, both to ensure performance and quality of the code to be deployed and to
get first-hand experience using the application prior to fielding requests for help.
• For the VFS application, allow for deployment to the Education Mode site well before deployment, so users have time for hands-on experience that (a) lets them verify their understanding of how the new features work, in preparation for effective use when the changes go live; and (b) provides field testing (user shakedown) of the application changes by users prior to go-live.
• For the IU+PSP deployment, the Education mode could not be activated prior to deployment because the Test Team was still testing the application, and only one environment could be set up with links between VFS, VHS, external (test) interfaces and the batch system.
• The above comment means all testing of the application must be scheduled and
required to be complete well before User Acceptance and User Shakedown are
planned to start.
• Fixes to bugs need to be tested during a different part of the day than users are
doing User Shakedown activities because both activities cannot co-exist. (Again,
only one environment can be active at a time with the requisite links.)
• Very few users participated in even minimal use of the application during the shakedown period. DOL had to force users even to attempt to log on so we could verify all users had their access set properly. Had users done this testing, we would have known the replatformed VFS wasn't really ready for deployment.
• Active user participation in a real shakedown should be required for all
replatforming transitions, and is recommended for any other deployment of a
significant application or application change. There is no substitute for actual use
of the application as a readiness test prior to the official deployment.
Reliable Estimates: Does the project have a history of providing realistic estimates? Are
the current estimates reasonable and reliable?
• It was very helpful to make each contracted project deliverable a milestone on the project schedule. Flexibility can be achieved by listing the milestone at the same indent level as the summary task for the work required for the milestone.
• Insisting that all project participants report weekly the hours they spent on each assigned task during the previous week, along with a new estimate of hours-to-complete, made for fairly accurate status tracking. We did not allow reports of Percent Complete, but calculated that value from the participant's input. Reports of percent complete from staff typically prove to be incorrect.
• Recruiting and hiring paperwork and processes take weeks, and typically a month. Project Managers need to take this long duration into account, and start building project teams and obtaining contracts well before the resources are needed.
(This is very different from private industry where high-quality staff or contractors
can be on-site and working within one week.)
• The high stress and human cost of forcing IU+PSP into a three-month window should not be repeated. This required 310 hours of overtime for the testers, immediately following 373 hours of overtime the previous quarter needed to test stabilization of the replatformed VFS.
• Rework and re-testing added to the project by Legislative-mandated changes was very disruptive. Legislative changes should have been blocked during a project of this magnitude.
• We under-estimated the time to establish and test the Batch functionality during both the VFS Replatforming and Immediate Update phases. In both cases it took four times longer than pessimistically estimated. If there is even moderate complexity in the batch jobs and their calendaring (scheduling), a one-month estimate is likely too low.
• DOL IS staff making application changes that required them to upgrade their knowledge and skills to contemporary technology did not appear motivated to work aggressively to achieve the required knowledge and skills before needing to actually make code changes. While this appears a somewhat natural reaction, it means considerable time must be planned for the Project Manager and staff supervisors to provide continuous tracking of progress and mentoring. It might have been better to give these developers a new-technology assignment before the project to get them over the learning curve and give them confidence in their use of the new tools and skills.
• Time needs to be built into work schedules for “thinking,” “evaluating,” and
“investigating” and for upgrading technical skills.
• Time needs to be available within each project for the Project Manager to develop
project-specific procedures and standards, as well as to write white-papers to clarify
both technical and project issues.
• Tasks in the project schedule should be grouped in association with each deliverable
being produced. Tasks should not be grouped by who is performing the tasks because
this makes it difficult to see all the work required to complete a deliverable.
• Make sure the project schedule includes “buffers” for unknown negative impacts on the workflow. There should also be an overall project buffer placed at the end of the schedule.
• The Project Manager should identify, as part of the development of the Project Charter, whether installing and testing the application (or application change) at DOL's Enterprise Disaster Recovery Center is within the required scope.
• Resource needs and identification of when during the project those resources will be
needed should be an OUTPUT of the Project Schedule development.
• To keep testing on schedule, the Test Lead needs to have quick access to the Project's Technical Lead so that needed environment and database changes will be made promptly.
• Batch testing cannot begin until the online programs and data sources that feed data to batch processing are fully operational.
• Defining the required sequence of batch jobs, and how to restart batch processes at the point they fail, takes more time to work out than expected. This is especially true when moving from one scheduling package to another.
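The restart-at-point-of-failure idea above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual TIDAL configuration: jobs run in a required sequence, each success is recorded in a checkpoint file, and a rerun skips everything already completed so processing resumes at the failed job.

```python
# Hypothetical sketch: run batch jobs in sequence, checkpoint each success,
# and resume after the last completed job on a rerun.
import json
import os

CHECKPOINT = "batch_checkpoint.json"  # assumed location, illustrative only

def run_sequence(jobs):
    """jobs: ordered list of (name, callable). Resumes after last success."""
    done = []
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            done = json.load(f)["completed"]
    for name, job in jobs:
        if name in done:
            continue  # already completed in a prior run; skip on restart
        job()  # raises on failure, leaving the checkpoint at the failed job
        done.append(name)
        with open(CHECKPOINT, "w") as f:
            json.dump({"completed": done}, f)
    return done
```

A real scheduler adds calendaring, dependencies, and notifications on top of this, which is why the bullet's warning about under-estimating batch setup time holds even for simple job chains.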
• Project Managers need to work with the IS Managers to learn the true Windows .net
skill and knowledge of the assigned staff. If staff are in transition from supporting a
mainframe application to supporting a server-based application, task estimates should
be double checked by others and the estimates adjusted based on technical knowledge
and hands-on experience of the staff in transition.
• Estimates for Batch setup and testing need to be carefully established by developers
and LAN Support TIDAL experts. These estimates need to include buffers that
represent the level of batch setup and testing experience of the developers who lead
and do that testing. Also factor in how long the batch jobs take to run because testing
requires many increments and must typically be run outside of normal work hours, so
you may be able to do only one testing cycle per day.
Skilled Staff: Has the project properly identified the required competencies, the required
level of experience and expertise for each identified skill, the number of resources needed
within the given skill, and when these will be needed? Are the staff available and
assigned to the project? Soft skills are equally important when identifying competencies.
• Resistance to change requires persistent, continued personal contact with vendors and DOL staff by the Project Manager to maintain relationships and provide positive reinforcement. This takes considerable time, needs to be planned for in the project schedule, and must be done on a daily basis.
• Close tracking, status reporting, and feedback of progress achieved needs to be
reported to all project staff as well as DOL management. Reporting task status by
person is a good thing as long as the planning estimates were provided by those doing
the work (so were as accurate as possible) and individuals are not perpetually
reported as either early or late.
• The IS Division needs to significantly improve the technical maturity of its staff in the application of contemporary technologies, methodologies and tools. Managers need to continue analyzing staff skills and experience to set forward-looking training and experience goals.
• When staff are assigned to a project, the supervisor of those individuals must get the
Project Manager’s approval for every leave and training request. During this project,
supervisors approved requested leave when that staff member was on a time-critical
task. This happened because the individuals requesting leave frequently did not tell
their supervisor the impact their leave would have on the project. In other cases, the
supervisor required staff to attend training without first checking with the Project
Manager, with similar results. What made this worse, the Project Manager did not even have advance notice, which might otherwise have allowed the impact of the leave to be mitigated.
• Everyone on the project must be made aware they are personally responsible for
meeting agreed-upon deadlines. If you are dependent on someone else to complete
your task, you must (a) give them reasonable notice of when they will need to work on your task, (b) give them any information and code needed to do that work before they need to start, and (c) provide clear instructions on the work to be done, the expected deliverable(s), and the required delivery date. You, not they, are responsible for meeting your due dates. “I’m waiting on ---” is NOT an acceptable excuse.
• DOL IS staff are using analysis, design, and development techniques of twenty years ago. This is impeding their productivity in both development and troubleshooting. Foundational techniques in the .net world, such as object-oriented analysis & design, are not yet in use, and there is so much pressure to deliver systems and changes that I see no progress in growing the staff's use of modern techniques. Further, because these methods are not in use within DOL, even the contractors being hired do not use them, as they would be difficult to integrate with the existing code being modified.
• Anticipated late delivery on any task needs to be immediately brought to the Project Manager's attention.
• Opportunities to complete work early should be taken, unless other issues would
make this inefficient for the project or another project.
• When someone is to do work for you at a specific time in the project, the objective is
to give them a clear description of the task well before they need to do the work, then
provide two-week, one-week, three-day and one-day count-downs on when you will provide the input they need to start work.
• If getting an answer or action from another person is urgent, DO NOT rely on email to communicate that fact. Call them or visit them to ensure you both understand when the work will be completed and agree on that timing. Keep the Project Manager informed.
• Staff should not get frustrated when someone does not promptly reply to their emails.
Either call or visit the individual and find out what you need to know. (This will
lower your own stress.)
• It is critical that the Technical Lead, supported by the Project Manager, communicate regularly with the Infrastructure teams to ensure that environments needed by the Project are fully operational and available to the project team when needed. It is recommended that Project-NSS status checks happen at least weekly, and escalate to daily as deadlines approach. Changes to Project environments must similarly be executed when agreed. (During the HP Project, NSS's just-in-time environment deliveries and environment changes were provided late so many times, and the resulting testing delays were so frustrating to Fujitsu, that it essentially nullified DOL's contractual leverage to force Fujitsu to do work on schedule.)
Contract Negotiation and Management: Is the project using resources experienced in
contract negotiations? Does the project organization include a resource whose sole
function is contract management?
Request for Proposal (RFP)
• RFP preparation was rushed to completion at a time when the DOL managers who needed to participate were overwhelmed with other critical projects. As a consequence, sufficient thoughtful review and consideration were not given to the RFP's content and organization. Wording problems and missing information in the RFP (which became part of the contract) were the main and ongoing source of problems with the selected vendor.
• The RFP was written by a contractor who had helped DOL prepare excellent RFPs in the past, but she was left on her own too much when collecting and organizing the information. Mistakes in earlier RFPs were repeated rather than avoided. The write-up of the required deliverables was out of touch with the needs of this project.
• The timeline and time estimates stated in the RFP were unrealistic. When the DOL and vendor Project Managers prepared the initial schedule, the work extended three months longer even under optimal assumptions. On top of that, the
RFP required the vendor to convert and integrate Legislative-mandated changes
imposed during the project. It was not made clear in the RFP that this would
likely require a significant amount of rework and retesting by the vendor. In fact,
this added two months of re-conversion and four months of re-testing. The
incorrect estimates and the imposed rework had a major, negative impact on the
DOL-Vendor relationship, making the job for the Project Manager most difficult.
• The incorrect timeline in the RFP and the amount of rework required by the Legislative changes added Change Request costs to the project amounting to $857,035 (37 percent over the initial $2,290,000 contract).
• The RFP included many “mandatory” items for which the vendor had to mark
they would comply or not comply. Most of these items were defined by lists of 4
to 20+ conditions that all had to be met. Vendors complained about this “strong
arm” approach which forced them to agree to comply with details that they did
not agree with, or risk being eliminated. To be fair and get more honest
proposals, these multi-part items should have been “scored” rather than being
forced into “comply” or “not comply” status.
• Although the impact was too small to affect the scoring totals, some items should have been moved between the Management and Technical categories of the RFP. The rushed publication did not allow these mistakes to be fixed.
• Work must be done BEFORE the RFP is created to establish quantitative metrics that define the baseline performance for normal and stressed operations, as well as to define, in quantitative terms, the performance-under-load targets that must be achieved.
• A good requirement was the vendor’s Project Manager and all key members of
the vendor’s team be on-site throughout the contract. This is a significant help in
ensuring good vendor-DOL technical communication. It also facilitates
monitoring and tracking of vendor effort being expended on the project.
• The RFP and Contract did not adequately protect DOL and the Project Manager from poor-quality work products by Fujitsu. This added extra work and responsibility onto the DOL Project Manager and created a litany of conflicts with the vendor, who focused on keeping costs within the fixed bid of the contract.
o The RFP must specify the required deliverable quality, the metrics by
which it will be judged, the form of the delivery (code as source code, not
executables), documents (Word document, Excel Spreadsheet, Visio
Diagram, etc.) and how the delivery is to be made.
o No deliverable listed in the contract is considered “delivered” until DOL has held a formal review, at which the appropriate vendor technical experts must be present. The DOL staff identified in the RFP for that deliverable review must formally sign off acceptance of the deliverable. The vendor, not DOL, is responsible for achieving that sign-off promptly, but must allow DOL a minimum of five business days per deliverable review.
o The RFP must specify the metrics by which the quality of the code
produced will be judged. For example: post-delivery defect counts, time-
to-stabilize, readability and maintainability of code, compliance with MS
best practices, performance under load as observed on user workstations, etc.
o RFP designers should consider placing penalties on the vendor if deliverables are not approved by DOL within a set number (3?) of approval review cycles. The penalty should be deducted immediately from the holdback amount available to the vendor at the end of the project.
• The RFP must remind the vendor that DOL owns all work done under the contract, so
the vendor must supply any finished and in-progress work products upon request
by DOL. No exceptions!
• The RFP should ask the vendor to provide written answers to the questions: “What are the top five product quality considerations for this project? What is the vendor going to do to deliver the highest quality work products? What metrics will the vendor use to track and report product quality?” These answers could then become a topic for contract negotiations if DOL considered them inadequate.
• The RFP should specify where in the project (project phase or association to
another deliverable) each deliverable will be provided to DOL. This will keep
delivery of deliverables in the sequence intended by DOL. The sequence of deliverables is critical when those deliverables are quality gates for work to follow.
• Put in the RFP that DOL expects the vendor to share, at least weekly, through
formal status reports not only the accomplishments of the vendor, but the
problems (technical and otherwise) being experienced by the vendor and the risk
these problems pose to the project. DOL will collaborate with the vendor where possible to suggest or help solve these problems, as a partnership toward achieving project success.
• The vendor MUST use the tracking tool used by DOL for bugs, action items,
decisions, risks, etc. as their primary tracking and reporting instrument. If they
choose to use a different tool internally, all information in that alternative tool
must be recorded by the vendor into DOL's tool on the same day and with the same completeness of information as entered into the vendor's tracking tool. (No exceptions!)
• Payments need to be based on the value to DOL of the deliverable, not the amount
of effort required by the vendor to produce the deliverable.
Description of Work
• In replatforming and code conversion projects, the RFP must emphasize the
vendor’s responsibility is to replicate the behavior of the application on its prior
computing platform and language, not just the logic of the code.
• Specify that the replatformed code is to perform at least as quickly as it did on the
prior platform, be easy to maintain, and follow DOL’s design standards for the
target language. Performance must be defined in quantitative metrics such as
screen-to-screen transition time (possibly for every screen transition), allowed
delay between user action and start of printing, total batch job execution time, etc.
Throughput should be defined by a measurable metric such as total completed
business transactions per minute. The load at which these targets must be achieved must also be defined by measurable metrics such as transaction counts submitted per minute for each pertinent type, active concurrent users, etc.
• The nature and quality of the training and training materials to be prepared by the vendor need to be specified in as much detail as possible. The RFP should also ask the potential vendor to suggest improvements on the requested training approach and specified training deliverables, and to state the cost impact of their proposed improvements.
• The RFP should require the vendor to work with DOL to ensure that the application will work within DOL's environment and that the computer environments are configured according to DOL standards, so the application will perform as needed.
• The vendor is responsible for achieving application stability (no aborts, timeouts,
or database deadlocks) and performance (screen-to-screen transition times, delays
before print starts, times to complete data transfers) under loads that match or
exceed quantitative values defined within the RFP. That means planning the
stress test and measuring the existing performance must be completed BEFORE
the RFP is prepared.
• The RFP must state the vendor is responsible for ensuring their code follows
Microsoft best-practices so that the application achieves performance targets. It
must also state the application must manage memory, temporary files, and
database space so performance and application stability is maintained over the
long-term life of the application.
• The RFP must indicate the applications will live in an environment where Load
Balancers will manage the application’s execution over many servers and
database activities execute on clustered configurations. The vendor is responsible
for ensuring efficient and effective management of such resources by their application.
• DOL-required architectural design constraints must be collected from the DOL IS
Architecture team and either included in or attached to the RFP.
• The RFP should be much more specific in defining the vendor’s responsibilities in
setting up the batch processing environment and job calendaring (scheduling).
This and the batch testing defaulted to DOL staff when this responsibility was not
made clear within the RFP.
• The vendor was responsible for leading the testing activities and collaborating with the DOL Test Lead and DOL Tester to do that work. Halfway through the project, the vendor's test lead left the project. The project was too far along and in too intense a stage for the vendor to provide a new test lead, so the vendor Project Manager took on the responsibility. In practice he was only a testing status clerk, and real test leadership responsibility transferred to the DOL Test Lead. In hindsight, the vendor Project Manager should not have been allowed to act as Test Lead, and the vendor should have been forced by DOL to provide an even higher-qualified test lead to learn and take over Testing Leadership. While the DOL Test Lead did an exceptional job, she assumed a very stressful responsibility and inherited test-planning mistakes made by the vendor. This also left batch testing in the lurch until DOL staff took it over. (It may be that the vendor realized they didn't have the knowledge to plan and lead the batch testing.)
Vendor Participation in Quality Assurance
• Quality assurance activities (e.g. design reviews, code reviews) the vendor will be
required to participate in, and take action from, must be spelled out. Simply
stating the vendor will comply with DOL SDLC and QA activities is NOT
specific enough to be measurable or enforceable.
• The RFP needs to state the vendor must aggressively and pro-actively seek-out
defects and fix them. This includes looking for additional locations within the
code that the same or similar bug may exist and fixing all occurrences of the
defect, not just the one reported. Fujitsu was not always proactive in seeking out
and fixing potential problems in their code. They only fixed what DOL proved to
them was defective. This is not acceptable vendor behavior.
• Define how defects will be counted, including how “Can't reproduce” and “Training issue” dispositions are counted, or not. Make sure the vendor is motivated not to ignore issues or simply declare them “Can't Reproduce.”
• Fujitsu was not open in discussing their code conversion process or the resulting
code. The Project Manager and External QA contractor should have insisted the
vendor participate in all design and code review processes with DOL staff from
the start of the project.
• The RFP needs to state when the vendor fixes reported defects, they must
document, in a fair amount of detail, everything they changed to effect the fix.
(Fujitsu refused to do this, not wanting to report the extent of their mistakes.)
However, this information is critical to planning the re-testing of the corrections, and may show that re-testing of more processes than just the one fixed is required. Having such information will also be helpful in future troubleshooting
by DOL when we are doing application maintenance.
• The deliverables in the RFP were copied from the prior UAR RFP rather than from what evolved during UAR to be the needed deliverables. The list further stated the vendor was to provide a PowerPoint presentation with each deliverable, which is not a value-adding or required action. The needed deliverables include formal DOL sign-off on (a) the vendor's analysis and confirmation of the work to be done, coupled with an MS Project schedule; (b) the proposed architectural design as it will integrate into DOL's infrastructure; (c) the specific code conversion, error reporting and event tracking that will take place, with emphasis on DOL buy-off on the plan details before any work is done; (d) identification of all development environments, the QA environment, and the interface architectures and designs that will be required, plus the promotion workflow and workflow timing; (e) DOL acceptance of code reviews, including online, batch, services, and batch calendaring; (f) the testing plan, scripts, and conditioned data; (g) technical documentation to assist DOL's maintenance of the application; (h) user training materials and delivery; (i) data conversion and management; (j) weekly status reporting and project schedule updates that are integrated with input from DOL and other vendors; (k) detailed installation instructions and scripts; and (l) post-deployment support. Details of these deliverables need to be put into the RFP to maintain quality control through the replatforming process, to comply with DOL standards, and to ensure acceptable quality of the resulting work product.
• The RFP must make it clear that sign-offs for Architecture Reviews, Design Reviews, and Code Reviews are contract deliverables. The vendor assumes all risk if they proceed with work that has not been signed off at the previous step.
• If the vendor chooses to change an approved architecture or design, that change
must be reviewed and approved by DOL.
• When a vendor is required to convert code and/or data via an automated tool, specify that they will be required to provide auditable proof the conversion is accurate and complete. The penalty for not doing so will be forfeiture of xx% of the holdback.
• If DOL does not have staff available with sufficient knowledge, experience, and
skills to evaluate architecture, design, and code quality, or can’t do so within five
days of each associated delivery by the vendor, specify in the RFP this may be
contracted to Microsoft or other experts. Then do so.
• Without quantitative metrics, stress and performance goals have no “teeth.” The
RFP should specify the following Performance & Stress Test results:
o The quantitative, existing baseline performance under normal and stress conditions
o The level of stress to be applied in quantitative terms
o The required application performance under load that must be delivered
o The method by which performance must be measured
o There must be no database locks or timeouts
o There must be no application timeouts or aborts
o In general, users must experience screen-to-screen transitions that are sub-second; however, some screens are known to have heavy processing loads, and transition times for these screens must match or exceed those of the prior platform
o Additionally, no application performance degradation may negatively
impact the user’s workflow processing speed
o Queries to the databases must specify “NOLOCK” to prevent transactions from being blocked
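Targets like those above only have “teeth” if each test run is evaluated mechanically against them. A hypothetical sketch of that idea (the metric names and limits are illustrative, not DOL's actual criteria):

```python
# Sketch: encode the stress-test pass/fail criteria as data, so a run either
# meets every target or fails with the offending metrics named.
def evaluate_stress_run(results, targets):
    """results/targets: dicts of metric name -> measured / maximum allowed."""
    failures = [name for name, limit in targets.items()
                if results.get(name, float("inf")) > limit]
    return (len(failures) == 0, failures)

# Illustrative targets: sub-second p95 transitions, zero deadlocks, zero aborts.
targets = {"p95_transition_s": 1.0, "db_deadlocks": 0, "aborts": 0}
run = {"p95_transition_s": 0.8, "db_deadlocks": 0, "aborts": 2}
print(evaluate_stress_run(run, targets))  # (False, ['aborts'])
```

Treating a missing measurement as a failure (the `float("inf")` default) keeps the vendor from passing a run by simply not reporting a metric.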
• A report on Stress and Performance Testing, containing both the performance graphs produced by the Stress Test tool AND a narrative interpretation of the results, must be a deliverable. The RFP must specify the minimum metrics to be reported, the stress levels to be applied, and the performance levels that must be achieved.
• Stress Tests should NOT be planned or run by the vendor who did the coding and
code replatforming, and must be done under DOL’s control.
• The RFP should state the level of testing the vendor must perform and the quality of testing results (no bugs?) the vendor must achieve before they hand off code to the test team. Fujitsu's criterion was that the screen should appear more-or-less correctly and the program must compile without errors. This led to weeks more than planned of bug fixes and re-testing due to screens and functionality aborting or not working correctly.
• The interview, followed by a required live (not PowerPoint) demo, was again an
excellent evaluation approach. It showed clear differences between vendors that were
not apparent from the vendor proposals.
• The code conversion the vendors are required to demo as part of the selection
interview must be representative of the types of code to be converted and the level of
difficulty of the project. The Heating Oil (HOAP) replatforming that served as the
pilot stage for the HP 3000 Replatforming Project was too simple to judge the
vendor’s process or skills at doing the VFS replatforming. For example, the pilot did
not include any electronic interfaces with which the Fujitsu code must interact, while
there were over 100 interfaces in the VFS project conversion.
• It was not a good idea to change the RFP after it had been published. The change specified that a different vendor do the interface replacement part of the replatforming. Although the interface vendor did an excellent job, it required the primary vendor and the interface vendor to work in close collaboration, and this added risk to the project. This change to the RFP was made so late that several vendors who were preparing proposals had already worked out their bid for that work and had included it with their proposals.
Contract Terms & Conditions
• During the contract negotiations, DOL required Fujitsu to have Microsoft do quality reviews of, and approve, all code conversion approaches to be implemented by Fujitsu. (DOL realized that it did not have staff with the bandwidth and technical expertise at the time to do such reviews, but DOL needed to ensure Fujitsu replatformed the DOL applications such that effective and efficient code was delivered.) Fujitsu reported that the Microsoft reviews were done, but refused to supply documentation to that effect. (They said that providing such information was not an appropriate requirement by DOL for a fixed-bid contract.) Subsequent reviews of the code by Microsoft during Stress Testing, done at DOL's expense, found many serious design flaws that Microsoft said had been reported to Fujitsu early in the project. This fact brings into question whether Fujitsu actually took action on the Microsoft findings. Fujitsu wound up having to fix most of the Microsoft-reported problems to achieve code stability and adequate performance under load.
• If DOL wants a vendor to review and guide the quality of work performed by another
vendor, DOL needs to contract directly with the vendor doing the reviews, not have
the developer sub-contract that quality check.
• Include in the contract a reasonable number of no-charge Change Request hours for small-to-medium effort changes; this continues to be a valuable and effective approach to minimizing change-request overhead for small-cost issues. This project had a 500-hour allotment of such hours.
• Partial payment for systems must be contingent on achieving at least ten consecutive
post-deployment days without a new bug being identified. The amount of the
associated payment should be significant so the vendor is driven to complete
stabilization quickly. With this constraint, Fujitsu worked very hard to achieve
stability during the Unisys Replatforming Project. Without this constraint, Fujitsu
notably did little to speed achievement of stability. In fact, they did not apply the
needed resources and blamed DOL continuously for not providing sufficient
information to find reported bugs. Without the constraint there was no proactive
effort by the vendor to fix defects.
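The payment trigger described above is easy to state precisely. A hypothetical sketch of the check (day counts and the ten-day threshold come from the bullet; the function name is illustrative):

```python
# Sketch: the stabilization milestone is met only after ten consecutive
# post-deployment days with zero newly reported bugs.
def stabilization_met(daily_new_bugs, required_days=10):
    """daily_new_bugs: new-bug counts per day, oldest first."""
    streak = 0
    for count in daily_new_bugs:
        streak = streak + 1 if count == 0 else 0  # any new bug resets the clock
        if streak >= required_days:
            return True
    return False

print(stabilization_met([2, 1] + [0] * 10))        # True
print(stabilization_met([0] * 9 + [1] + [0] * 5))  # False
```

Because a single new bug resets the streak, the vendor's fastest path to payment is proactive defect hunting rather than waiting for DOL to prove each bug, which is exactly the behavior the contract clause is meant to drive.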
• Start the “warranty period” only after the ten consecutive post-deployment bug-free
days. During the warranty period the vendor is required to provide quick-response
correction (1-hour response, 8-hour correction) of any additional bugs that are found.
This requires that the vendor’s technical experts from the project remain
available and have the top-priority assignment of responding to DOL defect reports.
They may or may not remain on-site at DOL. The warranty period should have a
duration of 60 to 90 calendar days. A final vendor payment of the remaining 20
percent holdback is made at the end of the warranty period.
• If deployment does not take place so that the warranty period will be completed by
end-of-biennium, the associated holdback payment should be forfeited.
• If the warranty period is not completed by the end-of-biennium, DOL will determine
the portion of the associated holdback payment to be made to the vendor.
• The contract boilerplate states that because the work is being done under a public
contract, all documents and information produced under the contract are public
property, and the vendor may not withhold any such information. However, because
of the fixed-price nature of the contract, information related to the vendor’s internal
costs and payments to sub-contractors may be exempt from disclosure under other
laws. There needs to be a clause in the contract that says the vendor will promptly
(this may need definition) deliver to DOL, uncensored and unmodified, any and all
technical information produced by or supporting the vendor’s execution of the
contracted work. I recommend this because of Fujitsu’s refusal to disclose the
amount of code converted by automated tools versus manually, and their refusal to
release Microsoft’s comments and suggestions on the Fujitsu code-conversion design
and resulting VB.net code.
• Put criteria into the RFP for making payments upon completion of code-testing steps,
not upon delivery of code.
• The holdback amount should be large enough to give the vendor a real incentive to
complete work promptly. The 20 percent holdback used in the UAR and HP3000
projects seemed to provide minimal leverage.
Implementation: Has the project developed a reasonable plan for implementation? Are
the duration and amount of user training adequate?
• Do more PowerPoint-with-voice-over style training. Field office users were very
appreciative of the user training provided by this project. It put the ability to
replay the training under the control of each individual and provided a hands-
on environment (EDUCation Mode) in which to practice before the application went live.
• Brad Benfield has the equipment and the “professional voice” to record the voice-overs.
• Make booklets of detailed feature instructions for each change / enhancement in
addition to PowerPoint based training.
• Delivery of BOTH the PowerPoint training and the detailed booklet was
considered by the field to be a much more beneficial and longer-lasting way to deliver
training than the previous, time-consuming on-site training by DOL staff.
• It might be helpful to DOL to provide a knowledge / skill test with this training to
give users feedback on how much of the critical information and skills they
actually mastered from this training.
• Involvement of the Test Team (with their experience using the modified
application) both in preparing user training materials and delivering training (e.g.
implications and interpretation of Exceptions when IU went live) was very
effective for the User Support teams.
• Make sure to actively explore with users the impacts on them from changes made
to the application. Talk through each screen that they use. Talking in terms of
database changes is too abstract for users to see potential issues.
• IS staff did not rate the technical training by Fujitsu as complete or very
effective. DOL staff reported that the technical trainer did not clearly
understand the application and was not able to answer DOL developers’
questions about how business functions were implemented. He also deferred
questions and then never provided answers. The follow-up session with Fujitsu’s
lead developer was more helpful.
• Part of the problem with the technical training came from the fact that key
documentation by Fujitsu wasn’t provided until well after the technical training occurred.
• Part of the DOL staff’s problems with the technical training was due to the fact
that DOL staff hadn’t had enough hands-on experience working in the Windows
.net environment. They had received training in .net tools, but not sufficient
hands-on experience. Their work had remained primarily focused on the HP
environment. They did not at the time fully understand how to manage code or do
troubleshooting in the Windows environment.
• Although all DOL staff who are now supporting the replatformed VFS application
learned the needed skills by the end of the project, the technical training lost value
due to DOL staff’s lack of meaningful .net maintenance and troubleshooting experience.
• Documentation by DOL is now needed on how to change the VFS EDUCation
mode to allow users hands-on experience with the to-be version of VFS prior to
deployment into Production. (See also “User Shakedown” in this document.)
Following deployment the EDUCation mode will match Production to allow
training /retraining of field staff.
• A Work Group with a Lead other than the Project Manager should be established three
months before deployment to do the detailed deployment planning and to ensure all
details are carefully reviewed. The goal should be that no change to the Deployment
Checklist or the associated detailed installation instructions is made on deployment
weekend. It takes too much effort and time to plan the environment details, establish
a checklist, develop detailed instructions and scripts, and conduct at least four rounds of
detailed reviews and revisions for this work to be led by the Project Manager.
• As each deployment detail is documented, the associated back-out instructions and
scripts must be prepared. It is difficult and time-consuming to prepare back-out
instructions after the fact.
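The discipline of pairing every deployment step with its back-out can be built into the deployment tooling itself. A minimal sketch of the pattern (step names, actions, and the simulated failure are all hypothetical, not drawn from the project):

```python
# Sketch: each deployment step carries its own back-out action.
# On any failure, the steps already applied are undone in reverse order.

def run_deployment(steps):
    """steps: list of (name, apply_fn, backout_fn) tuples."""
    completed = []
    for name, apply_fn, backout_fn in steps:
        try:
            apply_fn()
        except Exception:
            # Roll back everything already applied, most recent first.
            for _done_name, undo in reversed(completed):
                undo()
            return "rolled back", name
        completed.append((name, backout_fn))
    return "deployed", None

# Hypothetical usage: the second step fails, so the first is undone.
log = []

def bad_config_update():
    raise RuntimeError("bad key")

steps = [
    ("copy binaries", lambda: log.append("copied"),
     lambda: log.append("restored old binaries")),
    ("update config", bad_config_update,
     lambda: log.append("restored old config")),
]
status, failed_at = run_deployment(steps)
```

Because the back-out is attached to the step when the step is written, it cannot be forgotten and it executes in the correct (reverse) order.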
• Involving the Test Team (with their hands-on experience with the modified
application) in preparation of business user support staff was most effective.
• The activity of creating the Deployment Checklist and then using it to track activities
is still a best practice that should be scaled for and used with every project.
• Addition of the matrix that maps the relationship between checklist items,
deployment instruction documents, and deployment scripts was a very helpful
addition during the IU+PSP deployment for tracking deployment readiness. This
matrix also helped LAN Support review and execute the deployment instructions.
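The checklist-to-document-to-script matrix described above also lends itself to a mechanical readiness check. A minimal sketch, with made-up item and file names, that flags checklist items missing an instruction document or a script:

```python
# Sketch: verify every checklist item maps to both an instruction
# document and a deployment script (names below are illustrative).
matrix = {
    "install web tier":    {"doc": "web_install.doc", "script": "web_install.cmd"},
    "load reference data": {"doc": "ref_data.doc",    "script": None},
    "update firewall":     {"doc": None,              "script": "fw_rules.cmd"},
}

def readiness_gaps(matrix):
    """Return checklist items that are missing a doc or a script."""
    gaps = {}
    for item, refs in matrix.items():
        missing = [k for k in ("doc", "script") if not refs.get(k)]
        if missing:
            gaps[item] = missing
    return gaps

gaps = readiness_gaps(matrix)
```

Running such a check before each review meeting would turn "is everything cross-referenced?" into a yes/no answer rather than a manual inspection.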
• Supplying detailed deployment instructions to LAN Support for review in June when
the deployment was not until September did not result in an earlier or more thorough
review. The Project Manager must facilitate those reviews to make them happen.
• Developers, not LAN Support, must make sure all the required changes in web.config
and machine.config are documented for the deployment.
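One way for developers to produce that documented change list is to generate it by comparing the current and target settings rather than writing it by hand. A sketch, with illustrative (not actual) setting names and values:

```python
# Sketch: diff current vs. target config settings to produce the
# change list handed to the deployment team.
def config_changes(current, target):
    added   = {k: target[k] for k in target if k not in current}
    removed = sorted(k for k in current if k not in target)
    changed = {k: (current[k], target[k])
               for k in current if k in target and current[k] != target[k]}
    return added, removed, changed

# Hypothetical settings for illustration only.
current = {"maxRequestLength": "4096", "debug": "true"}
target  = {"maxRequestLength": "8192", "sessionTimeout": "20"}
added, removed, changed = config_changes(current, target)
```

A generated list is complete by construction, which avoids the "one undocumented setting" class of deployment-day surprise.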
• One of the checklist reviews should focus on the task dependencies. Make sure (1)
all dependencies are identified, and (2) if there are opportunities to do any task
earlier, that is also identified. One finding is that we would have been able to start
some of the Operational Readiness testing sooner, and thus finish sooner.
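The dependency review can be supported by computing a topological ordering of the checklist tasks; everything that becomes ready in the same round could, in principle, start in parallel or earlier. A sketch using Python's standard graphlib, with hypothetical task names:

```python
from graphlib import TopologicalSorter

# Sketch: each task mapped to its prerequisite tasks (all names invented).
deps = {
    "smoke test":            {"deploy app", "load data"},
    "deploy app":            {"install servers"},
    "load data":             {"install servers"},
    "operational readiness": {"deploy app"},
    "install servers":       set(),
}

ts = TopologicalSorter(deps)
ts.prepare()
rounds = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # tasks whose prerequisites are all done
    rounds.append(ready)
    ts.done(*ready)
```

Here the ordering shows "operational readiness" unblocks as soon as the app is deployed, without waiting for data load, which is exactly the kind of start-earlier opportunity the review is meant to find.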
• The Deployment Checklist and all detailed instructions and scripts must be finalized
two weeks before deployment.
• DOL Network Support Services (NSS) requested that the Deployment Checklist have hot
links to the referenced detailed instruction documents and installation / roll-back
scripts. This allows them to more quickly get to the needed information and execute
each step.
• NSS wants to get the detailed deployment instructions early so they have time to
review them, and in some cases re-organize them into grouped actions of one type (e.g.
modifying a web.config file) so they can see if there are conflicting instructions and can
do tasks more efficiently on deployment day. The Deployment Workgroup Lead
(supported by the Project Manager) must insist all deployment materials are ready
early enough to have three or more reviews with NSS.
• When the replatformed VFS application was deployed, NSS staff took hours longer
than planned to do their tasks because they decided on that day they needed to set
things up differently than had been done over prior weeks. This caused a high risk to
the deployment’s success. This will not be repeated.
• The lead developers associated with the application(s) being deployed should be
physically present at HLB in the NSS area to answer any questions that come up by
the NSS team during deployment.
• DOL will pursue automating all of the application deployment activities. This
includes both the installation and any back-out steps. Automation of application
installations has been common practice for many years and should also be used
within DOL to avoid the risk of human mistakes. It would allow deployments to go
faster, without the mistakes that happen when staff are under stress and tired.
Deployment automation should be built to be re-usable and adaptable for each change
to the application, with a simple mechanism for selecting which components
can remain as-is and which must be replaced.
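The component-selection mechanism could be as simple as comparing deployed component versions against the release manifest and replacing only what changed. A sketch with invented component names and version numbers:

```python
# Sketch: decide which components to replace by comparing the deployed
# versions against the release manifest; unchanged components stay as-is.
def plan_release(deployed, manifest):
    replace = sorted(c for c, v in manifest.items()
                     if deployed.get(c) != v)
    keep = sorted(c for c in deployed
                  if manifest.get(c) == deployed[c])
    return replace, keep

# Hypothetical components and versions for illustration only.
deployed = {"vfs-web": "2.1", "vfs-batch": "1.4", "reports": "3.0"}
manifest = {"vfs-web": "2.2", "vfs-batch": "1.4", "reports": "3.0",
            "iu-service": "1.0"}
replace, keep = plan_release(deployed, manifest)
```

The same plan output can drive both the install scripts (replace list) and the back-out scripts (restore the prior versions of exactly that list).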
• The Test Lead’s verification that TREC, Liaison, and USS all understood the changes
being deployed with IU+PSP immediately before that deployment resulted in those
groups being able to correctly and effectively support users and their questions.
Training provided by the Test Lead also covered how to process the new exception
reports created by the application and the trouble reporting process that would be in
effect the week immediately following deployment. This just-in-time training was
very effective and resulted in prompt and smooth issue handling following the
deployment.
• It is important to treat users calling in to report post-deployment problems with
courtesy and respect. We had complaints that on several instances the response was,
“We already know about that problem,” without a “Thank you for reporting that.” DOL
support staff and managers need to be reminded just before deployment to treat the
user community (a) as people needing post-deployment help, and (b) as customers
offering information and partners willing to collaborate on fixing problems.
• If there are security features within the application, make sure DOL’s
Network and Application Security staff are trained in the post-deployment support
of those features.
• Go / No-go Decisions: The Project Manager should supply daily status updates to the
Steering Committee until he/she can recommend and gain “Go” approval.
• Just before deployment, put out a quick notice to all users, user support staff, and
business managers reminding them of the date, what to expect (what’s changing), and
how to report problems and get help.
• A “dress rehearsal” for the deployment should be included in the project schedule to
ensure that checklist, instructions and scripts work. In this case an opportunity to do
a deployment dress rehearsal occurred when the EDUCation mode version of VFS
went live on a Pre-Production server to support User Shakedown.
• When planning post-deployment support activities, procedures, and communications,
start by creating a vivid vision of the worse-case situation. Plan actions and
communications to address that level of problems. This proved very helpful in
identifying a solid strategy that worked well and could easily be scaled back as it
became clear the post-deployment was going to be smoother than expected.
• Those who provide user support for the new / modified application should have had at
least two weeks of hands-on use of the applications prior to deployment.
• Relocating some of the testers into the user support work areas at HLB immediately
following deployment let the testers’ extensive experience with the application be
immediately available to that staff. This improved the confidence of the support staff
in working with users and got the correct information to the users when they called
with questions or to report problems.
• Use the intranet site to keep everyone internally and in the field informed on the
status of reported bugs. (Requires someone to maintain.)
• Test Lead holds daily, early a.m. status review with TREC, Liaison, and USS.
• Publish Daily Status Update for agency executives and the project Steering
Committee. List Activities & Summarize Bug status in short, understandable terms.
• PDSI and DOL IS staff provided great support and collaboration with the Test Team
during the IU+PSP work, both before and after deployment.
• The Test Team learned during the replatformed VFS post-deployment period how
critical it was to provide Fujitsu with detailed information about how to reproduce
reported bugs. Fujitsu didn’t understand the business situations in which the bugs
were appearing, so they didn’t have an adequate starting point for troubleshooting.
• Fujitsu did not put in the effort to effectively collaborate with the Test Team and this
led to it taking almost four months to stabilize the Replatformed VFS.
• The transactions reporting failures were frequently not where the defect was located.
When capturing failure information it is important to learn what the user was doing
immediately before the failure was reported. The actual problem was likely in that
prior activity.
• There was not enough communication between the Project and those supporting
users (TREC-Liaison-USS) immediately prior to and immediately following the VFS
deployment. This was corrected during the IU+PSP deployment and made for a very
effective user-support partnership for IU+PSP because of that change.
Additional Project Processes to Recommend
• Take the time as soon as possible to make the project schedule as comprehensive as
possible and use it to estimate when and how many resources will be needed.
Communicate that need in person to all impacted.
• Update the project schedule weekly.
• Keep attempting to get effort-based (rather than duration-based) estimates for tasks
from staff and vendors. Keep attempting to manage using Earned Value. We will
likely have to use “hours” as the “value” rather than “cost,” as fixed-bid contractors
would not want to divulge cost.
• Keep the whole project team recording decisions into the project’s tracking tool. (We
could have done much better here, but were still better than most projects.)
• Our tracking of deliverables-based contracts, hourly contracts, mixed deliverables /
hourly contracts, and all associated documentation was very good. This may become
more complex when using the iECMS computerized tracking system over
customizable Excel spreadsheets.
• Schedule monthly Steering Committee meetings AFTER the 15th of the month (AFRS
close date) so you can report the “final” values from the prior month.
• Using the weekly Core Team meetings to do the non-technical problem solving and
decision making with the key business representatives established a strong level of
business participation that also improved communications both to and from the
business areas.
• The Project Manager was effective in getting NSS out of the loop at critical times in
the testing, such as negotiating for the Test Team to be able to do their own database
backups and restores. A backup or restore typically took two hours to complete
because of the formal request and approval process. But 10 to 20 such changes were
required on some testing days. By temporarily allowing testers to directly run batch
jobs that did these repetitive backups and restores in the QA environment, no laps in
security was opened yet considerable time was saved.
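The pre-staged backup/restore jobs the testers ran amount to a self-service snapshot store: save a known-good state once, then restore it on demand without a request-and-approval cycle. A toy in-memory sketch of the pattern (the real jobs operated on QA databases; the data below is invented):

```python
import copy

# Sketch: self-service named snapshots of test state, standing in for
# the pre-approved batch backup/restore jobs testers ran in QA.
class SnapshotStore:
    def __init__(self):
        self._snaps = {}

    def save(self, name, state):
        # Deep-copy so later mutations of the live state don't leak in.
        self._snaps[name] = copy.deepcopy(state)

    def restore(self, name):
        return copy.deepcopy(self._snaps[name])

store = SnapshotStore()
baseline = {"titles": 3, "registrations": 7}
store.save("clean-baseline", baseline)
baseline["titles"] = 99          # a test run mutates the data
restored = store.restore("clean-baseline")
```

The key property is that a restore is a single self-service operation, so a tester needing 10 to 20 resets a day is not blocked by a two-hour request process.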
• Not having a Technical Lead with excellent knowledge of Windows internals and the
.net Foundation until late in the project was a considerable hurdle to good
communications and collaboration with NSS. The representative originally supplied
by NSS to fill that role did not have sufficient technical knowledge and experience.
• Creating and using a Deployment Checklist with detailed installation instructions and
scripts continued to be a best practice, especially for deployments as complex as those
associated with application replatforming.
• Hold user interface reviews with application users as soon as a few screens are
ready. Repeat two-to-three times to identify required changes as early as possible to
minimize development re-work. Get formal sign-off from those doing the review.
Be sure to include representatives from field offices if they are application users.
(This sign-off should be a project schedule milestone / deliverable and managed
within the Training Work Group.)
• SWAT Teams that included most IS managers plus business managers were quickly
formed when major issues surfaced. One team was formed immediately following
the VFS deployment. A second was formed when we observed a dramatic slowdown
of VFS that was impacting users. These relatively short-lived teams made sure the
resources and the focus were placed on fixing these problems. They were very
effective. The daily status reports distributed by the Project Manager on SWAT
Team actions and progress helped all parts of the Agency and the user community
understand what was being done, established a strong problem-solving partnership
and reduced apprehension about the problem.
• As DOL IS staff develop technical knowledge, skills, and maturity with the now-
standard .net platform there will be problems that must be quickly escalated to outside
expert help. Troubleshooting the VFS slowdown was one such case. Internally we
collected as much information as we could and then brought in Microsoft, Right!
Systems, and others. DOL staff are mastering the new technologies and making great
strides in finding solutions. There is important growing confidence in our internal
ability to use and gain the productivity boost from the new methods and tools. We
are learning from these experts, but will continue to need expert advice to quickly
solve complex technical problems as we mature technically.