Lecture 4

 Process and Method:
 An Introduction to the
Rational Unified Process
  Traditional Structured Analysis
 Described by W. W. Royce in “Managing the Development of Large
   Software Systems,” IEEE WESCON, 1970
 Decomposition in terms of Function and Data
 Modularity available only at the file level
    – cf. the C language’s static keyword (i.e., “file scope”)
 Data was not encapsulated (see the sketch after this list):
    – Global Scope
    – File Scope
    – Function Scope (automatic, local)
 Waterfall Method of Analysis and Design
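
A minimal sketch of the encapsulation contrast referenced above, written in Java for illustration (the slides themselves name only C); all names are hypothetical:

    // Structured style: the data is globally visible, so any code
    // anywhere in the program may read or corrupt it.
    class GlobalData {
        public static int balance;        // effectively "global scope"
    }

    // Encapsulated (OO) style: the data is private and reachable only
    // through methods that can enforce invariants in one place.
    class Account {
        private int balance;              // hidden from the rest of the program

        public void deposit(int amount) {
            if (amount < 0) throw new IllegalArgumentException("negative deposit");
            balance += amount;
        }

        public int getBalance() { return balance; }
    }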
            Waterfall Method
 Requirements Analysis
  – Analysis Specification
     • Design Specification
        – Coding from Design Specification
           » Unit Testing
           » System Testing
           » UAT Testing
           » Ship It (????)
 The measuring rod is formal documents (specifications):
  progress is measured by the documents produced.
  Waterfall Process Assumptions
 Requirements are known up front before design
 Requirements rarely change
 Users know what they want, and rarely need visualization
 Design can be conducted in a purely abstract space, where trial
  rarely leads to error
 The technology will all fit nicely into place when the time
  comes (the apocalypse)
 The system is not so complex. (Drawings are for wimps)
   Structured Analysis Problems
 Reuse is complicated because data is strewn throughout
  many different functions
    – Reuse is usually defined as code reuse and is
      implemented through cutting and pasting of the same
      code in multiple places (see the sketch after this list).
      What happens when the logic changes?
       • coding changes need to be made in several different
         places
       • changing the function often changes the API which
         breaks other functions dependent upon that API
       • data type changes need to be made each time they are
         used throughout the application
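
A hypothetical Java sketch of the cut-and-paste problem described above: the same discount rule is pasted into two methods, so any change to the rule must be hunted down and repeated in every copy:

    class OrderReport {
        // The 10% discount rule appears twice below; a change to the
        // rate must be made in both places, and is easy to miss in one.
        double invoiceTotal(double subtotal) {
            return subtotal - subtotal * 0.10;   // copy #1 of the rule
        }

        double quoteTotal(double subtotal) {
            return subtotal - subtotal * 0.10;   // copy #2 of the same rule
        }
    }

The refactoring sketch later in these notes shows the cure: extracting the shared rule into a single method.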
    Waterfall Process Limitations
 Big Bang Delivery Theory
 The proof of the concept is relegated to the very end of a single long
   cycle. Before final integration, only documents have been produced.
 Late deployment hides many lurking risks:
    – technological (well, I thought they would work together...)
    – conceptual (well, I thought that's what they wanted...)
    – personnel (took so long, half the team left)
    – User doesn't get to see anything real until the very end.
    – System Testing doesn't get involved until later in the process.
   The Rational Unified Process
 RUP is a method of managing OO Software Development
 It can be viewed as a Software Development Framework
  which is extensible and features:
   – Iterative Development
   – Requirements Management
   – Component-Based Architectural Vision
   – Visual Modeling of Systems
   – Quality Management
   – Change Control Management
              RUP Features
 Online Repository of Process Information
  and Description in HTML format
 Templates for all major artifacts, including:
  – RequisitePro templates (requirements tracking)
  – Word Templates for Use Cases
  – Project Templates for Project Management
 Process Manuals describing key processes
The Phases
 An Iterative Development Process...
 Recognizes the reality of changing requirements
     – Capers Jones’s research on 8000 projects
          • 40% of final requirements arrived after the analysis phase, after
             development had already begun
   Promotes early risk mitigation, by breaking down the system into mini-
    projects and focusing on the riskier elements first
   Allows you to “plan a little, design a little, and code a little”
   Encourages all participants, including testers, integrators, and technical
    writers to be involved earlier on
   Allows the process itself to modulate with each iteration, allowing you
    to correct errors sooner and put into practice lessons learned in the
    prior iteration
   Focuses on component architectures, not final big bang deployments
 An Incremental Development Process...
 Allows for software to evolve, not be produced in one
    huge effort
   Allows software to improve, by giving enough time to the
    evolutionary process itself
   Forces attention on stability, for only a stable foundation
    can support multiple additions
   Allows the system (a small subset of it) to actually run
    much sooner than with other processes
   Allows interim progress to continue through the stubbing
    of functionality
   Allows for the management of risk, by exposing problems
    earlier on in the development process
   Goals and Features of Each Iteration
 The primary goal of each iteration is to slowly chip away
  at the risk facing the project, namely:
   – performance risks
   – integration risks (different vendors, tools, etc.)
   – conceptual risks (ferret out analysis and design flaws)
 Perform a “mini-waterfall” project that ends with a delivery
  of something tangible in code, available for scrutiny by the
  interested parties, which produces validation or corrections
 Each iteration is risk-driven
 The result of a single iteration is an increment: an
  incremental improvement of the system, yielding an
  evolutionary approach
           Risk Management
 Identification of the risks
 Iterative/Incremental Development
 The prototype or pilot project
  – Booch’s “Tiger Team”
 Early testing and deployment as opposed to
  late testing in traditional methods
     The Development Phases
 Inception Phase
 Elaboration Phase
 Construction Phase
 Transition Phase
                Inception Phase
 Overriding goal is obtaining buy-in from all interested
    parties
   Initial requirements capture
   Cost Benefit Analysis
   Initial Risk Analysis
   Project scope definition
   Defining a candidate architecture
   Development of a disposable prototype
   Initial Use Case Model (10% - 20% complete)
   First pass at a Domain Model
                Elaboration Phase
 Requirements Analysis and Capture
   – Use Case Analysis
       • Use Case (80% written and reviewed by end of phase)
       • Use Case Model (80% done)
       • Scenarios
           – Sequence and Collaboration Diagrams
           – Class, Activity, Component, State Diagrams
   – Glossary (so users and developers can speak a common vocabulary)
   – Domain Model
       • to understand the problem: the system’s requirements as they exist
         within the context of the problem domain
   – Risk Assessment Plan revised
   – Architecture Document
            Construction Phase
 Focus is on implementation of the design:
   – cumulative increase in functionality
   – greater depth of implementation (stubs fleshed out)
   – greater stability begins to appear
   – implement all details, not only those of central
     architectural value
   – analysis continues, but design and coding predominate
                 Transition Phase
 The transition phase consists of the transfer of the system
    to the user community
   It includes manufacturing, shipping, installation, training,
    technical support and maintenance
   Development team begins to shrink
   Control is moved to maintenance team
   Alpha, Beta, and final releases
   Software updates
   Integration with existing systems (legacy, existing
    versions, etc.)
     Elaboration Phase in Detail
 Use Case Analysis
   – Find and understand 80% of architecturally significant
      use cases and actors
   – Prototype User Interfaces
   – Prioritize Use Cases within the Use Case Model
   – Detail the architecturally significant Use Cases (write
      and review them)
 Prepare Domain Model of architecturally significant
  classes, and identify their responsibilities and central
  interfaces (View of Participating Classes)
     Introduction to XP

“When the tests all run, you’re done”
                    Options
 XP is designed around the concept of
  options
  –   Option to abandon
  –   Option to switch
  –   Option to defer
  –   Option to grow and learn
                The Four Variables
 Management or the Customer chooses three of the four variables; the
  development team determines the fourth.
 Cost
   – Cost is the amount of capital available, which defines resources.
     More resources don’t necessarily mean better quality or shorter
     time (remember Brooks?)
 Time
    – The amount of time available for the project through delivery
 Quality
    – Quality is the degree to which, and the aplomb with which, the
      functionality meets the requirements
 Scope
   – Scope is the amount of work to be done, the totality of the set of
     requirements. As requirements come and go, scope vacillates.
            The Paradigm Shift
 XP is based on the rejection of a fundamental and long-
  standing principle: that it costs less to make changes earlier
  in the development cycle than later, i.e., that the graph
  of cost to change is exponential across time. This
  fundamental principle has led to several strategies:
   – Better safe than sorry
   – Functional extravagance
   – Design extravagance
   – Proliferation of activities that may never provide a
      return on the investment
The Paradigm Shift Continued
 The fundamental technical premise of XP is that the graph of cost to
   change is not exponential but flattens out: as time goes by, the
   cost to change approaches an asymptote. “You make the big decisions as late in
   the process as possible.” This premise leads to several strategies:
    – You implement only what you have to, and add functionality
       later only if necessary
    – Design is parsimonious
    – Thoreau’s principle: Simplify, Simplify, Simplify.
    – Automated tests
    – Refactoring
    – Learning to drive analogy
     – Informality
                   The Four Values
 Communication
   – Communication is bipartite. Developers need to communicate
     with customers as well as between themselves
 Simplicity
    – “What’s the simplest thing that could possibly work?” Let’s do that.
 Feedback
    – Continuous and instant feedback on all artifacts
    – Continuous and instant feedback on the project’s progression
    – Continuous and instant feedback on code
 Courage
   – The courage to change (alter design, throw away code)
   – The courage to decide
   – The courage to do
   – The courage to be
       The Basic Principles of XP
 Rapid feedback
   – instant evaluation of all work and deliverables
 Assume simplicity
   – 98% of problems can be solved with “ridiculous simplicity”
   – What happened to complexity?
         • Complexity != complex solutions
 Incremental change
   – Avoid big changes, make smaller changes more often (driving analogy)
 Embracing change
   – Might as well. Heraclitus was right, Parmenides was wrong. You simply
      will not be stepping into the same river twice.
 Quality work
   – Work ethic
   – Is Beck a little too hopeful on the human condition?
          Subordinate Principles
 Teach learning
 Small initial investment
 Play to win
 Concrete experiments
 Open, honest communication
 Work with people’s instincts, not against them
 Accepted not foisted responsibility
 Local adaptation (of process)
 Travel light (the nomadic team)
 Honest measurement (no lying)
     The Four Basic Activities
 Coding
 Testing
 Listening
 Designing
    Dominance of Coding and Testing
 Code is unambiguous and constant. It offers no opinions.
 Code is another language for communication (as in pair
   programming)
   Tests allow for a secondary view into the code, from another angle
   Tests verify that “what was meant” was actually implemented
   Tests can validate performance as well as functionality
   You are responsible for writing multiple unit tests: a simple
    test for every possible way to “break” your code (see the
    sketch after this list).
   Automated tests extend the life of the code and provide
    continuous validation.
   A testing mentality promotes a more self-assured programming style, as
    successful tests yield confidence in the code.
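
A minimal JUnit 4-style sketch of that testing style; the BoundedStack class is hypothetical, defined here only so the tests have something to break:

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class BoundedStackTest {
        @Test
        public void pushThenPopReturnsSameValue() {
            BoundedStack s = new BoundedStack(4);
            s.push(42);
            assertEquals(42, s.pop());
        }

        @Test
        public void newStackIsEmpty() {
            assertTrue(new BoundedStack(4).isEmpty());
        }

        @Test(expected = IllegalStateException.class)
        public void popOnEmptyStackFails() {        // one way to "break" the code
            new BoundedStack(4).pop();
        }

        @Test(expected = IllegalStateException.class)
        public void pushBeyondCapacityFails() {     // another way to break it
            BoundedStack s = new BoundedStack(1);
            s.push(1);
            s.push(2);
        }
    }

    // Minimal hypothetical class under test: a bounded integer stack.
    class BoundedStack {
        private final int[] items;
        private int size;

        BoundedStack(int capacity) { items = new int[capacity]; }

        boolean isEmpty() { return size == 0; }

        void push(int v) {
            if (size == items.length) throw new IllegalStateException("full");
            items[size++] = v;
        }

        int pop() {
            if (size == 0) throw new IllegalStateException("empty");
            return items[--size];
        }
    }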
                       The Practices
 Planning – quickly determine the scope of the next iteration.
    Customers do the planning based on feedback from the developers.
     – “Software development is always an evolving dialog between the possible
       and the desirable.”
 Small Releases – take baby steps in each iteration. Rank iterations
    according to those which deliver the most valuable business
    requirements.
   Metaphor – define a simple story of how the system will work. It
    should be enlightening.
   Simple design – few classes and methods, no duplicated logic
   Testing – Developers write unit tests, Customers write functional tests
   Refactoring – revisiting code with rules that simplify it. “When
    the system requires that you duplicate code, it’s asking for
    refactoring.” (See the sketch after this list.)
   Pair Programming
   Collective Ownership – anyone can change any code at any time.
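
A minimal sketch of the Refactoring practice above, reusing the hypothetical duplicated-discount example from earlier in these notes: the duplicated rule is extracted into one method, so a logic change is made in exactly one place:

    class OrderReport {
        // After refactoring: the discount rule lives in one place.
        private double applyDiscount(double subtotal) {
            return subtotal - subtotal * 0.10;   // change the rate here, and only here
        }

        double invoiceTotal(double subtotal) { return applyDiscount(subtotal); }

        double quoteTotal(double subtotal)   { return applyDiscount(subtotal); }
    }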
               The Practices, cont.
 Continuous Integration – code is integrated after at most half a day to a
   full day of work. Integration means folding new code into the current system.
 Sane work week
 On-site customer – customer needs to be around
 Coding standards that all coders follow
                  Pair Programming
 One programmer writes the code, at the low level. He/she “has the
  ball”, or at least the keyboard.
 The other programmer looks at the code being written from a higher
  strategic level:
    –   What additional tests could break this?
    –   Can this be done more simply? (designing)
    –   Have I seen this before? (Refactoring)
    –   Did the guy with the ball just introduce a bug?
    –   Is this the best approach to this problem?
    –   Did the guy with the ball forget something?
    –   Does a question need to be answered by the Customer?
 Coding standards help reduce the need for reformatting code and
  bickering about style.
 Pairs write tests together too, following the same principles.
“Problems” With Pair Programming

 What happens on a geographically
  distributed development team?
 Management will object to “waste”: you
  only get half as much done, or we’ll need
  twice as many programmers.
 Pairs will naturally “self-select” in a
  Darwinian sense, militating against the “teach
  learning” principle.
           Project Planning
 Three phases:
  – Exploration
  – Commitment
  – Steering
           Exploration Phase
 Write a story (think “simplified” Use Case)
 Estimate a story: how long will it take to
  code this?
 Split a story: if a part of a story is more
  important than another, split it into two
  stories
            Commitment Phase
 Business chooses the scope and delivery date of
  the next iteration
 Four movements:
   – Sort by value (must have, should have, nice to have)
   – Sort by (estimation) risk
   – Set velocity – how quickly do we expect to move on
     this?
   – Choose Scope – Ok, given the above, what are we to
     deliver and when is it due?
                 Steering Phase
 Four movements:
   – Iteration
        • Iterations generally run one to three weeks.
       • Each iteration selects one or more stories to
         implement. Each iteration must yield a system that
         runs end-to-end, however embryonically.
   – Recovery: if development has overstated velocity, re-
     evaluate the set of stories (deliverables)
   – New story: If business realizes it’s got a new story, the
     new story is estimated, ranked, and added.
   – Reestimate: If development feels the plan is
     inadequate, it can reestimate the remaining stories and
     reset the estimated velocity.
                  Iteration Planning
 Task planning
 Three Phases:
   – Exploration Phase
      • Write a task by breaking down the stories into tasks
      • Split a task if necessary
   – Commitment Phase
      • Accept a task
      • Estimate a task
   – Steering Phase
      • Implement a task
      • Record Progress
      • Recovery – what to do if overworked: manage scope
      • Verify story with functional tests
   What about Design Strategy?
 Start with a test. A simple test.
 Design and implement just enough to get
  that test running, and make sure you don’t
  break another test.
 Add functionality and repeat
 Refactor.
 “The definition of the best design is the
  simplest design that runs all the test cases.”
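
A minimal sketch of that rhythm, assuming JUnit 4 and a hypothetical Money class: the test states the intent first, then just enough code is written to make it pass:

    import static org.junit.Assert.*;
    import org.junit.Test;

    // Step 1: write the failing test before any implementation exists.
    public class MoneyTest {
        @Test
        public void addingTwoAmountsSumsThem() {
            assertEquals(15, new Money(10).plus(new Money(5)).amount());
        }
    }

    // Step 2: the simplest Money that runs the test; nothing more.
    class Money {
        private final int amount;

        Money(int amount) { this.amount = amount; }

        Money plus(Money other) { return new Money(amount + other.amount); }

        int amount() { return amount; }
    }

Only after the test runs is new functionality added, and only then is the code refactored.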
             Use Case Analysis
 What is a Use Case?
   – A sequence of actions a system performs that yields a
     valuable result for a particular actor.
 What is an Actor?
   – A user or outside system that interacts with the system
     being designed in order to obtain some value from that
     interaction
 Use Cases comprise scenarios that describe the interaction
  between users of the system and the system itself.
 Use Cases describe WHAT the system will do, but never
  HOW it will be done.
            What’s in a Use Case?
 Define the start state and any preconditions that accompany it
 Define when the Use Case starts
 Define the order of activity in the Main Flow of Events
 Define any Alternative Flows of Events
 Define any Exceptional Flows of Events
 Define any Post Conditions and the end state
 Mention any design issues as an appendix
 Accompanying diagrams: State, Activity, Sequence Diagrams
 View of Participating Objects (relevant Analysis Model Classes)
 Logical View: A View of the Actors involved with this Use Case, and
   any Use Cases used or extended by this Use Case
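
A short hypothetical example following that outline, for an ATM system:

    Use Case: Withdraw Cash
    Actor: Bank Customer
    Preconditions: The customer holds a valid card; the ATM has cash.
    Starts when: The customer inserts a card and requests a withdrawal.
    Main Flow of Events: The system validates the PIN, checks the amount
      against the account balance, dispenses the cash, and prints a receipt.
    Alternative Flow: The balance is insufficient; the system offers a
      smaller amount.
    Exceptional Flow: The PIN fails three times; the system retains the card.
    Post Conditions: The account balance reflects the withdrawal; the system
      returns to the welcome screen.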
        Use Cases Describe Function not Form
 Use Cases describe WHAT the system will do, but never HOW it will be done.
 Use Cases are Analysis Products, not Design Products.
           Benefits of Use Cases
 Use cases are the primary vehicle for requirements capture
    in RUP
   Use cases are described using the language of the customer
    (language of the domain which is defined in the glossary)
   Use cases provide a contractual delivery process (RUP is
    Use Case Driven)
   Use cases provide an easily-understood communication
    mechanism
   When requirements are traced, it is difficult for
    them to fall through the cracks
   Use cases provide a concise summary of what the system
    should do at an abstract (low modification cost) level.
      Difficulties with Use Cases
 Because use cases are functional decompositions, it is often difficult
  to make the transition from functional description to object
  description to class design
 Reuse at the class level can be hindered by each developer
  “taking a Use Case and running with it”. Since use cases do not
  talk about classes, developers often find themselves in a vacuum
  during object analysis and can wind up doing things
  their own way, making reuse difficult
 Use Cases make stating non-functional requirements
  difficult (where do you say that X must execute at Y/sec?)
 Testing functionality is straightforward, but unit testing the
  particular implementations and non-functional
  requirements is not obvious
          Use Case Model Survey
 The Use Case Model Survey illustrates, in
  graphical form, the universe of Use Cases that the
  system is contracted to deliver.
 Each Use Case in the system appears in the
  Survey with a short description of its main
  function.
   – Participants:
      •   Domain Expert
      •   Architect
      •   Analyst/Designer (Use Case author)
      •   Testing Engineer
Sample Use Case Model Survey (diagram)
                   Analysis Model
 In Analysis, we analyze and refine the requirements described in the
  Use Cases in order to achieve a more precise view of the requirements,
  without being overwhelmed with the details
 Again, the Analysis Model is still focusing on WHAT we’re going to
  do, not HOW we’re going to do it (Design Model). But what we’re
  going to do is drawn from the point of view of the developer, not from
  the point of view of the customer
 Whereas Use Cases are described in the language of the customer, the
  Analysis Model is described in the language of the developer:
    – Boundary Classes
    – Entity Classes
    – Control Classes
  Why spend time on the Analysis Model? Why not just “face the cliff”?
 By performing analysis, designers can inexpensively come to a better
  understanding of the requirements of the system
 By providing such an abstract overview, newcomers can understand
  the overall architecture of the system efficiently, from a ‘bird’s eye
  view’, without having to get bogged down with implementation
  details.
 The Analysis Model is a simple abstraction of what the system is going
  to do from the point of view of the developers. By “speaking the
  developer’s language”, comprehension is improved and by abstracting,
  simplicity is achieved
 Nevertheless, the cost of maintaining the Analysis Model through
  construction must be weighed against the value of having it all along.
                Boundary Classes
 Boundary classes are used in the Analysis Model to model interactions
  between the system and its actors (users or external systems)
 Boundary classes are often implemented in some GUI format (dialogs,
  widgets, beans, etc.)
 Boundary classes can often be abstractions of external APIs (in the
  case of an external system actor)
 Every boundary class must be associated with at least one actor.
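
A minimal Java sketch (all names hypothetical) of a boundary class standing between the system and an external payment-gateway actor:

    // Boundary class: the system's single point of contact with the
    // external PaymentGateway actor; it wraps the external API.
    class PaymentGatewayBoundary {
        private final PaymentGatewayApi api;    // abstraction of the external API

        PaymentGatewayBoundary(PaymentGatewayApi api) { this.api = api; }

        boolean charge(String accountId, int cents) {
            // translate the system's request into the actor's protocol
            return api.submitCharge(accountId, cents);
        }
    }

    // Stand-in for the external system actor's interface.
    interface PaymentGatewayApi {
        boolean submitCharge(String accountId, int cents);
    }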
             Entity Classes
 Entity classes are used within the Analysis
  Model to model persistent information
 Often, entity classes are created from
  objects within the business object model or
  domain model
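
A minimal sketch of an entity class (hypothetical names), modeling persistent domain information independent of any user interface:

    // Entity class: long-lived information drawn from the domain model.
    class Order {
        private final String id;
        private int totalCents;

        Order(String id) { this.id = id; }

        String id() { return id; }

        int totalCents() { return totalCents; }

        void addLine(int cents) { totalCents += cents; }  // domain behavior on the data
    }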
                   Control Classes
 The Great Et Cetera
 Control classes model abstractions that coordinate, sequence, transact,
  and otherwise control other objects
 In the Smalltalk MVC mechanism, these are the controllers
 Control classes often encapsulate interactions between other
  objects, handling and coordinating actions and control flows.
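
A minimal sketch of a control class (hypothetical names, reusing the boundary and entity sketches above) coordinating one interaction:

    // Control class: sequences the boundary and entity objects to carry
    // out a single interaction; it holds no persistent data of its own.
    class CheckoutController {
        private final PaymentGatewayBoundary gateway;

        CheckoutController(PaymentGatewayBoundary gateway) { this.gateway = gateway; }

        boolean checkout(Order order, String accountId) {
            if (order.totalCents() <= 0) return false;            // validate
            return gateway.charge(accountId, order.totalCents()); // charge via boundary
        }
    }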

				