
									Self-Adjusting Computation

           Robert Harper
      Carnegie Mellon University
  (With Umut Acar and Guy Blelloch)
              The Problem
• Given a static algorithm, obtain a dynamic,
  or incremental, version.
  – Maintain a sorted list under insertions and
    deletions
  – Maintain a convex hull under motions of the
    points.
  – Maintain semantics of a program under edits.
          Example: Sorting
Input:

      5, 1, 2, 3          (initial)
      5, 1, 4, 2, 3       (after inserting 4)

Output:

      1, 2, 3, 5          (initial)
      1, 2, 3, 4, 5       (after inserting 4)
       Dynamic Algorithms
• There is a large body of work on
  dynamic / incremental algorithms.
  – Specific techniques for specific problems.
• Our interest is in general methods,
  rather than ad hoc solutions.
  – Applying them to a variety of problems.
  – Understanding when these methods apply.
  Self-Adjusting Computation
• Self-adjusting computation is a method
  for “dynamizing” a static algorithm.
  – Start with a static algorithm for a problem.
  – Make it robust under specified changes.
• Goal: “fast” response to “small” change.
  – “Fast” and “small” are problem-specific!
  – As ever, the analysis can be difficult.
  Self-Adjusting Computation
• Generalizes incremental computation.
  – Attribute grammars, circuit models assume static
    control dependencies.
  – SAC permits dynamic dependencies.
• Combines algorithmic and programming
  language techniques.
  – Linguistic tools to ensure correctness relative to
    static algorithm.
  – Algorithmic techniques for efficient implementation.
  Self-Adjusting Computation
• Adaptivity:
  Propagate the effects on the output of a
  change to the input.
• Selective Memoization:
  Reuse old results, provided they are valid
  after change.
• Adaptive Memoization:
  Reuse old results, even though they may not
  be valid after change.
      Model of Computation
• Purely functional programming model.
  – Data structures are persistent.
  – No implicit side effects or mutation.
• Imperative model of change.
  – Run on initial input to obtain output.
  – Make modifications to the input.
  – Propagate changes to the output.
Model of Computation
• Stages of revision are impure and ephemeral.
• Steps of execution are pure and persistent.
   A Simple Example: Map
data cell = nil | cons of int * list
and list = cell

fun map (l:list) = map' l
and map' c =
  case c of
    nil => nil
  | cons (h, t) => cons (h+10, map t)
      Static Version of Map
• Running map on the input list …

      2 → 3 → 4

• … yields the new output list

      12 → 13 → 14
    Dynamic Version of Map
• To permit insertions and deletions, lists
  are made modifiable:

  data cell =
    nil | cons of int * list
  and list =
    cell mod
    Dynamic Version of Map
Insertion changes a modifiable:

    2 → 3 → 4 → 5
        (new cell holding 4 spliced in before 5)
    Dynamic Version of Map
We'd like to obtain the result …

   12 → 13 → 14 → 15
        (new cell holding 14 spliced into the old output)
    Dynamic Version of Map
• Can we update the result in O(1) time?
  – Make one new call to map.
  – Splice new cell into “old” result.
• Yes, using self-adjusting computation!
  – Adaptivity: call map on the new node.
  – Memoization: re-synchronize with suffix.
       Adaptivity Overview
• To make map adaptive, ensure that
  – Changes invalidate results that depend on
    the modified value.
  – Computations dependent on a change are
    re-run with the “new” value.
• Two key ideas:
  – Make access to modifiables explicit.
  – Maintain dependencies dynamically.
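The two key ideas can be imitated in a few lines of Python. This is an illustrative toy (the class and method names are invented here), not the ML implementation the talk describes:

```python
# Toy adaptive modifiables: reading registers a dependent computation,
# and writing re-runs every registered reader (change propagation).

class Mod:
    def __init__(self, value=None):
        self.value = value
        self.readers = []          # computations that read this cell

    def read(self, use):
        self.readers.append(use)   # record the dependency dynamically
        use(self.value)            # run the reader on the current value

    def write(self, value):
        if value != self.value:    # a change invalidates dependent reads
            self.value = value
            for use in list(self.readers):
                use(value)         # re-run readers with the new value

# An adaptive computation: out always holds inp + 10.
inp, out = Mod(2), Mod()
inp.read(lambda v: out.write(v + 10))
assert out.value == 12
inp.write(5)                       # imperative change to the input ...
assert out.value == 15             # ... propagates to the output
```

A real implementation must also discard stale reads and maintain the containment ordering of one read inside another, both of which this toy omits.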
             Adaptive Map
data cell = nil | cons of int * list
and list = cell mod

fun map (l:list) =
  mod                            (* allocate new modifiable *)
  (let mod c = l in              (* read old modifiable *)
     write (map' c))             (* write new modifiable *)
and map' c =
  case c of
    nil => nil
  | cons (h, t) => cons (h+10, map t)
              Adaptive Map
Modification to input:

     2 → 3 → 4 → 5

The modified cell is the argument of a call to map', which must be re-run.
              Adaptive Map
• Associated output is invalidated, and
  the suffix is re-created:

     12 → 13 → 14 → 15

The result of re-running map' is written into the invalidated output cell.
     Adaptive Programming
• Crux: dependencies among modifiables.
  – Writing a modifiable invalidates any
    computation that reads it.
  – One read can be contained within another.
• Dependencies are fully dynamic!
  – Cells are allocated dynamically.
  – Reads affect control flow.
      Adaptive Programming
• Change propagation consists of
  – Re-running readers of changed cells.
  – Updating dependencies during re-run.
• To ensure correctness,
  – All dependencies must be accurately tracked.
  – Containment ordering must be maintained.
• Linguistic tools enforce these requirements!
  Type System for Adaptivity
• The type τ mod is a modality.
  – From lax modal logic.
  – And therefore forms a monad.
• Two modes of expression:
  – Stable: ordinary functional code, not
    affected by changes.
  – Changeable: affected by change, will be
    written to another modifiable.
  Type System for Adaptivity
• Elimination form for τ mod:
  let mod x = s in c end
  – Read the modifiable given by s.
  – Bind its value to x, then evaluate c.
• Makes dependencies explicit:
  – Records read of given modifiable.
  – Re-run c with new x if changed.
  – Reads within c are contained in this read.
  Type System for Adaptivity
• Execution maintains a trace of adaptive events.
  – Creation of a modifiable.
  – Writes to a modifiable.
  – Reads of a modifiable.
• Containment is recorded using the Dietz-Sleator
  order-maintenance algorithm.
  – Associate time intervals with events.
  – Re-run reader within the “old” time interval.
  – Requires arbitrary fractionation of time steps.
           Adaptive Map
• Responds to insertion in linear time:

    12 → 13 → 14 → 15
        (the entire suffix after the change is recomputed)
           Memoizing Map
• For constant-time update, we must
  re-synchronize with the old result.
  – Results after insertion point remain valid
    despite change.
  – Re-use, rather than recompute, to save
    time.
• Selective memoization is a general
  technique for achieving this.
     Selective Memoization
• Standard memoization is data-driven.
  – Associate with a function f a finite set of
    ordered pairs (x, f(x)).
  – Consult memo table before call, update
    memo table after call.
• Cannot handle partial dependencies.
  – E.g., read only the first 10 elements of an array.
  – E.g., use an approximation of the input.
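Standard data-driven memoization can be sketched in a few lines of Python (names invented here). The table keys on the whole argument, so it cannot exploit partial dependencies:

```python
# Data-driven memoization: the table pairs each full argument x with
# f(x), so any change to x forces a fresh call, relevant or not.

def memoize(f):
    table = {}
    def wrapped(x):
        if x not in table:        # consult the table before the call
            table[x] = f(x)       # update it after the call
        return table[x]
    return wrapped

calls = []

@memoize
def head_plus_len(xs):
    calls.append(xs)              # count the real invocations
    return xs[0] + len(xs)

assert head_plus_len((5, 1, 2, 3)) == 9
assert head_plus_len((5, 1, 2, 3)) == 9   # cache hit: f not re-run
assert head_plus_len((5, 1, 2, 9)) == 9   # same head and length, but the
assert len(calls) == 2                    # data-driven key misses anyway
```

The last call only depends on the head and the length, yet the changed final element defeats the cache, which is exactly the limitation noted above.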
     Selective Memoization
• Selective memoization is control-driven.
  – Guided by the exploration of the input.
  – Sensitive to approximations.
• Associate results with control paths.
  – “Have I been here before?”
  – Control path records dependencies of
    output on input.
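The control-driven idea admits a toy Python rendition (names invented here, not the paper's mcase/! machinery): the memo key records only the aspects of the input the computation actually explored.

```python
# Control-driven memoization sketch: the key is the control path taken,
# so inputs that agree on the explored aspects share one cached result.

table, calls = {}, []

def map_head(xs):
    # Control path: we inspect only the constructor (empty or not) and,
    # for a nonempty list, the head; the tail is never explored.
    key = ('cons', xs[0]) if xs else ('nil',)
    if key not in table:
        calls.append(key)          # count the real computations
        table[key] = xs[0] + 10 if xs else None
    return table[key]

assert map_head([2, 3, 5]) == 12
assert map_head([2, 3, 4, 5]) == 12   # different tail, same control path:
assert len(calls) == 1                # the cached result is reused
```

Here "have I been here before?" means "have I seen this control path?", so the insertion deeper in the list does not invalidate the cached answer.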
    Memoized Adaptive Map
fun map (l:list) =                 (* depends only on nil/cons *)
  mod
  (let mod c = l in write (map' c))
and memo map' c =                  (* depends on nil/cons, head, and tail *)
  mcase c of
    nil => return (nil)
  | cons (h, t) =>
      let !h' = h and !t' = t in
        return (cons (h'+10, map t'))
    Memoized Adaptive Map
• With selective memoization, on the changed input

    2 → 3 → 4 → 5

• … we obtain a constant-time update after insertion.
    Memoized Programming
• Selective memoization requires an
  accurate record of I/O dependencies.
  – Which aspects of the input are relevant to
    the output?
  – Sensitive to dynamic control flow.
• Linguistic support provides
  – Specification of approximations.
  – Accurate maintenance of dependencies.
Type System for Memoization
• Based on S4 modal logic for necessity.
  – Truth assumptions: restricted variables.
  – Validity assumptions: ordinary variables.
• Modality ! means a value is necessary.
  – !(int * int) : both components necessary.
  – !int * !int : either, neither, or both parts
    needed, depending on control flow.
Type System for Memoization
• Key idea: variables are classified as restricted
  or unrestricted.
   – Arguments to memoized functions are restricted.
   – Results of memoized functions may not involve
     restricted variables.
   – Elimination form for ! binds an unrestricted
     variable.
• Ensures that relevant portion of input must be
  explicitly “touched” to record dependency.
     Selective Memoization
fun map (l:list) = …
and memo map' c =
  mcase c of
    nil => return (nil)
  | cons (h, t) =>                   (* h, t restricted *)
      let !h' = h and !t' = t in     (* h', t' unrestricted *)
        return (cons (h'+10, map t'))
      Adaptive Memoization
• Effectiveness of memoization depends
  on preserving identity.
  – Modifiables compare “by reference”.
  – Copying a modifiable impedes re-use.
• This conflicts with the functional
  programming model.
  – Eg, functional insertions copy structure.
  – Undermines effectiveness of memoization.
     Adaptive Memoization
• Consider again applying map to:

     2 → 3 → 5

• We obtain the new modifiable list

    12 → 13 → 15
        Adaptive Memoization
• Now functionally insert an element:

    2 → 3 → 4 → 5
     Adaptive Memoization
• Running map on the result yields

    12 → 13 → 14 → 15
     Adaptive Memoization
• Subsequent runs propagate the effect:

    22 → 23 → 24 → 25
      Adaptive Memoization
• Ideally, we'd re-use the "old" prefix!

    12 → 13 → 14 → 15

• But what justifies this?
 Memoization and Adaptivity
• Consider a memoized “copy” function:

 fun copy (!m : int mod) =
  return (mod (let mod x = m in x))


• What happens if we modify m?
  – Value might change “under our feet”.
  – But adaptivity restores correctness!
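This interplay can be imitated in miniature. In the following self-contained Python toy (invented API, not the paper's mechanism), a memo hit returns a stale cached cell, and change propagation then repairs it:

```python
# Toy modifiables as before: writes re-run registered readers.

class Mod:
    def __init__(self, value=None):
        self.value, self.readers = value, []
    def read(self, use):
        self.readers.append(use)   # record the dependency
        use(self.value)
    def write(self, value):
        if value != self.value:
            self.value = value
            for use in list(self.readers):
                use(value)         # propagate the change

memo = {}

def copy(m):
    # The memo hit ignores m's current contents, so the cached cell may
    # be stale at the moment it is returned ...
    if 'copy' not in memo:
        out = Mod()
        m.read(out.write)          # ... but this read makes it adaptive
        memo['copy'] = out
    return memo['copy']

m = Mod(43)
assert copy(m).value == 43
m.write(17)                        # the stale cached result is revised
assert copy(m).value == 17         # by change propagation
```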
  Memoization and Adaptivity
• Initial run of copy: an input cell holding 43
  yields an output cell holding 43.

• Calls to copy at later stages return the "old"
  output cell, which is updated by adaptivity when
  the input is changed to 17.
      Adaptive Memoization
• Permit inaccurate memoization.
  – Allows recovery of “old” cells.
  – Cached result will be incorrect.
• Adapt incorrect result to restore
  correctness.
  – Use change propagation to revise answer.
  – Only sensible in conjunction with
    adaptivity!
     Adaptive Memoization
fun map (l:list) = …
and memo map' c =
  mcase c of
    nil => return (nil)
  | cons (h, t) =>
      let !h' = h in      (* memo match only on nil/cons and head *)
       let ?t' = t in     (* do not record dependency on the tail *)
        return (cons (h'+10, map t'))
   Adaptively Memoized Map
• On the initial input …

     2 → 3 → 5

• Map yields the output …

     12 → 13 → 15
   Adaptively Memoized Map
• After a functional update, the input is

     2 → 3 → 4 → 5
   Adaptively Memoized Map
• Now map yields the inaccurate result:

    12 → 13 → 15        (the tail is incorrect!)

• The memo matches on head value 2,
  yielding the old result, with an incorrect tail.
  Adaptively Memoized Map
• Restore accuracy by self-assignment:

    2 → 3 → 4 → 5
  Adaptively Memoized Map
• Change to the input propagates to the output:

   12 → 13 → 14 → 15
           Some Results
• Quicksort: expected O(lg n) update after
  insert or delete at a random position.
• Mergesort: expected O(lg n) update
  after insert or delete.
• Tree Contraction: O(lg n) update after
  adding or deleting an edge.
• Kinetic Quickhull: O(lg n) per event,
  measured empirically.
           Ongoing Work
• When can an algorithm be dynamized?
  – Consider edit distance between traces for
    a class of input changes.
  – Small edit distance suggests we can build
    a dynamic version using SAC.
• What is a good semantic model?
  – Current methods are rather ad hoc.
  – Are there better models?
              Conclusion
• Self-adjusting computation is a powerful
  method for building dynamic algorithms.
  – Systematic methodology.
  – Simple correctness criteria.
  – Easy to implement.
• The interplay between linguistic and
  algorithmic methods is vital!
  – Language design is also a powerful algorithmic tool!
Questions?

								