Implementation/Infrastructure Support for Collaborative Applications

        Prasun Dewan




                                1
Infrastructure vs. Implementation
           Techniques
• Implementation techniques are interesting when
  they are general
   – Apply to a class of applications
• The coding of such an implementation technique is
  an infrastructure.
• Sometimes implementation techniques apply to a
  very narrow app set
   – Operation transformation for text editors.
• These may not qualify as infrastructures.
• Will study implementation techniques applying to
  small and large application sets.
                                                     2
Collaborative Application

   [Figure: coupling between the users of a collaborative application]

                            3
Infrastructure-Supported Sharing

   [Figure: a single-user client running on top of a sharing infrastructure,
    which provides the coupling between the users]

                                      4
             Systems: Infrastructures
•   NLS (Engelbart '68)               •   Post Xerox
•   Colab (Stefik '85)                •   Xerox
•   VConf (Lantz '86)                 •   Stanford
•   Rapport (Ahuja '89)               •   Bell Labs
•   XTV (Abdel-Wahab, Jeffay & Feit   •   UNC/ODU
    '91)
                                      •   Bellcore
•   Rendezvous (Patterson '90)
•   Suite (Dewan & Choudhary '92)
                                      •   Purdue
•   TeamWorkstation (Ishii '92)
                                      •   Japan
•   Weasel (Graham '95)
                                      •   Queens
•   Habanero (Chabert et al '98)
                                      •   U. Illinois
•   JCE (Abdel-Wahab '99)
                                      •   ODU
•   Disciple (Marsic '01)
                                      •   Rutgers


                                                        5
        Systems: Products
• VNC (Li, Stafford-    • AT&T Research
  Fraser, Hopper '01)   • Microsoft
• NetMeeting
• Groove
• Advanced Reality
• LiveMeeting (Pay      • Microsoft
  by minute service
  model)
• Webex (service
  model)



                                         6
                 Issues/Dimensions
•   Architecture
•   Session management
•   Access control
•   Concurrency control
•   Firewall traversal
•   Interoperability
•   Composability
•   …

    [Figure: each dimension (architecture model, session management,
     concurrency control, …) can be realized by different implementations
     in different collaborative systems]
                                                          7
Infrastructure-Supported Sharing

   [Figure (repeated): a single-user client running on top of a sharing
    infrastructure, which provides the coupling between the users]

                                      8
Architecture?
                Infrastructure/
                client (logical)
                components


                Component
                (physical)
                distribution




                       9
Shared Window Logical Architecture
             Application

              Near-
             WYSIWIS
    Window                 Window
             Coupling




                                    10
Centralized Physical Architecture
                                           XTV („88)
                                           VConf („87)
        X Client                           Rapport („88)
                                           NetMeeting
                 Input/Output
      Pseudo Server        Pseudo Server



        X Server            X Server


         User 1             User 2

                                                  11
Replicated Physical Architecture
                                             Rapport
                                             VConf
       X Client                X Client

                     Input
     Pseudo Server           Pseudo Server



       X Server               X Server


        User 1                User 2

                                                  12
   Relaxing WYSIWIS?
         Application

          Near-
         WYSIWIS
Window                 Window
         Coupling




                                13
Model-View Logical Architecture
             Model



    View             View



    Window           Window




                              14
Centralized Physical Model
                  Rendezvous
          Model    („90, ‟95)


 View             View



 Window           Window




                                15
Replicated Physical Model
         Infrastructure            Sync ‟96,
Model                     Model
                                    Groove


View                      View



Window                    Window




                                         16
          Comparing the Architectures
                                 App               App
    App

              I/O                         Input
   Pseudo           Pseudo      Pseudo            Pseudo
   Server           Server      Server            Server


   Window           Window       Window           Window




            Model             Model                Model    Architecture
                                                              Design
 View                 View     View                 View
                                                              Space?


Window               Window   Window               Window
                                                                    17
      Architectural Design Space
• Model/ View are Application-Specific
• Text Editor Model
  –   Character String
  –   Insertion Point
  –   Font
  –   Color
• Need to capture these differences in
  architecture
                                         18
         Single-User Layered Interaction
                Layer 0    Layer 0


            Layer 1       Layer 1

                                                  Increasing
Communication                        I/O Layers   Abstraction
Layers

           Layer N-1      Layer N-1



           Layer N        Layer N


    PC                                                Physical
                                                      Devices
                                                         19
 Single-User Interaction
         Layer 0


         Layer 1


                       Increasing
                       Abstraction
        Layer N-1



        Layer N


PC
         PC

                                20
Example I/O Layers
      Model


      Widget


                     Increasing
                     Abstraction
     Window



     Framebuffer



      PC

                              21
    Layered Interaction with an Object
                       {“John Smith”,   Abstraction
                        2234.57}

Interactor =                            Interactor/
                   •   John Smith
Abstraction        •                    Abstraction
Representation
+                 •    John Smith        Interactor/
Syntactic Sugar   •                      Abstraction


                                    X

                  •    John Smith         Interactor
                  •



                                                       22
Single-User Interaction
        Layer 0


        Layer 1


                      Increasing
                      Abstraction
       Layer N-1



       Layer N



        PC

                               23
         Identifying the Shared Layer
Higher layers will    Layer 0     Program
also be shared                    Component
   Shared
                     Layer S
   Layer

                                  Increasing
Lower layers may                  Abstraction
diverge
                     Layer S+1

                                  User-
                                  Interface
                     Layer N      Component


                     PC

                                           24
          Replicating UI Component




Layer S+1         Layer S+1   Layer S+1



Layer N           Layer N     Layer N



PC                 PC          PC

                                          25
            Centralized Architecture
                     Layer 0


                    Layer S




Layer S+1           Layer S+1    Layer S+1



Layer N             Layer N      Layer N



PC                  PC            PC

                                             26
     Replicated (P2P) Architecture
 Layer 0        Layer 0       Layer 0


Layer S         Layer S      Layer S




Layer S+1      Layer S+1     Layer S+1



Layer N        Layer N       Layer N



PC              PC           PC

                                         27
Implementing Centralized Architecture
                          Layer 0


                         Layer S

                    Master Input Relayer
Slave I/O Relayer                          Slave I/O Relayer
                    Output Broadcaster

   Layer S+1            Layer S+1             Layer S+1



   Layer N              Layer N               Layer N


             PC


                                                           28
               Replicated Architecture
    Layer 0              Layer 0              Layer 0


    Layer S             Layer S              Layer S


Input Broadcaster     Input Broadcaster   Input Broadcaster


   Layer S+1           Layer S+1            Layer S+1



   Layer N             Layer N              Layer N


              PC


                                                         29
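The relaying roles in the preceding two diagrams can be summarized in code. A hedged Java sketch with invented names (not the API of any of the infrastructures listed later): the centralized case needs a master input relayer and output broadcaster at the hosting site plus a slave I/O relayer at every other site, while the replicated case needs only a symmetric input broadcaster at every site.

// Illustrative sketch of the relaying roles; Event/Connection types are assumed.
import java.util.List;

interface Event { }                      // an input or output event crossing layer S
interface Connection { void send(Event e); }
interface SharedLayer { void process(Event input); }   // layers 0..S
interface LocalLayer  { void render(Event output); }   // layers S+1..N

/* Centralized: one master hosts layers 0..S; other sites host only S+1..N. */
class MasterInputRelayer {
    // Feeds input relayed by slaves into the single shared layer S.
    void onRemoteInput(Event input, SharedLayer layerS) { layerS.process(input); }
}
class OutputBroadcaster {
    private final List<Connection> slaves;
    OutputBroadcaster(List<Connection> slaves) { this.slaves = slaves; }
    // Sends every layer-S output to every slave's layer S+1.
    void onOutput(Event output) { for (Connection c : slaves) c.send(output); }
}
class SlaveIORelayer {
    private final Connection master;
    SlaveIORelayer(Connection master) { this.master = master; }
    void onLocalInput(Event input) { master.send(input); }          // up to the master
    void onRemoteOutput(Event output, LocalLayer layerS1) { layerS1.render(output); }
}

/* Replicated: every site hosts layers 0..N and broadcasts its input to peers. */
class InputBroadcaster {
    private final List<Connection> peers;
    private final SharedLayer localLayerS;
    InputBroadcaster(List<Connection> peers, SharedLayer localLayerS) {
        this.peers = peers; this.localLayerS = localLayerS;
    }
    void onLocalInput(Event input) {
        localLayerS.process(input);                   // local feedback
        for (Connection c : peers) c.send(input);     // same input to every replica
    }
    void onRemoteInput(Event input) { localLayerS.process(input); }
}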
       Classifying Previous Work
–   XTV
–   NetMeeting App Sharing   Shared
–   NetMeeting Whiteboard    Layer
–   Shared VNC
–   Habanero
                                      Rep vs.
–   JCE                               Central
–   Suite
–   Groove
–   LiveMeeting
–   Webex



                                                30
        Classifying Previous Work
• Shared layer
   – X Windows (XTV)                 Shared
   – Microsoft Windows (NetMeeting Layer
   App Sharing)
   – VNC Framebuffer (Shared VNC)
                                                  Rep vs.
   – AWT Widget (Habanero, JCE)                   Central
   – Model (Suite, Groove, LiveMeeting)
• Replicated vs. centralized
   – Centralized (XTV, Shared VNC, NetMeeting App. Sharing,
     Suite, PlaceWare)
   – Replicated (VConf, Habanero, JCE, Groove, NetMeeting
     Whiteboard)
                                                          31
     Service vs. Server vs. Local
            Communication
• Local: User site sends data
  – VNC, XTV, VConf, NetMeeting Regular
• Server: Organization‟s site connected by
  LAN to user site sends data
  – NetMeeting Enterprise, Sync
• Service: External sites connected by WAN
  to user site sends data
  – LiveMeeting, Webex

                                             32
            Push vs. Pull of Data
• Consumer pulls new data by sending request for it
  in response to
   – notification
      • MVC
   – receipt of previous data
      • VNC
• Producer pushes data for consumers
   – As soon as data are produced
      • NetMeeting, Real-time sync
   – When user requests
      • Asynchronous Sync

                                                  33
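A minimal Java sketch of the two delivery styles above, with invented Producer/Consumer names (not taken from any of the systems mentioned): in push, the producer delivers each update to its registered consumers as soon as it is produced; in pull, each consumer requests the next update itself, for example after it has finished processing the previous one.

// Hedged sketch of push vs. pull delivery; names are illustrative only.
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

interface Consumer { void deliver(String update); }

class PushProducer {
    private final List<Consumer> consumers = new CopyOnWriteArrayList<>();
    void register(Consumer c) { consumers.add(c); }
    // Push: broadcast as soon as the update is produced.
    void produce(String update) {
        for (Consumer c : consumers) c.deliver(update);
    }
}

class PullProducer {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    void produce(String update) { pending.add(update); }
    // Pull: the consumer asks for the next update, e.g., after it has
    // finished rendering the previous one (request/response style).
    String pullNext() throws InterruptedException { return pending.take(); }
}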
                Dimensions
•   Shared layer level.
•   Replicated vs. Centralized.
•   Local vs. Server vs. Service Broadcast
•   Push vs. Pull Data
•   …



                                             34
    Evaluating design space points
• Coupling Flexibility   Performance
                            –   Bandwidth usage
• Automation
                            –   Computation load
• Ease of Learning          –   Scaling
• Reuse                     –   Join/leave time
• Interoperability          –   Response time
                                 • Feedback to actor
• Firewall traversal                 – Local
                                     – Remote
• Concurrency and
                                 • Feedthrough to observers
  correctness                        – Local
• Security                           – Remote
                            – Task completion time
                                                          35
Sharing Low-Level vs. High-Level Layer
• Sharing a layer nearer the data
   – Greater view independence
   – Less bandwidth usage
       • Though for large data the visualization is sometimes more compact.
   – Finer-grained access and concurrency control
       • Shared window systems support floor control.
   – Replication problems better solved with more app semantics
       • More on this later.
• Sharing a layer nearer the physical device
   – Referential transparency
       • "The green object" has no meaning if objects are colored differently.
   – Higher chance the layer is standard
       • Sync vs. VNC
       • Promotes reusability and interoperability
• Sharing flexibility is limited with a fixed shared layer
   – Need to support multiple layers.
                                                                            36
 Centralized vs. Replicated: Dist.
        Comp. vs. CSCW
• Distributed              • CSCW
  computing:                 – Input immediately
   – More reads (output)       delivered without
     favor replicated          distributed commitment.
   – More writes (input)     – Floor control or
     favor centralized         operation transformation
                               for correctness




                                                   37
  Bandwidth Usage in Replicated vs.
            Centralized
• Remote I/O bandwidth only an issue when
  network bandwidth < 4MBps (Nieh et al
  „2000)
  – DSL link = 1 Mbps
• Input in replication less than output
     • Input produced by humans
     • Output produced by faster computers



                                             38
Feedback in Replicated vs. Centralized
• Replicated: Computation time on local computer
• Centralized
   – Local user
      • Computation time on local computer
   – Remote user
      • Computation time on hosting computer plus roundtrip time
   – In server/ service model an extra LAN/ WAN link




                                                                   39
    Influence of communication cost
• Window sharing remote feedback
   – Noticeable in NetMeeting.
   – Intolerable in PlaceWare‟s service model.
• Powerpoint presentation feedback time
   – not noticeable in Groove & Webex replicated model.
   – noticeable in NetMeeting for remote user.
• Not typically noticeable in Sync with shared model
• Depends on amt of communication with remote site
   – Which depends on shared layer



                                                          40
Case Study: Colab. Video Viewing




                               41
  Case Study: Collaborative Video
  Viewing (Cadiz, Balachandran et al. 2000)
• Two users collaboratively
  executing media player
  commands
• Centralized NetMeeting
  sharing added unacceptable
  video latency
• Replicated architecture
  created using T 120 later
• Part of problem in centralized
  system sharing video through
  window layer




                                              42
    Influence of Computation Cost
• Computation intensive apps
  – Replicated case: local computer‟s computation
    power matters.
  – Central case: central computer‟s computation
    power matters
  – Central architecture can give better feedback,
    especially with a fast network [Chung and Dewan '99]
  – Asymmetric computation power => asymmetric
    architecture (server/desktop, desktop/PDA)

                                                  43
                         Feedthrough
• Time to show results at remote site.
• Replicated:
   – One-way input communication time to remote site.
   – Computation time on local replica
• Centralized:
   – One-way input communication time to central host
   – Computation time on central host
   – One-way output communication time to remote site.
• Server/service models add latency
• Less significant than remote feedback:
   – Active user not affected.
• But must synchronize with audio
   – “can you see it now?”




                                                         44
          Task completion time
• Depends on
   – Local feedback
       • Assuming hosting user inputs
   – Remote feedback
       • Assuming non hosting user inputs
       • Not the case in presentations, where centralized favored
   – Feedthrough
       • If interdependencies in task
       • Not the case in brainstorming, where replicated favored
   – Sequence of user inputs
• Chung and Dewan ‟01
   – Used Mitre log of floor exchanges and assumed interdependent tasks
   – Task completion time usually smaller in replicated case
   – Asymmetric centralized architecture good when computing power
     asymmetric (or task responsibility asymmetric?).

                                                                    45
             Scalability and Load
• Centralized architecture with powerful server more suitable.
• Need to separate application execution from distribution.
   – PlaceWare
   – Webex
• Related to firewall traversal. More later.
• Many collaborations do not require scaling
   – 2-3 collaborators in joint editing
   – 8-10 collaborators in CAD tools (NetMeeting Usage Data)
   – Most calls are not conference calls!
• Adapt between replicated and centralized based on #
  collaborators
   – PresenceAR goals


                                                               46
          Display Consistency
• Not an issue with floor control systems.
• Other systems must ensure that concurrent input
  appears to all users to be processed in the
  same (logical) order.
• Automatically supported in central architecture.
• Not so in replicated architectures as local input
  processed without synchronizing with other
  replicas.

                                                      47
   Synchronization Problems
     abc                  abc
     dabc                 aebc
     deabc                daebc


 Program               Program
         Insert d,1            Insert e,2
         Insert e,2            Insert d,1
Input                 Input
Distributor           Distributor
         Insert d,1               Insert e,2

    UI                    UI


   User 1                User 2
                                          48
          Peer to peer Merger
     abc                                         abc
     dabc                                        aebc
     daebc                                       daebc


 Program                                      Program
         Insert d,1                                   Insert e,2
         Insert e,3                                   Insert d,1
Input                                        Input
                      Merger        Merger
Distributor                                  Distributor
         Insert d,1                                      Insert e,2

    UI                                           UI
                      Ellis and Gibbs „89,
                      Groove, …
   User 1                                       User 2
                                                                 49
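The peer-to-peer merger above transforms User 2's remote "Insert e,2", received at User 1 after the local "Insert d,1", into "Insert e,3", so that both replicas converge on "daebc". A minimal Java sketch of that inclusion transformation for inserts only; deletes, position ties, and site priorities, which full algorithms in the Ellis and Gibbs tradition must handle, are ignored here.

// Simplified inclusion transformation for the example above (1-based positions).
class Insert {
    final char ch; final int pos;
    Insert(char ch, int pos) { this.ch = ch; this.pos = pos; }

    // Shift a remote insert whose position falls after a concurrent local
    // insert; equal positions would need a site-priority tie-break.
    Insert transformAgainst(Insert local) {
        return pos > local.pos ? new Insert(ch, pos + 1) : this;
    }

    String applyTo(String s) {
        return s.substring(0, pos - 1) + ch + s.substring(pos - 1);
    }

    public static void main(String[] args) {
        Insert d1 = new Insert('d', 1), e2 = new Insert('e', 2);
        // User 1 applies its own op, then the transformed remote op.
        String user1 = e2.transformAgainst(d1).applyTo(d1.applyTo("abc"));
        // User 2 applies its own op, then the (unchanged) remote op.
        String user2 = d1.transformAgainst(e2).applyTo(e2.applyTo("abc"));
        System.out.println(user1 + " == " + user2);   // daebc == daebc
    }
}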
  Local and Remote Merger
     abc                   Merger                     abc
     dabc                                             aebc
     daebc                                            daebc


 Program                                            Program
         Insert d,1                                        Insert e,2
         Insert e,3                                        Insert d,1
Input                 Merger      Merger        Input
Distributor                                     Distributor
         Insert d,1                                           Insert e,2

    UI           • Curtis et al ‟95, LiveMeeting,     UI
                   Vidot „02
                 • Feedthrough via extra WAN
                   Link
   User 1        • Can recreate state through        User 2
                   central site                                       50
          Centralized Merger
     abc                   Merger                      abc
     dabc                                              aebc
     daebc                                             daebc


 Program                                            Program
         Insert d,1                                         Insert e,2
         Insert e,3                                         Insert d,1
Input             • Munson & Dewan „94             Input
                  • Asynchronous and
Distributor         synchronous                    Distributor
                      – Blocking remote merge
         Insert d,1                                            Insert e,2
                  • Understands atomic
    UI              change set
                  • Flexible remote merge              UI
                    semantics
                      – Modify or delete can win
   User 1                                             User 2
                                                                       51
Merging vs. Concurrency Control
• Real-time merging is called optimistic
  concurrency control.
• This is a misnomer because it does not support
  serializability.
• More on this later.




                                         52
Reading Centralized Resources
                    f   ab             Central
                                       bottleneck!


 Program                                 Program
          read "f"                                read "f"


Input                                  Input
Distributor                            Distributor
         read “f”
                          Read file
    UI                                      UI
                          operation
                          executed
                        infrequently
   User 1                                 User 2
                                                         53
Writing Centralized Resources
                          f   ab
                              abcc   Multiple
                                     writes


 Program                               Program
          write "f", "c"                     write "f", "c"


Input                                Input
Distributor                          Distributor
         write“f”, “c”

    UI                                    UI


   User 1                               User 2
                                                     54
          Replicating Resources
f      abcc                                       f    abcc



     Program                                          Program
                write f, “c”                                  write “f”, “c”


    Input                                         Input
    Distributor                                   Distributor
              write“f”, “c”

        UI            • Groove Shared Space             UI
                        &Webex replication
                      • Pre-fetching
                      • Incremental replication
       User 1           (diff-based) in Groove         User 2
                                                                      55
 Non Idempotent Operations
                            msg
                            msg


 Program                           Program
         mail joe, msg                     mail joe, msg


Input                             Input
Distributor                       Distributor
            mail joe, msg

    UI                                UI


   User 1                            User 2
                                                  56
  Separate Program Component
                       Program‟                     msg

                               mail joe, msg
     Program                                                     Program
insert d, 1                                            insert d, 1

   Input                                                        Input
   Distributor                                                  Distributor
 insert d,1      •   Groove Bot: Dedicated machine for
                     external access
                 •   Only some users can invite Bot in shared
          UI         space                                           UI
                 •   Only some users can invoke Bot
                     functionality
                 •   Bot data can be given only to some users
                 •   Similar idea of special “externality
                     proxy” in Begole 01
        User 1                                                     User 2
                                                                              57
Two-Level Program Component
                                  Program++              msg
                     insert d,1    mail joe, msg


                                                                     insert d,1

      Program                                              Program

insert d,1        mail joe, msg

             UI                                                 UI
                      •   Dewan & Choudhary ‟92, Sync,
                          LiveMeeting
                      •   Extra comm. hop and
                          centralization
        User 1        •   Easier to implement                  User 2
                                                                              58
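One way to read the preceding two slides, as a hedged Java sketch (names invented here, not Groove's or Sync's actual interfaces): idempotent state changes are applied at every replica, while a non-idempotent operation such as sending mail is forwarded only to a single shared external component, so that it executes exactly once.

// Illustrative two-level split: replicated state changes vs. one external component.
import java.util.ArrayList;
import java.util.List;

interface Peer { void applyInsert(char ch, int pos); }

class ExternalComponent {                 // one instance per session (cf. the Groove Bot)
    void mail(String to, String msg) {
        System.out.println("mail sent once to " + to + ": " + msg);
    }
}

class Replica implements Peer {
    private final StringBuilder text = new StringBuilder("abc");
    private final List<Peer> peers = new ArrayList<>();
    private final ExternalComponent external;
    Replica(ExternalComponent external) { this.external = external; }
    void connect(Peer p) { peers.add(p); }

    // Local user input: apply locally, then broadcast to every other replica.
    void localInsert(char ch, int pos) {
        applyInsert(ch, pos);
        for (Peer p : peers) p.applyInsert(ch, pos);
    }
    public void applyInsert(char ch, int pos) { text.insert(pos - 1, ch); }  // 1-based, as in the slides

    // Non-idempotent request: forwarded only to the shared component,
    // never broadcast, so the mail is sent exactly once.
    void localMail(String to, String msg) { external.mail(to, msg); }
}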
        Classifying Previous Work
• Shared layer
   – X Windows (XTV)                 Shared
   – Microsoft Windows (NetMeeting Layer
   App Sharing)
   – VNC Framebuffer (Shared VNC)
                                                  Rep vs.
   – AWT Widget (Habanero, JCE)                   Central
   – Model (Suite, Groove, PlaceWare,)
• Replicated vs. centralized
   – Centralized (XTV, Shared VNC, NetMeeting App. Sharing,
     Suite, PlaceWare)
   – Replicated (VConf, Habanero, JCE, Groove, NetMeeting
     Whiteboard)
                                                          59
              Layer-specific
•   So far, layer-independent discussion.
•   Now concrete layers to ground discussion
•   Screen sharing
•   Window sharing
•   Toolkit sharing
•   Model sharing

                                               60
        Centralized Window Architecture
   Window Client
a ^, w1, x, y    draw a, w1, x, y

 Output Broadcaster
 & I/O Relayer                         draw a, w, x, y

                   a ^, w, x, y
                                         I/O Relayer                 I/O Relayer

                draw a, w1, x, y    a ^, w2, x, y draw a, w2, x, y         draw a, w3, x, y

    Win. Server                         Win. Server                  Win. Server
                                       Press a

                                                                                 61
        User 1                            User 2                        User 3
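A small sketch of the translation the per-user I/O relayers in the diagram above perform: each relayer maps between the window ids used by the central window client and the ids of the corresponding local windows (Java, with invented names, purely for illustration).

// Window-id translation in a per-user I/O relayer (illustrative only).
import java.util.HashMap;
import java.util.Map;

class WindowIdTranslator {
    private final Map<String, String> sharedToLocal = new HashMap<>();
    private final Map<String, String> localToShared = new HashMap<>();

    void map(String sharedId, String localId) {
        sharedToLocal.put(sharedId, localId);
        localToShared.put(localId, sharedId);
    }
    // Outgoing input: local window id -> shared id understood by the central client.
    String toShared(String localId) { return localToShared.get(localId); }
    // Incoming output: shared id -> this user's local window id.
    String toLocal(String sharedId) { return sharedToLocal.get(sharedId); }
}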
             UI Coupling in Centralized
                   Architecture
Window Client           •   Existing approach
                             – T 120, PlaceWare
                        •   UI coupling need not be
                            supported
Output Broadcaster           – XTV
& I/O Relayer++
                                                         move w3
          move w
                      I/O Relayer                     I/O Relayer

                     move w2                                move w3
move w1

 Win. Server         Win. Server                      Win. Server
                     move w2

                                                                   62
    User 1             User 2                            User 3
 Distributed Architecture for UI Coupling
 Window Client             • Need multicast server at
                           each LAN
                           • Can be supported by T 120
Output Broadcaster
& I/O Relayer++
         move w1
                     I/O Relayer ++        move w3      I/O Relayer++
                     move w1
         move w1                                               move w

 Win. Server         Win. Server                        Win. Server
                     move w1

                                                                    63
    User 1             User 2                              User 3
     Two Replication Alternatives

     S             S                         S      S



    S -1           S -1                     S -1    S -1


• Replicate d in S by
    – S-1 sending input events to all S instances
    – S sending events directly to all peers
• Direct communication allows partial sharing (e.g. windows)
• Harder to implement automatically by infrastructure

                                                               64
          Semantic Issue
• Should window positions be coupled?
• Leads to window wars (Stefik et al ‟85)
• Can uncouple windows
  – Cannot refer to the “upper left” shared
    window
• Compromise
  – Create a virtual desktop for physical desktop
    of a particular user

                                               65
UI Coupling and Virtual Desktop




                              66
           Raw Input with Virtual Desktop
   Window Client
a ^, w1, x, y   draw a, w1, x, y                              Knows about
                                                             virtual desktop
 Output Broadcaster I/O
 Relayer & VD                               draw a, x‟, y‟
  a ^, x‟, y‟

                            VD & I/O Relayer                          VD & I/O Relayer

                draw a, w1, x, y a ^, w2, x‟, y‟ draw a, w2, x‟, y‟             draw a, w3, x‟, y‟


    Win. Server                        Win. Server                         Win. Server
                                       Press a

                                                                                        67
        User 1                            User 2                               User 3
         Translation without Virtual Desktop
   Window Client
a ^, w1, x, y   draw a, w1, x, y

 Output Broadcaster, I/O
 Relayer & Translator
a ^, w1, x, y

                            I/O Relayer                             I/O Relayer

                draw a, w1, x, y   a ^, w2, x, y draw a, w2, x, y                 draw a, w3, x, y


    Win. Server                        Win. Server                        Win. Server
                                      Press a

                                                                                        68
        User 1                           User 2                              User 3
Coupled Expose Events: NetMeeting




                               69
            Coupled Exposed Regions
  Window Client
expose w    draw w
                                      T 120 (Virtual Desktop)
Output Broadcaster I/O
Relayer & VD
expose w
                     VD & I/O Relayer            VD & I/O Relayer
                                  expose w                      expose w
            draw w                draw w                        draw w

  Win. Server              Win. Server                 Win. Server

 front w3
                                                                    70
      User 1                 User 2                        User 3
Coupled Expose Events: PlaceWare




                               71
           Uncoupled Expose Events
                                      • XTV (no Virtual Desktop)
 Window Client
expose w    draw w                    • expose event not broadcast
                                      so remote computers do not
Output Broadcaster I/O                blacken region
Relayer & VD
                                      • Potentially stale data
expose w

                     VD & I/O Relayer              VD & I/O Relayer
           draw w                      draw w                       draw w


 Win. Server               Win. Server                    Win. Server

front w3
                                                                          72
     User 1                  User 2                              User 3
     Uncoupled Expose Events
• A centralized collaboration-transparent app draws to the
  areas of the last user who sent an expose event.
   – It may be sent only local expose events.
• If it redraws the entire window anyway, everyone
  is coupled.
• If it draws only the exposed areas:
   – Send the draw request only to the inputting user.
   – Would work as long as unexposed but visible regions are not
     changing.
   – Assumes a draw request can be associated with an expose
     event.
• To support this accurately, the system needs to send it the
  union of the exposed regions received from the multiple
  users.
                                                            73
          Window-based Coupling
                         • Mandatory
• Couplable properties       – Window sizes
                             – Window contents
   –   Size
                         • Optional
   –   Contents              – Window positions
   –   Positions             – Window stacking order
                             – Window exposed regions
   –   Stacking order    • Optional can be done with or
   –   Exposed regions     without virtual desktop
                             – Remote and local windows
• In shared window             could mix, rather than have
                               remote windows embedded in
  system some must be          Virtual Desktop window.
  coupled and others         – Can lead to “window wars”
                               (Stefik et al ‟87)
  may be.
                                                          74
Example of Minimal Window Coupling




                               75
                Replicated Window Architecture
       Program                         Program                             Program
a ^, w1, x, y               a ^, w2, x, y                            a ^, w3, x, y


                      a ^, w1, x, y                      a ^, w3, x, y
     Input                            Input                               Input
     Distributor                      Distributor                         Distributor
                                a ^, w2, x, y
                 draw a, w2, x, y                 draw a, w2, x, y                    draw a, w3, x, y

            UI                               UI                                  UI
                                      Press a

          User 1                            User 2                            User 3          76
     Replicated Window Architecture
            with UI Coupling
 Program                 Program               Program



              move w                 move w
Input                  Input                  Input
Broadcaster            Broadcaster            Broadcaster
                       move w
         move w                                     move w

    UI                      UI                    UI
                        move w

   User 1                  User 2               User 3       77
           Replicated Window Architecture
                with Expose coupling
    Program                    Program                       Program
expose w                 expose w                        expose w


                 expose w                     expose w
  Input                      Input                         Input
  Distributor                Distributor                   Distributor
                            expose w
                draw w                     draw w                        draw w

           UI                       UI                              UI
                             move w2

       User 1                    User 2                        User 3       78
    Replicated Window System
• Centralized only implemented commercially
   – NetMeeting
   – PlaceWare
   – Webex
• Replicated can offer more efficiency and pass
  through firewalls limiting large traffic
      • Must be done carefully to avoid correctness problems
      • Harder but possible at window layer
          – Chung and Dewan ‟01
          – Assume floor control as centralized systems do
          – Also called intelligent app sharing

                                                               79
             Screen Sharing
• Sharing the screen client
  – Window system (and all applications running
    on top of it)
  – Cannot share windows of subset of apps
  – Share complete computer state
  – Lowest layer gives coarsest sharing granularity.



                                                   80
Sharing the (VNC) Framebuffer Layer




                                  81
       VNC Centralized Frame Buffer
                Sharing
 Window Client


 Win. Server


Output Broadcaster
& I/O Relayer
                                       draw pixmap rect
                                         (frame diffs)
               key events
               mouse events
                              I/O Relayer                 I/O Relayer


 Framebuffer                  Framebuffer                 Framebuffer
                                                                 82
    Replicated Screen Sharing?

• Replication hard if not impossible
  – Each computer runs a framebuffer server and
    shared input
  – Requires replication of entire computer state
     • Either all computers are identical and receive same
       input from when they were bought
     • Or at start of sharing session download one
       computer‟s entire environment
• Hence centralization with virtual desktop
                                                         83
      Sharing pixmaps vs. drawing
              operations
•   Potentially larger size                    •   Smaller size
•   Obtaining pixmap changes difficult         •   Obtaining drawing operations easy
     –   Do framebuffer diffs                       – Create proxy that traps them
     –   Put hooks into window system
     –   Do own translation                    •   Many output operations
•   Single output operation                    •   Non-standard operations
•   Standard operation                         •   Fonts, colormaps etc need to be
•   No context needed for interpretation           replicated
•   Multiple operations can be coalesced            – Reliable protocol needed
    into single pixmap                              – Possible non standard operations
     –   Per-user coalescing and compression          for distributing state
     –   Based on network congestion and            – Session initiation takes longer
         computation power of user
                                               •   Compression but not coalescing
•   Pixmap can be compressed
                                                   possible




                                                                                         84
          T. 120 Mixed Model
• Send either drawing operation or pixmap.
• Pixmap sent when
   – Remote site does not support operation
   – Multiple graphic operations need to be combined into
     single pixmap because of network congestion or
     computation overload
• Feedthrough and fidelity of pixmaps only when
  required
• More complex – mechanisms and policies for
  conversion
                                                            85
             Pixmap compression
• Combine pixmap updates to overlapping regions into one
  update.
   – In VNC, diffs of the framebuffer are computed.
   – In T 120, rectangles are computed from updates.
• When the data already exist, send the x,y of the source (VNC and
  T 120)
   – Scrolling and moving windows
   – Function of pixmap cache size
• Diffs with previous rows of the pixmap (T 120)
• Single color with pixmap subrectangles (VNC)
   – Background with foreground shapes
• JPEG for still data, MPEG for moving data
• A larger number of operations conflicts with interoperability.
• Reduces statelessness
   – Efficiency gain vs. loss                                 86
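A toy Java sketch of the first item above: combining updates by diffing the framebuffer tile by tile into dirty rectangles. Tile size and representation are arbitrary choices here, not VNC's actual encoding.

// Toy framebuffer diff: compare old and new pixels tile by tile and emit one
// dirty rectangle per changed tile; a real encoder would also merge adjacent
// tiles and apply copy-rect and colour encodings.
import java.util.ArrayList;
import java.util.List;

class Rect {
    final int x, y, w, h;
    Rect(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
}

class FramebufferDiffer {
    static List<Rect> diff(int[][] oldFb, int[][] newFb, int tile) {
        List<Rect> dirty = new ArrayList<>();
        int height = newFb.length, width = newFb[0].length;
        for (int y = 0; y < height; y += tile) {
            for (int x = 0; x < width; x += tile) {
                int h = Math.min(tile, height - y), w = Math.min(tile, width - x);
                if (tileChanged(oldFb, newFb, x, y, w, h)) dirty.add(new Rect(x, y, w, h));
            }
        }
        return dirty;                       // only these rectangles need to be sent
    }
    private static boolean tileChanged(int[][] a, int[][] b, int x, int y, int w, int h) {
        for (int j = y; j < y + h; j++)
            for (int i = x; i < x + w; i++)
                if (a[j][i] != b[j][i]) return true;
        return false;
    }
}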
      T 120 Drawing Operation
            compression
• Identify operands of previous operations
  (within some history) rather than send new
  value (T 120)
  – E.g. Graphics context often repeated
• Both kinds of compression useless when
  bandwidth abundant
  – But can unduly increase latency.

                                               87
      T 120 Pointer Coalescing
• Multiple input pointer updates combined into one
• Multiple output pointer updates combined into
  one.
• Reduced user experience
• Bandwidth usage of pointer updates small.
• Reduce jitter in variable latency situations.
   – If events are time stamped
• Consistent with not sending incremental
  movements and resizing of shapes in whiteboards.
                                                     88
        Flow Control Algorithms
• T 120 push-based approach
   – Sender pushes data to a group of receivers
   – Compares the end-to-end rate for the slowest receiver by looking at the
     application queue
   – Works with overlays (firewalls)
   – Adapts compression and coalescing based on this
   – Very slow computers leave the collaboration.
• VNC pull-based approach
   –   Each client pulls data at its consumption rate
   –   Gets diffs since the last pull, with no intermediate points
   –   Per-client diffs must be maintained
   –   Data might be sent along the same path multiple times
   –   Could replicate updates at all LANs (federations) [Chung 01]




                                                                              89
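A hedged sketch of the pull-based scheme: the server keeps pending updates per client, and each client's pull returns everything queued since its previous pull, so a slow client simply receives larger, less frequent responses. Class names are invented; this is not VNC's implementation.

// Per-client pull-based flow control: each client consumes at its own rate.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Update { final String region; Update(String region) { this.region = region; } }

class PullServer {
    private final Map<String, List<Update>> pendingPerClient = new HashMap<>();

    void register(String clientId) { pendingPerClient.put(clientId, new ArrayList<>()); }

    // Every new change is queued for every registered client.
    void publish(Update u) {
        for (List<Update> pending : pendingPerClient.values()) pending.add(u);
    }

    // A client pulls at its own consumption rate and receives everything queued
    // since its last pull; a real server would coalesce these into one diff
    // with no intermediate states.
    List<Update> pull(String clientId) {
        List<Update> pending = pendingPerClient.get(clientId);
        List<Update> response = new ArrayList<>(pending);
        pending.clear();
        return response;
    }
}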
          Experimental Data
• Pull-based vs. Push-based flow control
• Sharing pixmaps vs. drawing operations
• Replicated vs. centralized architecture




                                            90
       Remote Feedback Experiments
 Window Client                    • Nieh et al, 2000: Remote
                                    single-user access experiments.
                                     – VNC
Master I/O Distributor               – RDP (T. 120 based)
                                  • Measured
                                     – Latency (Remote feedback time)
                                     – Data transferred
          Slave I/O Distributor
                                  • Give idea of performance seen
                                    by remote user in centralized
                                    architecture, comparing
                                     – Sharing of pixmap vs.drawing
 Win. Server       Win. Server         operations
                                     – Pull-based vs. no flow control

                                                                        91
                     User 1
   High Bandwidth Experiments
• Letter A
   – Latency
      • VNC (Linux): 60 ms
      • RDP (Win2K, T 120-based): 200 ms
   – Data transferred
      • VNC: 0.4 KB
      • RDP: 0.3 KB
      • Previewers send text as bitmaps (Hanrahan)
• Red box fill
   – Latency
      • VNC (Linux): 100 ms
      • RDP (Win2K, T 120-based): 220 ms
   – Data transferred
      • VNC: 1.2 KB
      • RDP: 0.5 KB
• Compression reduces the data transferred but increases latency
                                                               92
          Web Page Experiments
• Time to execute a web page         • Load time
  script                                 – 128 Kbps
    – Load 54*2 pages (text and              • RDP 297s
      bitmaps)                               • VNC 25s
    – Scroll down 200 pixels         • Data transferred
    – Common parts: blue left            – 100 Mbps
      column, white background, PC           • Web browser 2MB
      magazine logo                          • RDP 12MB
• Load time                                  • VNC 4MB
    – 4-100 Mbps < 50 seconds            – 128 Kbps
    – 100 MBps                               • RDP 12 MB
        • RDP: 35s                           • VNC 1MB
        • VNC: 24s                   • Data loss reduces load time


                                                                     93
        Animation Experiments
• 98 KB Macromedia Flash   • Data transferred
  315 550x400 frames           – 100 Mbps
                                   • RDP: 3MB
• FPS                              • VNC: 2.5MB
   – 100 Mbps                  – 512 kbps
      • RDP: 18                    • RDP: 2MB
      • VNC: 15                    • VNC: 1.2MB
   – 512 kbps                  – 128 kbps
      • RDP: 8                     • RDP: 2MB
                                   • VNC: 0.3MB
      • VNC: 15
                           • 18 fps acceptable, < 8fps intolerable
   – 128 Kbps
      • RDP: 2             • Data loss increases fps
      • VNC: 16            • LAN speed required for tolerable
                             animations


                                                            94
    Cyclic Animation Experiments
• Wong and Seltzer 1999,          • GIF banner and scrolling
  RDP Win NT                        news ticker simultaneously
• Animated 468x60 pixel              – 1.60 Mbps
  GIF banner                      • Client side cache of pixmaps
    – 0.01 mbps                      – Cache not big enough to
• Animated scrolling news              accommodate both animations
  ticker                             – LRU policy not ideal for cyclic
                                       animations
    – 0.01mbps
                                  • 10 Mbps can accommodate
•   Bezier screen saver             only 5 users
    – 10 bezier curves repeated
    – 0.1 mbps                    • Load put by other UI
                                    operations?


                                                                 95
      Network Loads of UI Ops
• Wong and Seltzer 1999,   • Menu navigation
  RDP Win NT                  – Depth-first selection from
• Typing                        Windows start menu: 1.17
                                Kbps
   – 75 wpm word typist
                              – Alt right arrow in word:
     generated 6.26 kbps
                                39.82 Kbps
• Mousing                     – Office 97 with animation:
   – Random, continuous:        48.88 KBps
     2Kbps
                           • Scrolling
   – Usefulness of mouse
                              – Word document, PG down
     filtering in T 120?
                                key held: 60 kbps

                                                             96
    Relative Occurrence of Operations
• Danshkin, Hanrahan ‟94, X        •   Bytes used
• Two 2D drawing programs              1. Images
• Postscript previewer                      1. 53 bytes avg size
                                            2. BW bitmap rectangles
• X11 perf benchmark
                                       2. Geometry
• 5 grad students doing daily               1. Half clearing letter
  work                                         rectangles
• Most output responses are            3. Text
  small.                               4. Window enter and leave
    – 100 bytes                        5. Mouse, Font, Window
    – TCP/IP adds 50% overhead            movement, etc events
                                          negligible
• Startup lots of overhead ~ 20s
                                   •   Grad students vs. real people?


                                                                        97
    User Classes vs. Load & Bandwidth
                  Usage
•    Terminal services study                  •   Simulation scripts run to measure
•    Knowledge Worker                             how many of each class can be
      –   Makes own work                          supported before 10% degradation
      –   Marketing, authoring
                                                  in server response
                                                    –   2x Pentium III Xeon 450 MHz
      –   Excel, outlook, IE, word
                                                   –   40 structured task workers
      –   Keeps apps open all the time
                                                   –   70 knowledge workers
•    Structured task worker
                                                   –   320 Data entry workers
      – Claims processing, accts payable
                                                   –   In central architecture, perhaps
      – Outlook, word                                  separate multicaster
      – Uses each app for less time,          •   Network utilization
        closing and opening apps
                                                   – Structured task: 1950 bps
•    Data Entry worker
                                                   – Knowledge worker: 1200 bps
      – Transcription, typists, order entry
                                                   – Data entry: 495 bps
      – SQL, forms
                                              •   Encryption has little effect



                                                                                          98
      Regular vs. Bursty Traffic
• Droms and Dyksen ‟90, X traffic
• Regular
   – 8 hour systems programmer usage
       • 236 bps, 1.58 packets per second
   – Compares well with network file system traffic
• Bursts
   – 40,000 bps, 100 pps
   – Individual apps
       • Twm and xwd > 100, 000 bps, 100 pps
       • Xdvi, 60,000 bps, 90 pps
   – Comparable to animation loads
• Bandwidth requirements as much as remote file system

                                                         99
           Bandwidth in Replicated vs.
                 Centralized
• Input in replication less data than output
    – Several mouse events could be discarded
    – Output could be buffered.
• X Input vs. Output (Ahuja ‟90)
    – Unbuffered: 6 times as many messages sent in centralized
    – Buffered: 3.6 times as many messages sent
    – Average input and output message size: 25 bytes
         • RDP each keystroke message 116 bytes
         • Letter a, box fill, text scroll: < 1 Kb
         • Bitmap load: 100 KB




                                                                 100
 Generic Shared Layers Considered

• Framebuffer
• Window




                                101
             Shared Widgets
• Layer above window is Toolkit
• Abstractions offered
  – Text
  – Sliders
  – Other “Widgets”




                                  102
Sharing the (Swing) Toolkit Layer




     • Different window sizes
     • Different looks and feel
     • Independent scrolling        103
Window Divergence
               • Independent scrolling
               • Multiuser scrollbar
               • Semantic telepointer




                              104
                    Shared Toolkit
• Unlike window system, toolkit not a network layer
• So more difficult to intercept I/O
• Input easier by subscribing to events, and hence popular
  replicated implementations done for Java AWT & Swing
    – Abdel-Wahab et al 1994 (JCE), Chabert et al 1998 (NCSA's
      Habanero), Begole 01
   – GlassPane can be used in Swing
        • A frame can be associated with a glass pane whose transparent property
          is set to true
        • Mouse and keyboard events sent to glass pane
• Centralized done for Java Swing by intercepting output and
  input (Chung ‟02)
   –   Modified JComponent constructor to turn debug option on
   –   Graphics object wrapped in DebugGraphics object
   –   DebugGraphics class changed to intercept actions
    –   Cannot modify Graphics as it is an abstract class subclassed by
        platform-dependent classes
                                                                            105
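A hedged sketch of the glass-pane interception mentioned above: a transparent component installed as the frame's glass pane sees mouse events first, can broadcast them to the other replicas, and then redispatches them to the component underneath. The EventBroadcaster interface is invented for illustration.

// Input interception via a Swing glass pane (mouse events only in this sketch).
import java.awt.Component;
import java.awt.Point;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JComponent;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

interface EventBroadcaster { void broadcast(MouseEvent e); }

class InterceptingGlassPane extends JComponent {
    InterceptingGlassPane(JFrame frame, EventBroadcaster broadcaster) {
        setOpaque(false);                                // stays transparent
        addMouseListener(new MouseAdapter() {
            @Override public void mousePressed(MouseEvent e) {
                broadcaster.broadcast(e);                // send to peers first
                redispatch(frame, e);                    // then deliver locally
            }
        });
    }
    private void redispatch(JFrame frame, MouseEvent e) {
        Point p = SwingUtilities.convertPoint(this, e.getPoint(), frame.getContentPane());
        Component target =
            SwingUtilities.getDeepestComponentAt(frame.getContentPane(), p.x, p.y);
        if (target != null)
            target.dispatchEvent(SwingUtilities.convertMouseEvent(this, e, target));
    }
}

// Installation: frame.setGlassPane(new InterceptingGlassPane(frame, broadcaster));
//               frame.getGlassPane().setVisible(true);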
                  Shared Toolkit
• Shared toolkits are not widely available commercially.
• Intermediate point between model and window sharing.
• Like model sharing
   – Independent window sizes and scrolling
   – Concurrent editing of different widgets
   – Merging of concurrent changes to replicated text widget
• Like window sharing
   – No new programming model/abstractions
   – Existing programs




                                                               106
            Replicated Widgets
     abc                         abc
     adbc                        adbc

 Program                     Program
      Insert w, d,1                    Insert w,d,1

Input                       Input
Distributor                 Distributor
      Insert w, d,1

  Toolkit                    Toolkit


   User 1                      User 2
                                               107
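A hedged Swing sketch of the replicated text widget above: a DocumentListener broadcasts local insert operations, and remote inserts are applied to the local document. The broadcast transport and the merging discussed earlier are omitted, and the names are invented.

// Replicated text widget sketch: local inserts are broadcast, remote inserts
// are applied locally; concurrent-edit merging is left out.
import javax.swing.JTextArea;
import javax.swing.event.DocumentEvent;
import javax.swing.event.DocumentListener;
import javax.swing.text.BadLocationException;

interface InsertBroadcaster { void broadcast(int offset, String text); }

class ReplicatedTextWidget implements DocumentListener {
    private final JTextArea area;
    private final InsertBroadcaster broadcaster;
    private boolean applyingRemote = false;          // suppress re-broadcast loops

    ReplicatedTextWidget(JTextArea area, InsertBroadcaster broadcaster) {
        this.area = area;
        this.broadcaster = broadcaster;
        area.getDocument().addDocumentListener(this);
    }

    public void insertUpdate(DocumentEvent e) {      // fired for every insert
        if (applyingRemote) return;                  // ignore inserts we applied ourselves
        try {
            String text = e.getDocument().getText(e.getOffset(), e.getLength());
            broadcaster.broadcast(e.getOffset(), text);
        } catch (BadLocationException ignored) { }
    }
    public void removeUpdate(DocumentEvent e) { }    // deletes omitted in this sketch
    public void changedUpdate(DocumentEvent e) { }

    // Called by the network layer (on the event dispatch thread) for remote inserts.
    void applyRemoteInsert(int offset, String text) {
        applyingRemote = true;
        try { area.getDocument().insertString(offset, text, null); }
        catch (BadLocationException ignored) { }
        finally { applyingRemote = false; }
    }
}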
  Sharing the Model Layer




• The same model can be bound to different
  widgets!
• Not possible with toolkit sharing
                                              108
Sharing the Model Layer
                      Program
        Model         Component/
                      Model



                      Increasing
                      Abstraction

        Toolkit
                      User-
                      Interface
        Window        Component


        Framebuffer


                              109
            Sharing the Model Layer
                                        Program
                    Model               Component/
Cost of                                 Model
accessing
remote
model                                   Increasing
                 View      Controller
                                        Abstraction

                        Toolkit
                                        User-
                                        Interface
                    Window              Component


                    Framebuffer


                                                110
            Sharing the Model Layer
Send                                    Program
                    Model               Component/
changed                                 Model
model state
in notification

                 View      Controller   Increasing
                                        Abstraction

                        Toolkit
                                        User-
                                        Interface
                    Window              Component


                    Framebuffer


                                                111
          Sharing the Model Layer
                                      Program
                  Model               Component/
No standard                           Model
protocol

               View      Controller   Increasing
                                      Abstraction

                      Toolkit
                                      User-
                                      Interface
                  Window              Component


                  Framebuffer


                                              112
             Centralized Architecture
   Program

                     Output Broadcaster and
Output Broadcaster   relayers cannot be
& I/O Relayer        standard


                     I/O Relayer              I/O Relayer



      UI                 UI                        UI


                                                        113
    User 1              User 2                     User 3
            Replicated Architecture
 Program          Program                  Program



Input            Input                    Input
Broadcaster      Broadcaster              Broadcaster



    UI               UI                        UI
                               Input broadcaster
                               cannot be
   User 1           User 2     standard       User 3    114
Model Collaboration Approaches
• Communication facilities of varying
  abstractions for manual implementation.
• Define Standard I/O for MVC
• Replicated types
• Mix these abstractions



                                            115
Unstructured Channel Approach
• T 120 and other multicast approaches
  – Used for data sharing in whiteboard
• Provide byte-stream based IPC primitives
• Add multicast to session capability
• Programmer uses these to create relayers
  and broadcasters


                                             116
                       RPC
• Communicate PL types rather than unstructured
  byte streams
   – Synchronous or asynchronous
• Use RPC
   – Many Java based colab platforms use RMI




                                                  117
                       M-RPC
• Provide multicast RPC (Greenberg and Marwood
  ‟92, Dewan and Choudhary ‟92) to subset of sites
  participating in session:
   –   processes of programmer-defined group of users
   –   processes of all users in session
   –   processes of users other than current inputter
   –   current inputter
   –   all processes of specific user
   –   specific process
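
A sketch, with invented names, of how a multicast-RPC facade might look to the programmer: the caller names a destination set from the list above rather than individual sites.

// Hypothetical multicast-RPC facade; the infrastructure fans each call out
// to the processes selected by the destination set.
enum Destination { ALL_IN_SESSION, OTHERS, CURRENT_INPUTTER, SPECIFIC_USER, SPECIFIC_PROCESS }

interface MulticastRpc {
    void invoke(Destination dest, String method, Object... args);
}

class IdeaBoard {
    private final MulticastRpc mrpc;
    IdeaBoard(MulticastRpc mrpc) { this.mrpc = mrpc; }

    void insertIdea(String idea) {
        // multicast the call to every other process in the session (local update elided)
        mrpc.invoke(Destination.OTHERS, "insertColouredIdea", "red", idea);
    }
}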
                                                        118
            GroupKit Example

proc insertIdea {idea} {
    # update the local replica (blue), then ask the other replicas to apply the same update (red)
    insertColouredIdea blue $idea
    gk_toOthers "insertColouredIdea red $idea"
}




                                                     119
Model Collaboration Approaches
• Communication facilities of varying
  abstractions for manual implementation.
• Define Standard I/O for MVC
• Replicated types
• Mix these abstractions



                                            120
           Sharing the Model Layer
                                       Program
                                       Component/
                                       Model
                   Model
Define
standard
protocol        View      Controller   Increasing
                                       Abstraction

                       Toolkit
                                       User-
                                       Interface
                   Window              Component


                   Framebuffer


                                               121
           Sharing the Model Layer
                                 Program
                                 Component/
                                 Model
                   Model
Define
standard
protocol          View           Increasing
                                 Abstraction

                   Toolkit
                                 User-
                                 Interface
                   Window        Component


                   Framebuffer


                                         122
 Standard Model-View Protocol
                   • Can be in terms of model
                     objects or view elements.
         Displayed
Model
          element • View elements are varied
                      – Bar charts, Pie charts
                   • Model elements can be
                     defined by standard types
                   • Single-user I/O model
                      – Output: Model sends its
                        displayed elements to view and
 View                   updates to them.
                      – Input: View sends input
                        updates to displayed model
                        elements
                   • Dewan & Choudhary „90         123
                      IM Model
/*dmc Editable String, IM_History */
typedef struct { unsigned num; struct String *message_arr; } IM_History;
IM_History im_history;
String message;
Load () {
  /* create a view of the element named "IM History", whose type is
     "IM_History" and whose value is at address &im_history */
  Dm_Submit (&im_history, "IM History", "IM_History");
  Dm_Submit (&message, "Message", String);
  /* whenever "Message" is changed by the user, call updateMessage() */
  Dm_Callback ("Message", &updateMessage);
  /* show (a la map) the view of "IM History" */
  Dm_Engage ("IM History");
  Dm_Engage ("Message");
}
updateMessage (String variable, String new_message) {
    im_history.message_arr[im_history.num++] = new_message;
    Dm_Insert ("IM History", im_history.num, new_message);
}
                                                                      124
  Multiuser Model-View Protocol
                   • Multi-user I/O model
                      – Output Broadcast: Output
                        messages broadcast to all
Model                   views.
                      – Input relay: Multiple views
                        send input messages to
                        model.
                      – Input coupling: Input
                        messages can be sent to
                        other views also
                   • Dewan & Choudhary ‟91
 View       View


                                                125
                      IM Model
/*dmc Editable String, IM_History */
typedef struct { unsigned num; struct String *message_arr; } IM_History;
IM_History im_history;
String message;
Load () {
  Dm_Submit (&im_history, "IM History", "IM_History");
  Dm_Submit (&message, "Message", String);
  Dm_Callback ("Message", &updateMessage);
  Dm_Engage ("IM History");
  Dm_Engage ("Message");
}
/* may be called on behalf of any user in the session */
updateMessage (String variable, String new_message) {
    im_history.message_arr[im_history.num++] = new_message;
    /* the insert is sent to all views */
    Dm_Insert ("IM History", im_history.num, new_message);
}
                                                                      126
   Replicated Objects in Central
           Architecture
                       • Distributed view
                         replicas need to create a
Model                    local replica of the
                         displayed object.
                       • Can build
                         replication into
                         types
 View        View



                                           127
Replicating Popular Types for Central
    and Replicated Architectures

   Model                                     Model                 Model



   View              View                    View                  View

• Create replicated versions of selected popular types.
• Changes in a type instance automatically made in all of its
  replicas (in views or models)
   – No need for explicit I/O
• Can select which values in a layer replicated
• Architectures
   – replicated architecture (Greenberg and Marwood ‟92, Groove)
   – semi-centralized (Munson & Dewan ‟94, PlaceWare)                           128
     Example Replicated Types
• Popular primitive types: String, int, boolean …
  (Munson & Dewan ‟94, PlaceWare, Groove)
• Records of simple types (Munson & Dewan ‟94,
  Groove)
• Dynamic sequences (Munson & Dewan ‟94,
  Groove, PlaceWare)
• Hashtables (Greenberg & Marwood ‟92, Munson
  & Dewan ‟94, Groove)
• Combinations of these types/constructors (Munson
  & Dewan ‟94, PlaceWare, Groove)
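
A sketch, loosely patterned on the replicated types above but with invented names, of what programming against a replicated sequence could look like: there is no explicit I/O, since the type propagates each change to its remote replicas.

// Hypothetical replicated-sequence type and a model that uses it.
interface ReplicatedSequence<E> {
    void insertElementAt(E element, int index);   // change made in all replicas
    void removeElementAt(int index);
    E elementAt(int index);
    int size();
}

class ChatHistory {
    private final ReplicatedSequence<String> messages;
    ChatHistory(ReplicatedSequence<String> messages) { this.messages = messages; }

    void send(String message) {
        // looks like a local call; replication is the type's job
        messages.insertElementAt(message, messages.size());
    }
}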
                                                129
         Kinds of Distributed Objects
                   • By reference (Java and .NET)
                       – reference sent to remote site
                       – remote method invocations result in
site 1   site 2          calls at the local site
                   • By value (Java and .NET)
                       – deep copy of object sent
                       – remote method invocations result in calls
                         at remote site
site 1   site 2        – copies diverge
                   • Replicated objects
                       – deep copy of object sent
                       – remote method invocations result in local
                         and remote calls
site 1   site 2        – either locks or merging used to detect/fix
                         conflicts
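
The first two kinds can be sketched in Java terms: a Remote parameter is passed by reference (invocations go back to its home site), while a Serializable parameter is passed by value (a deep copy that can diverge); replicated objects have no standard Java counterpart and are only named here. The types below are illustrative.

import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

interface Whiteboard extends Remote {              // passed by reference
    void addShape(ShapeDescription shape) throws RemoteException;
}

class ShapeDescription implements Serializable {   // passed by value (copies can diverge)
    final int x, y, width, height;
    ShapeDescription(int x, int y, int width, int height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }
}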

                                                                    130
        Alternative model sharing
               approaches
1.   Stream-based communication
2.   Regular RPC
3.   Multicast RPC
4.   Replicated Objects (/Generic Model View
     Protocol)



                                           131
            Replicated Objects vs.
           Communication Facilities
• Higher abstraction
    – No notion of other sites
    – Just make change
• Cannot use existing types directly
    – E.g. in Munson & Dewan ‟94, ReplicatedSequence
• Architecture flexibility
    – PlaceWare bound to central architecture
    – Replicas in client and server of different types, e.g. VectorClient &
      VectorServer
• Abstraction flexibility
    – Set of types whose replication supported by infrastructure automatically
    – Programmer-defined types not automatically supported
• Sharing flexibility
    – Who and when coupled burnt into shared value
• Use for new apps
                                                                              132
        Replicated Objects vs.
       Communication Facilities
• PlaceWare has much richer set than WebEx
  – Ability to include Polling as a slide in a
    PowerPoint presentation
  – Seating arrangement
• Not as useful for converting existing apps.
  – Need to convert standard types to replicated
    types
  – Repartitioning to separate shared and unshared
    models
                                                 133
        Stream based vs. Others
• Lowest-level
   – Serialize and deserialize objects
   – Multiplex and demultiplex operation invocations into
     and from stream
• Stream-based communication (wire protocol) is
  language independent
• No need to learn non standard syntax and
  compilers
• May be the right abstraction for converting
  existing apps into collaborative ones.

                                                            134
      Case Study: Collaborative Video
      Viewing (Cadiz, Balachandran et al. 2000)
• Replicated architecture
  created using T 120
  multicast layer.
• Exchanged command
  names
• Implementer said it
  was easy to learn and
  use.


                                                  135
              RPC vs. Others
• Intermediate ease of learning, ease of usage,
  flexibility
• Use when:
  – Overhead of channel usage < overhead of RPC
    learning
  – Appropriate replicated types
     • Not available, or
        – Who and when coupled, architecture burnt into replicated
          type
     • learning overhead > RPC usage overhead
                                                                136
             M-RPC vs. RPC
• Higher-level abstraction
• Do not have to know exact site roster
  – Others, all, current
• Can be automatically mapped to stream-
  based multicast
• Use M-RPC when possible


                                           137
        Combining Approaches
•   System combining benefits of multiple
    abstractions?
    –   Flexibility of lower-level and automation of
        higher-level
•   Co-existence
•   Migratory path
•   New abstractions

                                                       138
               Coexistence
Support all of these abstractions in one system
• RPC and shared objects (Dewan &
   Choudhary ‟91, Greenberg & Marwood
   ‟92, Munson & Dewan ‟94, and
   PlaceWare)



                                             139
                      Migratory Path
Problem of simple co-existence
•   Low-level abstraction effort not reused.
    –      E.g. RPC used to build a file directory
•       Allow the use of low-level abstraction to create higher-
        level abstraction
•       Framework allowing RPC to be used to create new
        shared objects (Munson & Dewan ‟94, PlaceWare).
    –      E.g. shared hash table
•       Can be difficult to use and learn
•       Low-level abstraction still needed when controlling who
        and when coupled
                                                               140
       New abstractions: Broadcast
                Methods
Stefik et al ‟85: Mixes shared objects and RPC
•    Declare one or more methods of arbitrary class as broadcast
•    Method invoked on all corresponding instances in other processes in session
•    Arbitrary abstraction flexibility

      public class Outline {
            String getTitle();
            broadcast void setTitle(String title);
            Section getSection(int i);
            int getSectionCount();
            broadcast void setSection(int i, Section s);
            broadcast void insertSection(int i, Section s);
            broadcast void removeSection(int i);
      }
                                                                    141
              Broadcast Methods Usage
               Associates/               Associates/
               Replicas                  Replicas

                             Broadcast
Association     Model                      Model
                             method
               bm      lm                       lm

                View                        View

               lm      lm                       lm

                Window                    Window



                User 1                      User 2
                                                       142
    Problems with Broadcast Methods
•    Language support needed
      – C#?
•    Single multicast group
      – Cannot do subset of participants
•    Selecting broadcast methods required much care
      – Sharing at method rather than data level

      public class Outline {
            String getTitle();
            broadcast void setTitle(String title);
            Section getSection(int i);
            int getSectionCount();
            broadcast void setSection(int i, Section s);
            broadcast void insertSection(int i, Section s);
            broadcast void removeSection(int i);
            broadcast void insertAbstract (Section s) {
                  insertSection (0, s);
            }
      }



    Broadcast method should not call another broadcast method!
                                                                                  143
 Method vs. State based Sharing
• Method-based sharing for indirectly sharing state.
• Programmer provides mapping between state and
  methods that change it.
• With a mapping known to the infrastructure,
  replicated types can be implemented automatically.
• Mapping of internal state and methods is not
  sufficient because of host-dependent data
  (especially in UI abstractions)
• Need mapping of external (logical) state.

                                                   144
           Property-based Sharing
• Roussev & Dewan ‟00
• Synchronize external state or properties
• Properties deduced automatically from programming patterns
    – Getter and setter for record fields
    – Hashtables and sequences
• System keeps properties consistent
    – Parameterized coupling model
• Patterns can be programmer-defined

      public class Outline {
            String getTitle();
            void setTitle(String title);
            Section getSection(int i);
            int getSectionCount();
            void setSection(int i, Section s);
            void insertSection(int i, Section s);
            void removeSection(int i);
            void insertAbstract (Section s) {
                  insertSection(0, s);
            }
      }
                                                                                   145
Programmer-defined conventions
getter = <PropType> get<PropName>()
setter = void set<PropName>(<PropType>)




insert = void insert<PropName> (int, <ElemType>)
remove = void remove<PropName> (int)
lookup = <ElemType> elementAt<PropName>(int)
set = void set<PropName> (int, <ElemType>)
count = int get<PropName>Count()
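
A hypothetical class written to these conventions: its "Title" property matches the getter/setter patterns and its "Paragraph" property matches the sequence patterns, so a pattern-based coupler could deduce and synchronize both.

import java.util.ArrayList;
import java.util.List;

public class Report {
    private String title;
    private final List<String> paragraphs = new ArrayList<>();

    public String getTitle() { return title; }                               // getter
    public void setTitle(String title) { this.title = title; }               // setter

    public void insertParagraph(int i, String p) { paragraphs.add(i, p); }   // insert
    public void removeParagraph(int i) { paragraphs.remove(i); }             // remove
    public String elementAtParagraph(int i) { return paragraphs.get(i); }    // lookup
    public void setParagraph(int i, String p) { paragraphs.set(i, p); }      // set
    public int getParagraphCount() { return paragraphs.size(); }             // count
}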



                                                   146
   Multi-Layer Sharing with Shared
               Objects
Story so far:                  • But objects occur at
• Need separate sharing          each layer
   implementation for each
   layer                          – Framebuffer
   – Framebuffer: VNC             – Window
   – Window: T. 120               – TextArea
   – Toolkit: GroupKit
• Problem with data layer      • Why not use shared
  since no standard protocol     object abstraction for
• Create shared objects for      any of these layers?
  this layer

                                                          147
    Sharing Various Layers

              Parameterized
Model                         Model
                 Coupler


View                          View

Toolkit                       Toolkit


Window                        Window


Framebuffer                   Framebuffer


                                            148
    Sharing Various Layers

Model                         Model


View                          View

              Parameterized
Toolkit          Coupler
                              Toolkit


Window                        Window


Framebuffer                   Framebuffer


                                            149
    Sharing Various Layers

Model                         Model


View                          View

Toolkit       Parameterized
                 Coupler
                              Toolkit


Window                        Window


Framebuffer                   Framebuffer


                                            150
   Experience with Property Based
              Sharing
• Used for
  – Model
  – AWT/Swing Toolkit
  – Existing Graphics Editor
• Requires well-written code
  – Existing code may not be


                                    151
             Multi-layer Sharing

• Two ways to implement colab. application
   – Distribute I/O
      • Input in Replicated
      • Output in Centralized
      • Different implementations (XTV, NetMeeting) distributed
        different I/O
   – Define replicated objects
      • A single implementation used for multiple layers
• Single implementation in Distribute I/O approach?

                                                                  152
Translator-based Multi-Layer Support
         for I/O Distribution
• Chung & Dewan „01
• Abstract Inter-Layer Communication Protocol
   – input (object)
   – output(object)
   – …
• Translator between specific and abstract protocol
• Adaptive Distributor supporting arbitrary, external mappings
  between program and UI components
• Bridges gap between
   – window sharing (e.g. T 120 app sharing) and higher-level sharing (e.g.
     T 120 whiteboard sharing)
• Supports both centralized and replicated architectures and
  dynamic transitions between them.
                                                                   153
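
A sketch of the abstract inter-layer communication protocol above and of one translator; only input(object) and output(object) come from the slide, everything else (the names and the imagined toolkit translation) is illustrative.

// Abstract protocol consumed by the adaptive distributor.
interface AbstractLayer {
    void input(Object interactionEvent);    // event flowing up toward the program
    void output(Object presentationValue);  // value flowing down toward the user
}

// A translator maps one concrete layer's protocol (here an imagined toolkit
// text event) to the abstract protocol and back.
class ToolkitTextTranslator implements AbstractLayer {
    private final AbstractLayer distributor;
    ToolkitTextTranslator(AbstractLayer distributor) { this.distributor = distributor; }

    public void input(Object toolkitEvent) {
        // convert the toolkit-specific event into a layer-neutral object
        distributor.input(toolkitEvent.toString());
    }
    public void output(Object value) {
        // convert the layer-neutral value back into toolkit calls (elided)
    }
}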
       I/O Distrib: Multi-Layer Support
       Layer 0                Layer 0                Layer 0


      Layer S                Layer S                Layer S



     Layer N-1              Layer N-1              Layer N-1

     Translator             Translator             Translator
Adaptive Distributor   Adaptive Distributor   Adaptive Distributor


     Layer N                Layer N                Layer N


                PC


                                                                     154
      I/O Distrib: Multi-Layer Support
      Layer 0                Layer 0                 Layer 0


     Layer S                Layer S                 Layer S

    Translator             Translator             Translator
Adaptive Distributor   Adaptive Distributor   Adaptive Distributor


     Layer S+1              Layer S+1              Layer S+1



     Layer N                Layer N                Layer N


               PC


                                                                155
    Experience with Translators
• VNC              • Requires translator
• X                  code, which can be
• Java Swing         non trivial
• User Interface
  Generator
• Web Services



                                           156
                 Infrastructure vs. Meta-
                      Infrastructure
[Diagram: applications (Text Editor, Outline Editor, Pattern Editor,
 Checkers, and other applications) built on infrastructures such as X,
 JavaBeans, Java‟s Swing, and VNC, with the Property/Translator-based
 Distributor/Coupler spanning all of them as a meta-infrastructure]
        Infrastructure                                          Meta-Infrastructure
                                                                                                     157
The End of Comp 290-063
        Material
 (Remaining Slides FYI)



                          158
            Using Legacy Code
• Issue: how to add collaboration awareness to
  single-user layer
   –   Model
   –   Toolkit
   –   Window System
   –   …
• Goal
   – Want as little coupling as possible between existing
     and new code
                                                             159
Adding Collaboration Awareness to Layer
  Colab. Transp.        Colab. Transp.       Colab. Aware
  Colab. Aware
                                  JCE                 Sync
             Suite
    Ad-Hoc              Colab. Aware         Colab. Transp.

                        Extend Colab-       Extend Colab.
                        Transp. Class       Aware Class



   Colab. Transp.       Colab. Aware
                                  Roussev
       Colab. Aware Delegate      ‟00

                                                      160
        Proxy Delegate



  X Client            Calling Object



Pseudo Server   XTV   Adapter Object COLA



  X Server            Called Object


                                       161
                  Identifying Replicas
•   Manual connection:
     – Translators identify peers (Chung and Dewan ‟01)
•   Automatic:
     – Central downloading:
          • Central copy linked to downloaded objects (PlaceWare, Suite, Sync)
     – Identical programs: Stefik et al ‟85
          • Assume each site runs the same program and instantiates objects in the same order
          • Connect corresponding instances (at same virtual address) automatically.
     – Identical instantiation order intercepted
          • Connect Nth instantiated object intercepted by system
          • E.g. Nth instantiated windows correspond
     – External descriptions (Groove)
          • Assume an external description describing models and corresponding views
          • System instantiates models and automatically connects remote replicas of them.
          • Gives programmers events to connect models to local objects (views, controllers).
     –   No dynamic control over shared objects.
•   Semi-manual (Roussev and Dewan ‟00)
     – Replicas with same GID‟s automatically connected.
     – Programmer assigns GIDs to top level objects, system to contained objects
                                                                                                 162
Connecting Replicas vs. Layers
            • Object correspondence
              established after containing
              layer correspondence.
            • Only some objects may be
              linked
            • Layer correspondence
              established by session
              management
            • E.g. Connecting whiteboards
              vs. shapes in NetMeeting
                                       163
Basic Session Management
        Operations
  Create/ Delete       Add/Delete
  (Conference 1)       (App3)



      Conference 1
                                    List/Query/
                                    Set/ Notify
      App1    App2      App3
                                    Properties

        User 1       User 2



        Join/Leave (User 2)
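
The operations in this figure might be collected into an interface along the following lines; the signatures are invented, not those of any particular system.

import java.util.List;

interface SessionManager {
    String createConference(String name);
    void deleteConference(String conferenceId);

    void addApplication(String conferenceId, String applicationId);
    void deleteApplication(String conferenceId, String applicationId);

    void join(String conferenceId, String userId);
    void leave(String conferenceId, String userId);

    List<String> listConferences();
    Object queryProperty(String conferenceId, String property);
    void setProperty(String conferenceId, String property, Object value);
    void subscribe(String conferenceId, SessionListener listener);   // notify
}

interface SessionListener {
    void propertyChanged(String conferenceId, String property, Object newValue);
}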
                                            164
                              Basic Firewall

[Diagram: protected site ↔ unprotected proxy ↔ communicating site
 (open / send / call / reply)]
                            • Limit network
                              communication to and from
                              protected sites
                            • Do not allow other sites to
                              initiate connections to
                              protected sites.
                            • Protected sites initiate
                              connection through proxies
                              that can be closed if
                              problems
                            • can get back results
                               – Bidirectional writes
                               – Call/reply
                                                                 165
                Protocol-based Firewall

[Diagram: protected site ↔ unprotected proxy ↔ communicating site, with
 opens, calls, and replies carried over specific protocols such as HTTP and SIP]
                            • May be restricted to
                              certain protocols
                               – HTTP
                               – SIP
                                                               166
         Firewalls and Service Access

[Diagram: protected user → unprotected proxy → unprotected service; the call
 goes out as http-rpc/rpc and the reply returns to the protected user]
                            • User/client at protected site.
                            • Service at unprotected site.
                            • Communication and dataflow
                              initiated by protected client site
                               – Can result in transfer of data
                                 to client and/or server
                            • If no restriction on protocol,
                              use regular RPC
                            • If only HTTP provided,
                              make RPC over HTTP (see the sketch below)
                               – Web services/Soap model
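
A sketch of making an RPC over HTTP from the protected client: the call is POSTed to the unprotected service and the reply returns on the same client-initiated connection. The endpoint URL and the text encoding of the call are invented.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class HttpRpcClient {
    byte[] call(String operation, String argument) throws Exception {
        URL endpoint = new URL("https://service.example.com/rpc");   // hypothetical
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);                 // connection is initiated by the client
        try (OutputStream out = conn.getOutputStream()) {
            out.write((operation + "\n" + argument).getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream()) {
            return in.readAllBytes();           // the RPC reply
        }
    }
}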
                                                                         167
          Firewalls and Collaboration
[Diagram: one protected user attempts to open a connection to another
 protected user]
                       • Communicating sites
                         may all be protected.
                       • How do we allow
                         opens to protected
                         user?
                                             168
              Firewalls and collaboration
[Diagram: each protected user opens a connection to an unprotected forwarder,
 which relays sends between them; the connections are closed when the session ends]
                                    • Session-based forwarder
                                    • Protected site opens connection
                                      to forwarder site outside firewall
                                      for session duration
                                    • Communicating site also opens
                                      connection to forwarder site.
                                    • Forwarder site relays messages to
                                      protected site
                                    • Works well if unrestricted access
                                      allowed and used
                                    • What if restricted protocol?
                                                                     169
          Restricted Protocol
• If only restricted protocol then
  communication on top of it as in service
  solution
• Adds overhead.




                                             170
    Restricted protocols and data to
             protected site
• HTTP does not allow data flow to be initiated by
  unprotected site
• Polling
   – Semi-synchronous collaboration
• Blocked gets (PlaceWare)
   – Blocked server calls in general in one-way call model
   – Must refresh after timeouts
• SIP for MVC model
   – Model sends small notifications via SIP
   – Client makes call to get larger data
   – RPC over SIP?
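
A sketch of the blocked-get pattern: the protected client issues a request the server holds open until a notification is ready or a timeout passes, then immediately refreshes it. The names are invented; the blocking call itself would be an HTTP request like the one sketched earlier.

import java.util.function.Supplier;

class NotificationPoller implements Runnable {
    private final Supplier<byte[]> blockedGet;   // issues one blocking get over HTTP
    private volatile boolean running = true;

    NotificationPoller(Supplier<byte[]> blockedGet) { this.blockedGet = blockedGet; }

    public void run() {
        while (running) {
            // the server holds this call until data is ready or it times out
            byte[] update = blockedGet.get();
            if (update != null && update.length > 0) {
                deliver(update);
            }
            // then the client immediately refreshes the blocked call
        }
    }

    void stop() { running = false; }

    private void deliver(byte[] update) { /* hand the update to the local model */ }
}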

                                                             171
           Firewall-unaware clients
• Would like to isolate specific apps from worrying about protocol choice and
  translation.
• PlaceWare provides RPC
    – Can go over HTTP or not
• Groove apps do not communicate directly – just use shared objects and
  don‟t define new ones
    – Can go either way
• Groove and PlaceWare try unrestricted first and then HTTP
• UNC system provides standard property-based notifications to the
  programmer and allows them to be delivered as:
    –   RMI
    –   Web service
    –   SIP
    –   Blocked gets
    –   Protected site polling
                                                                     172
          Forwarder & Latency
                        • Adds latency
 protected user            – Can have multiple forwarders bound to
                             different areas (Webex)
                           – Adaptive based on firewall detection
                             (Groove)
                               •  try to open directly first
                               •  if fails because of firewall, opens
                                 system provided forwarder
unprotected forwarder          • asymmetric communication possible
                                    – Messages to user go through forwarder
                                    – Messages from user go directly
                           – Groove is also a service based model!
                           – PlaceWare always has latency and it
protected user               shows



                                                                        173
Forwarder & Congestion Control

 protected user          • Breaks congestion control
                           algorithms
                             – Congestion on the connection
                               between protected site and
                               forwarder, controlled by the
unprotected forwarder          algorithms, may be different
                               from end-to-end congestion
                             – T 120 like end to end
protected user                 congestion control relevant

          Different congestions                          174
        Forwarder + Multi-caster
                              •   Forwarder can multicast to other users
                                  on behalf of sending user
 protected user               •   Separation of application processing
                                  and distribution
                                   –   Supported by PlaceWare, Webex
                              •   Reduces messages in link to forwarder
                              •   Separate multicaster useful even if no
                                  firewalls
                              •   Forwarder can be much more powerful
unprotected forwarder             machine.
                                   –   T 120 provides multi-caster without
    + multicaster                      firewall solution
                              •   Forwarder can be connected to higher
                                  speed networks
                                   –   In groove, if (possibly unprotected) user
                                       connected via slow network, single
                                       message sent to forwarder, which is
                                       then multicast
protected         protected   •   May need hierarchy of multicasters (T
user              user            120), especially dumb-bell

                                                                              175
            Forwarder + State Loader
                               •   Forwarder can also maintain state in
                                   terms of object attributes
  protected user               •   Slow and latecomer sites pull state
                                   asynchronously from state loader
                                    – Avoid message from forwarder to
                                      protected site containing state
            read



                                    – Alternative to multicast
                                    – Extra message to forwarder for pulling
                                      adds latency and traffic
 unprotected forwarder              – Each site pulls at its consumption rate
     + state loader            •   Works for MVC like I/O models
                                    – VNC: framebuffer rectangles
                                    – PlaceWare: PPT slides
                                    – Chung & Dewan ‟98: Log of arbitrary
                                      input/output events converted to object
                                      state
protected          protected   •   Useful even if no firewalls
user               user        •   Goes against stateless server idea
                                    – State should be an optimization
                                                                     176
       Forwarder + multicaster + state
                  loader
                                • Multicaster for rapidly
                                  changing information
   protected user
                                • State loader for slower
                                  changing information
                                • Solution adopted in
                                  PlaceWare
  unprotected forwarder            – Multicast for window sharing
   + multicaster + state           – State loading for PPT slides
          loader                • VNC results show pull model
                                  works for window sharing
                                • Greenberg ‟02 shows pull
                                  model works for video
protected           protected
user                user                                    177
            Interoperability
• Cannot make assumptions about remote
  sites
• Important in collaboration because one non-
  conforming site can prevent adoption of
  collaboration technology
• Devise “standard” protocols for various
  collaboration aspects to which specific
  protocols can be translated

                                           178
Examples of Collaboration Aspects

• Codecs in media (SIP)
• Window/Frame-based sharing
  –   Caching capability for bitmaps, colormaps.
  –   Graphics operations supported
  –   Bits per pixel
  –   Virtual desktop size


                                                   179
   Layer and Standard Protocols
• Easier to agree on lower level layer
• Every computer has a framebuffer with similar
  properties.
• Windows are less standard
   – WinCE and Windows not same
• Toolkits even less so
   – Java Swing and AWT
• Data in different languages and types
   – Interoperation very difficult

                                                  180
              Data Standard
• Web Services
  – Everyone converts to it
• Object properties based on patterns
  translated to Web services?
• XAF



                                        181
          Multiple Standards
• More than one standard can exist
  – With different functionality/performance
• How to negotiate?
• Same techniques can be used to negotiate
  user policies
  – E.g. which form of concurrency control or
    coupling

                                                182
Enumeration/Selection Approach
• One party proposes a series of protocols
  – m=audio 4006 RTP/AVP 0 4
  – a=rtpmap:0 PCMU/8000
  – a=rtpmap:4 GSM/8000
• Other party picks one of them
  – m=audio 4006 RTP/AVP 0 4
  – a=rtpmap:4 GSM/8000

                                             183
   Extending to multiple parties
• One party proposes a series of protocols
• Other responds with subsets supported
• Proposing party picks some value in
  intersection.
• Multiple rounds of negotiation



                                             184
              Single-Round
• Assume
  – Alternative protocols can be ordered into levels,
    where support for protocol at level l indicates
    support for all protocols at levels less than l
• Broadcast level and pick min of values
  received
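
A minimal sketch of this single-round scheme: each site broadcasts the highest level it supports and then applies the same rule to what it received.

import java.util.Collection;

class SingleRoundNegotiation {
    // Every site runs this locally on its own level and the broadcast levels
    // it received, so all sites pick the same (lowest) level.
    static int agreedLevel(int ownLevel, Collection<Integer> receivedLevels) {
        int min = ownLevel;
        for (int level : receivedLevels) {
            min = Math.min(min, level);
        }
        return min;   // supported by everyone, by the level-ordering assumption
    }
}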


                                                   185
         Capability Negotiation
• Protocol not named but function of capabilities
   – Set of drawing operations supported.
• Increasing levels can represent increasing
  capability sets.
   – Sets of drawing operations
• Increasing levels can represent increasing
  capability values
   – Max virtual desktop size

                                                    186
        Uniform Local Algorithm
• Apply same local algorithm at all sites to choose
  level and hence associated “collapsed” capability
  set
   – Min
      • Of capability set implies an AND
      • Bits per pixel, drawing operation sets
   – Max
      • Of boolean values implies an OR
      • Virtual desktop size
   – Something else based on # and identity of sites
     supporting each level

                                                       187
             UI Policy Negotiation
• Can use same mechanism for UI policy negotiation
• Examples
   – Unconditional grant floor control: In T 120, each node can say
       •   Yes
       •   No
       •   Ask Floor Controller
        •   No < Ask Floor Controller < Yes
       •   Use min of this for least permissive control
   – Sharing control: many systems each node can say:
       • Share scrollbar
       • Not share < share
       • Use min for least permissive sharing

                                                                      188
               Office Apps
• Multiple versions of office apps exist
• Use similar scheme for negotiating
  capabilities of office apps in conference
  – pdf capability < viewer < full app
  – Office 10 < Office 11 < Office 12




                                              189
Conversion to Standard Protocols
• May need to convert richer protocol to lesser
  “lowest common denominator” protocol with
  lesser capabilities
• Also may not wish to lose lowest common
  denominator protocol and do per site conversion
• Drawing operation to bitmap in T 120
• Fine-grained locks to floor control in Dewan and
  Sharma „91

                                                     190
                   Composability
• Collaboration infrastructure must perform several tasks
   – Session management
   – Set up (centralized/replicated/hybrid) architecture
   – I/O distribution
       • filtering
       • Multipoint communication
        • Latecomer and slow user accommodation
   – Access and concurrency control
   – Firewall traversal
• Multiple ways to perform each of these functions
• Implement separate functions in different composable
  modules
• Difficult because they must work with each other
                                                            191
 T 120 Composable Architecture
• Multicast layer
   – Multicast + tokens
• Session Management
   – Session operations + capability negotiation
• Application template
   – Standard structure of client of multicast + session management
• Window-based sharing
   – Centralized architecture for window sharing
   – Uses session management + multicast
• Whiteboard
   – Replicated or centralized whiteboard sharing
   – Uses session management + multicast

                                                                      192
                                                     T 120 Layers
                                 User Application(s)
          (Using Both Standard and Non-Standard Application Protocols)

  User Application(s)                  Node                  User Application(s)
  (Using Std. Appl. Protocols)         Controller            (Using Non-Std Protocols)

  Rec. T.127 (MBFT), Rec. T.126 (SI), ...                    Non-Standard Application
  Application Protocol Entities                              Protocol Entity
  (T.120 Application Protocol Recommendations)

                      Generic Conference Control (GCC)
                                 Rec. T.124

                   Multipoint Communication Service (MCS)
                             Rec. T.122/T.125

                     Network Specific Transport Protocols
                                 Rec. T.123

          (T.120 Infrastructure Recommendations: GCC, MCS, and transport)
                                                                                  193
    Composability Advantages
• Can use needed components
  – Just the multicast channel
• Can substitute layers
  – Different multicast implementation
• Orthogonality
  – Level of sharing not bound to multicast
  – Architecture not bound to multicast

                                              194
  Composability Disadvantages
• May have to do much more work.
• T 120 component model
  – Create application protocol entity and relate to
    actual application
  – Create/ join multicast channel
• Suite & PlaceWare monolithic model
  – Instantiating an application automatically
    performs above tasks

                                                   195
       Combining Advantages
• Provide high-level abstractions representing
  popular ways of interacting with sub sets of
  these components
• e.g. Implementing APE for Java applets




                                            196
 Improving T120 Componentization

• Add object abstraction on top of application
  protocol entity
• Web Service
• Object with properties?




                                            197
 Improving T120 Componentization

• Separate input and output sharing
• Some nodes will be input only
  – E.g PDAs sharing projected presentation




                                              198
     Using Mobile Computer for Input

                                              Program




         UI                UI                   UI




                                                        199
Use mobile computers for input (e.g. polls)
    Generic Conference Abstraction
•   Conference (T 120, PlaceWare)
•   Room (MUDs, Jupiter)
•   Space (Groove)
•   Session (SIP)
     – Different from application session
• May be persistent and asynchronous
     – Space, Room
                                            200
Basic Session Management
        Operations
  Create/ Delete       Add/Delete
  (Conference 1)       (App3)



      Conference 1
                                    List/Query/
                                    Set/ Notify
      App1    App2      App3
                                    Properties

        User 1       User 2



        Join/Leave (User 2)
                                            201
    Advanced Session Management
•   Join/Leave subset of (possibly queried) apps (T 120)
•   Eject user (T 120)
•   Transfer users from one conference to another (T 120)
•   Timed conference
     – Set conference duration (T 120, PlaceWare)
     – Query duration left (T 120)
     – Extend duration ( T 120)
•   Schedule conference and modify schedule (PlaceWare)
•   Keep interaction log, and query (PlaceWare)
•   Terminate when no active users (PlaceWare, T 120)
•   In persistent conferences, in-core version automatically created
     – When first user joins (PlaceWare)
     – When conference manager launched (Groove)


                                                                       202
     Centralized Session Management

   Program
                                  • Add app
                                     – loads and starts program at:
                                        • invoker‟s site
Output
Broadcaster & I/O
                                            – XTV, Suite
Relayer                                 • or some other site
                                            – T 120
                    I/O Relayer
                                     – joins all existing users
                                  • Join conference
      UI                UI           – loads, starts, and binds local
                                       UI to central program

   User 1              User 2
                                                                        203
     Replicated Session Management
                            • Add app
 Program       Program
                              – loads and starts program
                                replica at invoker‟s site
                                    – XTV, Groove
Input         Input           – joins all existing users
broadcaster   broadcaster
                            • Join conference
                              – loads, starts, and binds
    UI             UI
                                replicas

 User 1           User 2
                                                            204
  Architecture Flexibility of Session
            Management
• Architecture specific
   – Groove, PlaceWare, …
• Architecture semi-dependent
   – T 120
       • Single APE abstraction “started” when user connects
       • APE abstraction bound to architecture
• Architecture independent
   – Chung and Dewan, 01
       • Single “loggable” abstraction connected to central or replicated logger
       • Loggable not bound to architecture
       • Join operation specifies architecture


                                                                             205
    Architecture-independent Session
              Management




(a) creating a new session       (b) joining an existing session

          Chung & Dewan ‟01: one app session per
                      conference                                   206
  Application-Session Management
            Coordination
• Session Management must know about attachment points.
• In centralized architecture:
   –   POD and Applet – PlaceWare
   –   X app and X server – XTV
    –   Generic APE – T 120
   –   Java “loggable” objects - Chung & Dewan -98
• In replicated architecture: program replicas and UIs
   – model and views (Groove & Sync)
   – APE ( T 120)
   – Java “loggable” objects - Chung & Dewan -01
• These are registered with session management.

                                                         207
  Explicit & Implicit Join/Leave
• Explicit
  – Create, join, and leave operations explicitly
    executed
• Implicit
  – Automatic or side effect of other operations




                                                    208
                   Implicit Join/Leave
Session Joining/Leaving Side Effect of:
•   Artifacts being edited
     – Editing same object joins them in conference
     – Dewan & Choudhary ‟91, Edwards
     – Important MSFT Office 12 scenario
•   Intersection of auras in virtual environment
     – Benford and DIVE
     – Applications and users have auras
     – Join conference result of user‟s aura intersecting application aura
•   Conference has single application session
     – Office 12 fixes thus
•   Not general.
•   No control – with options, semi-implicit



                                                                             209
                   Explicit Join/Leave

          Autonomous Joining         Invitation-based Joining

        Conference 1                 Conference 1

        App1   App2       App3       App1    App2      App3


          User 1       User 2          User 1       User 2

T 120                            T 120 Accept        Invite
                       Join
                                 SIP
               User 2            Groove              User 2
                                                         210
Autonomous vs. Invitation-based Join
 • Less message traffic and      • Implicit notification
   per user overhead             • Low overhead to create
    – No invitations sent          small conference.
 • Needs discovery (e.g.         • Raises mobility issues
   notifications), name             – User may have multiple
   resolution, and separate           devices
   access control mechanism         – Can register device (SIP,
 • Overhead amortized in              Groove)
   recurring conferences            – Privacy issues
 • Suitable for large, planned   • Raises firewall issues
   conferences                      – invitee must accept
                                      connections


                                                                  211
                Examples
• Invitation-based
  – NetMeeting, Messenger
• Autonomous
  – PlaceWare, Webex
• Both
  – T 120
  – Integrate messenger and PlaceWare?

                                         212
        Open vs. Closed Session
             Management
• Closed Session Management
  – Policies bound (PlaceWare)
     • Name vs. Invitation
     • Implicit vs. explicit
     • UI
• Open Session Management
  – Multiple policies can be implemented ( T. 120, SIP,
    Roseman & Greenberg ‟92) using an API
  – Defaults may be provided (Roseman & Greenberg „92)

                                                     213
  API for 2-Party Invites
 User      invite A    User • SIP Model
Agent A    accept     Agent B • N-party?
                              • Name-
            bye                 based?




                                     214
   2-Party, Autonomous
 User                   User
Agent A                Agent B




          Conference
            Agent

                       •   GroupKit
                       •   +create X OK?
                       •   +create X OK
                       •   Delete like Create C
                       •   Bye terminates


                                             215
  N-Party, Autonomous
 User                                    User
Agent A                                 Agent B




               Conference
                 Agent
                                           •   GroupKit, T 120
                                           •   Leave like Join
                          Joined X, C           – Event may not be
          Join X, C



                                                   broadcast as leaver
                                                   can do so (T 120)
                                           •   + LastUserLeft event

                       User
                      Agent C
                                                              216
N-Party, Autonomous and Invitation-
              based
   User                              User
  Agent A                           Agent B




                  Conference
                    Agent
            Invite X, C

                           Accept
                            X, C



                           User
                          Agent C
                                              217
    Example GroupKit Policies
• Open registration
  – Anyone can invite
  – Conference persists after last user
• Centrally facilitated
  – Only convener can invite
• Room-based session management
  – Anyone can join name (room)

                                          218
         Performance Problems
• Operations are heavyweight
   – Require OK? and success events sent to each user
   – joining expensive in T 120
• Could use publish/subscribe
   – Build n-party, name-based on top of SIP
     publish/subscribe and invite/accept/delete model?
   – Mobility supported
   – Need extra (conference) argument to invite


                                                         219
Improving Programming
 User                              User
Agent A                           Agent B




                Conference
                  Agent

                                     • Shared data type
          Invite X, C

                         Accept

                                     • Success events
                          X, C

                                       generated on
                                       update to it
                         User           – Joined X, C
                        Agent C      • GroupKit
                                                  220
    Session-Aware Applications
• Applications may want session events
   – To display information
   – To create (centralized or replicated) application session
     possibly involving multicast channels
   – To exchange capabilities (interoperation)
• Each app on a site can subscribe directly from
  conference agent (GroupKit)
   – Multiple events sent to a node
• Each app subscribes from user agent (T 120)
   – IPC latency
   – User agent implements conference agent interface
                                                             221
  Improving Session Access Control
• Create, delete, leave, join, protected through events
• Could also protect add/delete application
   – Add/delete app Ok?, OK and Success
• Protect discovery of conferences
   – Listed attribute in T 120
• Protect query of conference information
   – PlaceWare
• “Lock”/“Unlock” conference (T 120)
   – Allow/disallow more joins
   – Set user limit (PlaceWare)
• Protect how late users can join (PlaceWare)
                                                          222
      Improving Access Control
• Can support ACLs and passwords
   – Password protected attribute and extra join parameter (T 120,
     PlaceWare)
   – ACL parameter (PlaceWare)
   – More efficient but earlier binding than interactive OK? Events.
• Regular, Interactive, and Optimistic access control
   – Tech fest demo
• Can protect groups of conferences together
   – As files in a directory
   – PlaceWare place is group of conferences similarly protected
• Can specify groups of users
   – PlaceWare

                                                                       223
       Session vs. Application Access
                   Control
• Controls session                • Control interaction with
  operations                        applications.
   –   Create, Delete conf.          – Presenter vs. audience
   –   Join, Leave user                privileges. (PlaceWare &
   –   Add, Remove App                 Webex)
   –   Query…                        – Telepointer editable only by
                                       creator (T 120, PlaceWare,
• Indirectly provides coarse-          GroupKit, NetMeeting
  grained application access           Whiteboard, Webex)
   – If cannot join, cannot use   • Access denied for
     applications                   authorization rather than
• May want to prevent joins         performance reasons.
  for performance rather
  than security reasons
                                                                224
  Shared Layer & Application Access
               Control
• Higher level sharing implies finer granularity access
    – screen sharing protected operations
        •   provide input
    – window sharing
        • Display window
        • Input in window
        • Add to NetMeeting to support digital rights?
    – PPT sharing
        •   change shared slide vs. change private slide (Webex)
• In many cases screen sharing enough
    – PlaceWare PPT sharing: audience vs. presenter equivalent to providing
      input control
• App-specific controls may be needed


                                                                              225
Operation-specific access control
• Allow each operation to determine who can execute it.
   – Dewan and Choudhary ‟91 and Groove
       • Operation can query environment for user
   – PlaceWare
       •   Operation is remote procedure call
        •   Caller identity automatically added as an extra argument
       •   Integrated with RPC proxy generation
       •   Add such a facility to Indigo?
• Dewan and Shen '92
   – Can build app-specific access control without access awareness
   – Extends the notion of generic “file rights” to generic “tree-based
     collaboration rights”
   – Assumes system intercepts operation before it is executed
   – Would apply to XAF-like tree model
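A hedged Java sketch of the PlaceWare-style idea above: the RPC layer adds the authenticated caller's identity as an extra argument, so each operation can decide for itself who may execute it. All names are illustrative.

// Illustrative only: an RPC dispatcher that prepends the caller identity.
interface Operation {
    Object invoke(String callerId, Object[] args);
}

class AccessAwareDispatcher {
    Object dispatch(Operation op, String authenticatedCaller, Object[] args) {
        // The operation itself, not the dispatcher, decides whether the
        // caller is authorized; it may also query the environment, as in
        // Dewan and Choudhary '91 and Groove.
        return op.invoke(authenticatedCaller, args);
    }
}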
                                                                      226
        Meta Access Control
• Who sets the access privileges?
• Convener
  – PlaceWare
  – T 120
• Group ownership, delegation, etc.
  – Dewan & Shen '96


                                    227
 Access vs. Concurrency Control
• Access control
   – Controls whether user is authorized to execute operation
• Concurrency control
   – Controls whether authorized users' actions conflict with
     others' and schedules conflicting actions
• Can share a common mechanism
   – for preventing an operation from being executed
• In T 120 window sharing, the UI can be identical
   – Only one user allowed to enter input at a time
   – UI allows a mediator to give application control to users
   – Whether control is passed to a user may be decided by AC or CC

                                                                228
        Shared Layer & Concurrency
                  Control
• Higher level sharing implies more concurrency
    – screen sharing
        • Cannot distinguish between different kinds of input
        • Multiple input events make an operation
        • Must prevent concurrency
    – window sharing (add to NetMeeting and PlaceWare?)
        • Can allow concurrent input in multiple windows
        • Probably will not conflict
             – Same Word document in multiple windows
    – Whiteboard
        • Can allow concurrent editing of different objects
        • Probably will not conflict
             – Object and connecting line
• App-specific concurrency control may be needed

                                                                229
   Pessimistic vs. Optimistic CC
• Two alternatives to serializable transactions
• Pessimistic
   – Prevent conflicting operation before it is executed
   – Implies locks and possibly remote checking
• Optimistic
   – Abort conflicting operation after it executes
   – Involves replication, checkpointing/compensating
     transactions
   – Not actually implemented in collaborative systems
      • Aborting user (vs. programmed) transactions not acceptable
   – Merge and optimistic locking variations
                                                                     230
                             Merging
• Like optimistic
   – Allow operation to execute without local checks
• But no aborts
   – Merge conflicting operations
   – E.g. insert(1,a) || insert(2,b) becomes insert(1,a); insert(3,b) at one
     site and insert(2,b); insert(1,a) at the other (see the sketch below)
• Serializability not guaranteed
   – Strange results possible
   – E.g. concurrent dragging of an object in PlaceWare whiteboard
• App-specific
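A minimal Java sketch of the transformation behind the example above: a remote insert is shifted past a concurrent local insert at an earlier (or equal) position, so both sites converge without aborts. This shows the flavor of operation transformation, not any particular system's algorithm.

// Illustrative only: transform a remote text insert against a concurrent
// local insert so that insert(1,a) || insert(2,b) converges at both sites.
class Insert {
    final int pos;   // 1-based position at which the character is inserted
    final char ch;

    Insert(int pos, char ch) { this.pos = pos; this.ch = ch; }

    // Returns the remote insert adjusted for a concurrent local insert.
    Insert transformedAgainst(Insert local) {
        return (local.pos <= this.pos) ? new Insert(this.pos + 1, this.ch) : this;
    }
}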


                                                                                 231
               App-Specific Merging
•   Text editor specific
     – Sun '89, …
•   Tree editor specific
     – ECSCW '03
     – Apply to XAF and Office Apps?
•   Programmer writes merge procedures
     – Per file in Coda (Kistler and Satya '92)
     – Per object in Rover (Joseph et al. '95; also PlaceWare)
     – Per relation in Bayou and Longhorn WinFS (Terry et al. '95, terry@microsoft.com)
•   Programmer creates merge specifications (Munson & Dewan '94)
     –   Object decomposed into properties
     –   Properties merged according to a merge matrix (sketched below)
     –   Less flexible but easier to use
     –   Accommodates all existing policies
     –   Implement in C# objects?
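A hedged Java sketch of the merge-matrix idea: each shared object is decomposed into properties, and a matrix keyed by the kinds of local and remote change picks a resolution for each property. The names and the particular resolutions are assumptions, not the Munson & Dewan '94 specification.

// Illustrative only: a per-property merge matrix.
enum Change { NONE, UPDATED, DELETED }
enum Resolution { KEEP_LOCAL, TAKE_REMOTE, ASK_USER }

class MergeMatrix {
    private final Resolution[][] matrix =
        new Resolution[Change.values().length][Change.values().length];

    void set(Change local, Change remote, Resolution r) {
        matrix[local.ordinal()][remote.ordinal()] = r;
    }

    Resolution resolve(Change local, Change remote) {
        Resolution r = matrix[local.ordinal()][remote.ordinal()];
        return r != null ? r : Resolution.ASK_USER;   // default: interactive resolution
    }
}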


                                                                                   232
    Synchronous vs. Asynchronous
              Merge
• Synchronous
   –   Efficient
   –   Less work to destroy
   –   Can accommodate simple-minded merge
   –   Replicated operation transformation
• Asynchronous
   – Opposite
   – Centralized, merge procedures and matrices
• Faster computers allow complex synchronous merging
   – Centralized merge matrix
• Merging of drawing operations still an issue

                                                       233
          Merging vs. Locking
• Requires replication
   – With its drawbacks and advantages
• Requires high-level local operations
   – Cannot work with replicated window-based systems
• Conflicts cannot be merged automatically
   – Require an interactive resolution phase
• No lock delays
• More concurrency
• Disconnected (asynchronous) interaction
                                                        234
       Response time for locks
• Central lock information
  – Well known site knows who has locks
  – Delay in contacting the site
• Distributed lock information (T 120)
  – Lock information sent to all sites
  – More traffic but less delay
• Still delay in getting lock from current
  holder
                                             235
          Optimistic Locking
• Greenberg et al. '94
  – In general remote checking can take time
  – Allow operation as in optimistic until lock
    response received
  – At that point continue operation or abort
     • Abort damage potentially small
• Office 12 scenarios
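A hedged Java sketch of the optimistic-locking behavior described above: operations are applied tentatively while the (possibly slow) remote lock request is outstanding; if the lock is granted the tentative work is kept, and if it is denied the small amount of tentative work is undone. Types and names are assumptions.

import java.util.ArrayList;
import java.util.List;

// Illustrative only: apply operations optimistically until the lock reply arrives.
class OptimisticLock {
    private final List<Runnable> undoLog = new ArrayList<>();
    private boolean granted = false;
    private boolean denied  = false;

    // Apply an edit immediately; remember how to undo it until the lock is confirmed.
    void tentativeEdit(Runnable apply, Runnable undo) {
        if (denied) return;              // lock already refused: stop accepting edits
        apply.run();
        if (!granted) undoLog.add(undo);
    }

    // Called when the (remote) lock response finally arrives.
    void onLockReply(boolean lockGranted) {
        if (lockGranted) {
            granted = true;
            undoLog.clear();             // tentative edits become permanent
        } else {
            denied = true;               // abort: damage is potentially small
            for (int i = undoLog.size() - 1; i >= 0; i--) undoLog.get(i).run();
            undoLog.clear();
        }
    }
}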

                                                  236
                          Floor Control
• Host only (T 120)
    – Person hosting app has control
    – Usually convener
• Mediated ( T 120)
    – Anyone can request the floor.
    – One or more of the other users have to agree (especially the current floor holder)
    – Can pass control to another, if latter accepts
• Facilitated
    – Facilitator distributes floor (PlaceWare)
    – Special case of mediated when floor passed through facilitator
• Unconditional grant (T 120)
    – Anyone can take current floor by clicking
    – Special case of mediated where no user has to agree.
• End user negotiation to decide on policy interactively ( T 120)
    – Interoperability solution works
• API to set and implement policy programmatically (T 120)
                                                                                 237
 Fine-Grained Concurrency Control
• Provide an API (T 120); a sketch appears below
   –   Allocate/de-allocate token
   –   Test
   –   Grab exclusively/non-exclusively
   –   Release
   –   Request/give token
• Munson and Dewan '96
   – Lock hierarchical object properties
   – Associate lock tables with properties
   – Hierarchical locking
• Office 12 scenarios use fine-grained locks
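A hedged Java sketch of a token interface offering the operations listed above; the names are illustrative and do not correspond to the actual T 120 (MCS) token API.

// Illustrative only: fine-grained concurrency control exposed as tokens.
interface TokenService {
    int  allocateToken();                              // create a new token
    void deallocateToken(int tokenId);
    boolean test(int tokenId);                         // is it currently held?
    boolean grab(int tokenId, boolean exclusive);      // returns false if unavailable
    void release(int tokenId);
    void request(int tokenId, String requestingUser);  // ask the current holder
    void give(int tokenId, String receivingUser);      // pass it to another user
}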

                                               238
            Interoperability
• Cannot make assumptions about remote
  sites
• Important in collaboration because one non-conforming
  site can prevent adoption of collaboration technology
• Devise “standard” protocols for various
  collaboration aspects to which specific
  protocols can be translated

                                           239
Examples of Collaboration Aspects

• Codecs in media (SIP)
• Window/Frame-based sharing
  –   Caching capability for bitmaps, colormaps.
  –   Graphics operations supported
  –   Bits per pixel
  –   Virtual desktop size


                                                   240
   Layer and Standard Protocols
• Easier to agree on a lower-level layer
• Every computer has a framebuffer with similar
  properties.
• Windows are less standard
   – WinCE and Windows not same
• Toolkits even less so
   – Java Swing and AWT
• Data in different languages and types
   – Interoperation very difficult

                                                  241
              Data Standard
• Web Services
  – Everyone converts to it
• Object properties based on patterns
  translated to Web services?
• XAF



                                        242
          Multiple Standards
• More than one standard can exist
  – With different functionality/performance
• How to negotiate?
• Same techniques can be used to negotiate
  user policies
  – E.g. which form of concurrency control or
    coupling

                                                243
Enumeration/Selection Approach
• One party proposes a series of protocols
   – m=audio 4006 RTP/AVP 0 4
   – a=rtpmap:0 PCMU/8000
   – a=rtpmap:4 GSM/8000
• Other party picks one of them
   – m=audio 4006 RTP/AVP 0 4
   – a=rtpmap:4 GSM/8000

                                             244
   Extending to multiple parties
• One party proposes a series of protocols
• Other responds with subsets supported
• Proposing party picks some value in
  intersection.
• Multiple rounds of negotiation



                                             245
              Single-Round
• Assume
  – Alternative protocols can be ordered into levels,
    where support for protocol at level l indicates
    support for all protocols at levels less than l
• Broadcast level and pick min of values
  received


                                                   246
         Capability Negotiation
• Protocol not named but function of capabilities
   – Set of drawing operations supported.
• Increasing levels can represent increasing
  capability sets.
   – Sets of drawing operations
• Increasing levels can represent increasing
  capability values
   – Max virtual desktop size

                                                    247
        Uniform Local Algorithm
• Apply same local algorithm at all sites to choose
  level and hence associated “collapsed” capability
  set
   – Min
      • Of capability set implies an AND
      • Bits per pixel, drawing operation sets
   – Max
      • Of boolean values implies an OR
      • Virtual desktop size
   – Something else based on # and identity of sites
     supporting each level
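A minimal Java sketch of the uniform local algorithm: every site applies the same deterministic fold over the broadcast values, so all sites converge on the same collapsed capability without further rounds. The helper names are assumptions.

import java.util.List;

// Illustrative only: each site runs the same local collapse.
class CapabilityNegotiation {

    // Min of levels: supporting level l implies support for all lower levels,
    // so the minimum is a capability everyone has (an AND of capability sets).
    static int collapseLevels(int ownLevel, List<Integer> receivedLevels) {
        int level = ownLevel;
        for (int l : receivedLevels) level = Math.min(level, l);
        return level;
    }

    // Max of boolean values implies an OR; a numeric max (e.g. virtual
    // desktop size) works the same way with Math.max.
    static boolean collapseFlags(boolean ownFlag, List<Boolean> receivedFlags) {
        boolean flag = ownFlag;
        for (boolean f : receivedFlags) flag = flag || f;
        return flag;
    }
}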

                                                       248
             UI Policy Negotiation
• Can use same mechanism for UI policy negotiation
• Examples
   – Unconditional grant floor control: In T 120, each node can say
       •   Yes
       •   No
       •   Ask Floor Controller
       •   Yes < Ask Floor Controller < No
       •   Use min of this for least permissive control
    – Sharing control: in many systems each node can say:
       • Share scrollbar
       • Not share < share
       • Use min for least permissive sharing

                                                                      249
               Office Apps
• Multiple versions of office apps exist
• Use similar scheme for negotiating
  capabilities of office apps in conference
  – pdf capability < viewer < full app
  – Office 10 < Office 11 < Office 12




                                              250
Conversion to Standard Protocols
• May need to convert a richer protocol to a "lowest
  common denominator" protocol with fewer capabilities
• Alternatively, may not wish to reduce everyone to the lowest common
  denominator and may instead do per-site conversion
• Drawing operations to bitmaps in T 120
• Fine-grained locks to floor control in Dewan and
  Sharma '91

                                                     251
                              Basic Firewall

[Figure: a protected site behind a firewall sends through an unprotected
proxy, which opens a connection and makes calls to the communicating site
and relays replies back.]

• Limit network communication to and from protected sites
• Do not allow other sites to initiate connections to protected sites
• Protected sites initiate connections through proxies that can be closed
  if problems arise
• Can get back results
   – Bidirectional writes
   – Call/reply
                                                                 252
                Protocol-based Firewall

       protected site                     • May be restricted to
                reply


                                            certain protocols
                                  sip
open




                                            – HTTP
         unprotected proxy                  – SIP
                           http
                    call
       open




                                    sip




  communicating site


                                                               253
         Firewalls and Service Access

 protected user                        • User/client at protected site.
                                       • Service at unprotected site.
           reply


                                       • Communication and
                                         dataflow initiated by
                                         protected client site
    unprotected proxy                     – Can result in transfer of data
                            http-rpc

                                            to client and/or server
                                       • If no restriction on protocol
               call
  open




                      rpc




                                         use regular RPC
unprotected service                    • If only HTTP provided,
                                         make RPC over HTTP
                                          – Web services/Soap model
                                                                         254
          Firewalls and Collaboration
[Figure: two protected users, each behind a firewall, trying to open
connections to each other.]

• Communicating sites may all be protected
• How do we allow opens to a protected user?
                                             255
              Firewalls and Collaboration
[Figure: two protected users each open a connection to an unprotected
forwarder for the duration of the session; the forwarder relays the messages
they send, and the connections are closed when the session ends.]

• Session-based forwarder
• Protected site opens a connection to a forwarder site outside the firewall
  for the session duration
• Communicating site also opens a connection to the forwarder site
• Forwarder site relays messages to the protected site
• Works well if unrestricted write/write is allowed and used
• How to support RPC and higher-level protocols in both directions?
• What if only a restricted protocol is allowed?
                                                                     256
          RPC in both directions
• Would like RPC to be invoked by and on
  protected site (via forwarder).
• With two one-way RPCs
  – Proxies generated separately
  – Create independent channels opened by each party
  – Implies forwarder opens connection to protected site.
• PlaceWare two-way RPC
  – Proxies generated together
  – Use single channel opened by (client) protected site.
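A hedged conceptual sketch, in Java, of two-way RPC over the single connection the protected (client) site opened: every message on the channel is tagged as a call or a reply, so either end can act as caller or callee without the forwarder ever opening a connection back through the firewall. The wire format and names are assumptions, not PlaceWare's actual implementation.

// Illustrative only: both ends run the same loop over the one client-opened channel.
class TwoWayRpcChannel {
    enum Kind { CALL, REPLY }

    static class Message {
        Kind kind;
        long id;            // matches a REPLY to its outstanding CALL
        String operation;   // meaningful only for CALLs
        Object payload;     // arguments for a CALL, result for a REPLY
    }

    void onMessage(Message m) {
        if (m.kind == Kind.CALL) {
            handleIncomingCall(m);     // we are the callee for this message
        } else {
            completePendingCall(m);    // we are the caller; unblock the waiter
        }
    }

    void handleIncomingCall(Message m)  { /* dispatch m.operation, send a REPLY with m.id */ }
    void completePendingCall(Message m) { /* match m.id to an outstanding call, deliver m.payload */ }
}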

                                                            257
         Restricted Protocol
• If only a restricted protocol is allowed, then layer
  communication on top of it as in the service solution
• Adds overhead.
• Groove and PlaceWare try unrestricted first
  and then HTTP


                                            258
    Restricted protocols and data to
             protected site
• HTTP does not allow data flow to be initiated by
  unprotected site
• Polling
   – Semi-synchronous collaboration
• Blocked gets (PlaceWare); a loop is sketched below
   – Blocked server calls in general in one-way call model
   – Must refresh after timeouts
• SIP for MVC model
   – Model sends small notifications via SIP
   – Client makes call to get larger data
   – RPC over SIP?
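A minimal Java sketch of the blocked-get (long-poll) loop: the protected client holds a GET open until the server has a notification or the request times out, then immediately re-issues it, as the slide describes. The URL is hypothetical.

import java.io.*;
import java.net.*;

class BlockedGetClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://forwarder.example.com/notifications");  // hypothetical endpoint
        while (true) {
            HttpURLConnection c = (HttpURLConnection) url.openConnection();
            c.setReadTimeout(30_000);            // must refresh after timeouts
            try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()))) {
                String notification = in.readLine();   // blocks until the server replies
                if (notification != null) handle(notification);
            } catch (SocketTimeoutException e) {
                // nothing arrived before the timeout; loop and re-issue the get
            }
        }
    }

    static void handle(String n) { System.out.println("notification: " + n); }
}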

                                                             259
           Firewall-unaware clients
• Would like to isolate specific apps from worrying about protocol choice and
  translation.
• PlaceWare provides RPC
    – Can go over HTTP or not
• Groove components use SOAP to communicate
    – Apps do not communicate directly – they just use shared objects and don't
      define new ones
• UNC system provides standard property-based notifications to the
  programmer and allows them to be delivered as:
    –   RMI
    –   Web service
    –   SIP
    –   Blocked gets
    –   Protected site polling
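A hedged Java sketch of the standard property-based notification the slide mentions: the application implements one listener interface and the infrastructure chooses how to deliver the notifications (RMI, web service, SIP, blocked gets, or polling). The interface is illustrative, not the UNC system's actual API.

// Illustrative only: transport-neutral property-change notifications.
interface PropertyChangeSubscriber {
    // objectId names the shared object and property the changed attribute;
    // newValue may be the small value itself or a hint to pull the full data.
    void propertyChanged(String objectId, String property, Object newValue);
}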


                                                                             260
          Forwarder & Latency
[Figure: two protected users communicate through an unprotected forwarder.]

• Adds latency
   – Can have multiple forwarders bound to different areas (Webex)
   – Adaptive based on firewall detection (Groove)
       • Try to open directly first
       • If that fails because of a firewall, open a system-provided forwarder
       • Asymmetric communication possible
            – Messages from the user go through the forwarder
            – Messages to the user go directly
   – Groove is also a service-based model!
   – PlaceWare always has latency, and it shows
                                                                        261
Forwarder & Congestion Control

[Figure: a protected user, an unprotected forwarder, and another protected
user; the congestion on each hop is different.]

• Breaks congestion control algorithms
   – Congestion between a protected site and the forwarder, which the
     algorithms control, may differ from the end-to-end congestion
   – T 120-like end-to-end congestion control relevant
                                                                262
        Forwarder + Multi-caster
[Figure: protected users communicate through an unprotected forwarder that
also acts as a multicaster.]

• Forwarder can multicast to other users on behalf of the sending user
• Separation of application processing and distribution
   – Supported by PlaceWare, Webex
• Reduces messages on the link to the forwarder
• A separate multicaster is useful even if there are no firewalls
• Forwarder can be a much more powerful machine
   – T 120 provides a multicaster without a firewall solution
• Forwarder can be connected to higher-speed networks
   – In Groove, if a (possibly unprotected) user is connected via a slow
     network, a single message is sent to the forwarder, which then
     multicasts it
• May need a hierarchy of multicasters (T 120), especially for dumbbell
  topologies
                                                                              263
            Forwarder + State Loader
[Figure: protected users connect to an unprotected forwarder that also acts
as a state loader; sites read (pull) state from it.]

• Forwarder can also maintain state in terms of object attributes
• Slow and latecomer sites pull state asynchronously from the state loader
   – Avoids a message from the forwarder to the protected site containing state
   – Alternative to multicast
   – Extra message to the forwarder for pulling adds latency and traffic
   – Each site pulls at its consumption rate
• Works for MVC-like I/O models
   – VNC: framebuffer rectangles
   – PlaceWare: PPT slides
   – Chung & Dewan '98: log of arbitrary input/output events converted to
     object state
• Useful even if there are no firewalls
• Goes against the stateless-server idea
   – State should be an optimization
                                                                     264
       Forwarder + Multicaster + State Loader
[Figure: protected users connect to an unprotected forwarder that combines a
multicaster and a state loader.]

• Multicaster for rapidly changing information
• State loader for slower-changing information
• Solution adopted in PlaceWare
   – Multicast for window sharing
   – State loading for PPT slides
• VNC results show the pull model works for window sharing
• Greenberg '02 shows the pull model works for video
                                                                265
                   Composability
• Collaboration infrastructure must perform several tasks
   – Session management
   – Set up (centralized/replicated/hybrid) architecture
   – I/O distribution
       • filtering
       • Multipoint communication
        • Latecomer and slow user accommodation
   – Access and concurrency control
   – Firewall traversal
• Multiple ways to perform each of these functions
• Implement separate functions in different composable
  modules
• Difficult because they must work with each other
                                                            266
 T 120 Composable Architecture
• Multicast layer
   – Multicast + tokens
• Session Management
   – Session operations + capability negotiation
• Application template
   – Standard structure of client of multicast + session management
• Window-based sharing
   – Centralized architecture for window sharing
   – Uses session management + multicast
• Whiteboard
   – Replicated or centralized whiteboard sharing
   – Uses session management + multicast

                                                                      267
                             T 120 Layers

[Figure: T.120 infrastructure recommendations. User applications, using both
standard and non-standard application protocols, and the node controller sit
on top of the application protocol entities, which include the T.120
application protocol recommendations such as Rec. T.126 (SI) and Rec. T.127
(MBFT) as well as non-standard application protocol entities. These run over
Generic Conference Control (GCC, Rec. T.124), the Multipoint Communication
Service (MCS, Rec. T.122/T.125), and network-specific transport protocols
(Rec. T.123).]
                                                                268
    Composability Advantages
• Can use needed components
  – Just the multicast channel
• Can substitute layers
  – Different multicast implementation
• Orthogonality
  – Level of sharing not bound to multicast
  – Architecture not bound to multicast

                                              269
  Composability Disadvantages
• May have to do much more work.
• T 120 component model
  – Create application protocol entity and relate to
    actual application
   – Create/join multicast channel
• Suite & PlaceWare monolithic model
  – Instantiating an application automatically
    performs above tasks

                                                   270
       Combining Advantages
• Provide high-level abstractions representing
  popular ways of interacting with subsets of
  these components
• E.g. implementing an application protocol entity (APE) for Java applets




                                            271
 Improving T120 Componentization

• Add object abstraction on top of application
  protocol entity
• Web Service
• Object with properties?




                                            272
 Improving T120 Componentization

• Separate input and output sharing
• Some nodes will be input only
   – E.g. PDAs sharing a projected presentation




                                              273
     Using Mobile Computer for Input

[Figure: a shared Program with the UIs of several users; mobile computers are
used to provide input (e.g. polls).]
                                                        274
                        Summary
• Multiple policies for
   –   Architecture
   –   Session management
   –   Coupling, Concurrency, Access Control
   –   Interoperability
   –   Firewalls
   –   Componentization
• Existing systems such as Groove, PlaceWare, NetMeeting
  are not that different, sharing many policies.
• Pros and cons of each policy
• Flexible system possible

                                                       275
Recommendations: window sharing
• Centralized window sharing.
  – Remove expose coupling in window sharing
  – Add window-based access and concurrency control
  – Provide multi-party sharing, through firewalls,
    without extra latency
• Investigate replicated window sharing
  – Will go through firewalls because low
    bandwidth
                                                  276
 Recommendations: model sharing

• Decouple architecture and data sharing
  – Use delegation based model
• Provide a replicated type for XAF tree
  model.
• Use property based sharing to share
  collaboration-unaware C# objects


                                           277
   Recommendations: Multi-Layer
            Sharing
• Allow users to choose the level of sharing
  – Transparently change system (NetMeeting,
    PlaceWare)
  – Provide layer-neutral sharing
• Allow users to select the architecture,
  possibly dynamically
  – From peer to peer to server-based to service
    based depending on single collaborator, local
    multiple collaborators, and remote collaborators
                                                  278
 Recommendations: experiments
• Need more experimental data
  – Sharing different layers
  – Centralized, replicated, and hybrid architectures
• Need benchmarks
  – MSR usage scenarios?




                                                279
Recommendations: Communication
• Use standard Indigo layer, with
  modifications
  – Sending data to protected site
     • Use SIP
     • Provide PlaceWare 2-way RPC
  – Access aware methods
• Add M-RPC
• Build over multicast

                                     280
Recommendations: Communication
     and componentization
• Have a separate stream multicast for
  language neutrality and light weight
• Need M-RPC so it can be mapped to above
  layer




                                        281
  Recommendations: Coupling
• Add externally configurable filtering
  component to determine what, when, and
  who.




                                           282
  Recommendations: Concurrency
           Control
• Support
  –   Various kinds of floor control
  –   Fine-grained token-based control
  –   Optimistic and regular locks
  –   Property-based locking on top
  –   Property-based merging of arbitrary C# types



                                                     283
     Recommendations: Session
          Management
• Build N-party session management on top
  of SIP to get mobility
• Support
  – Implicit and explicit
  – Name-based and invitation-based




                                            284
     Recommendations: Custom
     Collaborative Applications
• Model sharing in existing office
  applications
• Use capability negotiation
• Create shared object type for XAF




                                      285
    Recommendations: Composability

•   Extend T120 component model with
     – Replicated types
     – M-RPC
     – SIP features




                                       286
          Recommendation
• Lots of research in this area
• Use input from research also when deciding
  on new products




                                          287
THE END (The rest are extra slides)




                                      288
Partial Sharing


[Figure: partial sharing of a user interface; some components are uncoupled
across users and others are coupled.]




                  289
Merging vs. Concurrency Control
• Real-time Merging called Optimistic Concurrency Control
• Misnomer because it does not support serializability.
• Related because concurrency control prevents the
  problem that merging tries to fix
   – Collaboration awareness needed
   – User intention may be violated
   – Correctness vs. latency tradeoff
• CC may be
   – floor control: e.g. NetMeeting App Sharing
   – fine-grained: e.g. NetMeeting Whiteboard
       • Selecting an object implicitly locks it.
       • Approach being used in design of some office apps.


                                                              290
       Evaluating Shared Layer and
               Architecture
•   Mixed centralized-replicated architecture
•   Pros and cons of layering choice
•   Pros and cons of architecture choice
•   Should implement entire space rather than single
    points
    – Multiple points
       • NetMeeting App Sharing, NetMeeting Whiteboard,
         PlaceWare, Groove
    – Reusable code
       • T 120
       • Chung and Dewan '01

                                                          291
             Centralized Architecture
[Figure: a single Program at one site, with an output broadcaster and I/O
relayer; I/O relayers at the other sites drive the UIs of Users 1, 2, and 3.]
                                                    292
            Replicated Architecture
[Figure: each user's site runs its own Program replica with an input
broadcaster; each replica drives the local UI of Users 1, 2, and 3.]
                                                    293
                               Limitations
•   In OO system must create new types for sharing
     – No reuse of existing single-user types
     – E.g. in Munson & Dewan '94, ReplicatedVector
•   Architecture flexibility
     – PlaceWare bound to central architecture
     – Replicas in client and server of different types, e.g. VectorClient & VectorServer
•   Abstraction flexibility
     – Set of types whose replication supported by infrastructure automatically
     – Programmer-defined types not automatically supported
•   Sharing flexibility
     – Who and when coupled burnt into shared value
•   Single language assumed
     – Interoperability of structured types very difficult
     – XML-based solution needed



                                                                                        294
Translating Language Calls to SOAP

• Semi-automatic translation from Java & C#
  exists
• Bean objects automatically translated.
• Other objects must be translated manually.
• Could use a pattern- and property-based
  approach to do the translation (Roussev &
  Dewan '00)

                                          295
    Property-based Notifications
• Assume the protected site gets notified (a small amount of
  data) and then pulls the data in response, a la MVC
• Provide standard property-based notifications to
  programmer
• Communicate them using
   –   RMI
   –   Web service
   –   SIP
   –   Blocked gets
   –   Protected site polling
        • Semi-synchronous collaboration
                                                      296
       Shared Layer Conclusion
• Infrastructure should support as many shared
  layers as possible
• NetMeeting/T.120
   – Desktop sharing
   – Window sharing
   – Data sharing (at high cost)
• PlaceWare
   – Data sharing (at low cost)
• Should and can support a larger set of layers at
  low cost (Chung and Dewan '01)

                                                     297
        Classifying Previous Work
• Shared layer
   – X Windows (XTV)
   – Microsoft Windows (NetMeeting App Sharing)
   – VNC Framebuffer (Shared VNC)
   – AWT Widget (Habanero, JCE)
   – Data (Suite, Groove, PlaceWare)
• Replicated vs. centralized
   – Centralized (XTV, Shared VNC, NetMeeting App Sharing,
     Suite, PlaceWare)
   – Replicated (VConf, Habanero, JCE, Groove, NetMeeting
     Whiteboard)
                                                         298
Suite Text Editor




                    299
Suite Text Editor Type
 /*dmc Editable String */
 String text = "hello world";
 Load () {
   Dm_Submit (&text, "Text", "String");
   Dm_Engage ("Text");
 }




                                     300
Multiuser Outline




                    301
                   Outline Type

/*dmc Editable Outline */
typedef struct { unsigned num; struct section *sec_arr; } SubSection;
typedef struct section {
   String Name; String Contents; SubSection Subsections;
} Section;
typedef struct { unsigned num; Section *sec_arr; } Outline;
Outline outline;

Load () {
  Dm_Submit (&outline, "Outline", "Outline");
  Dm_Engage ("Outline");
}




                                                            302
Talk




       303
            Talk Program
/*dmc Editable String */
String UserA = "", UserB = "";
int talkers = 0;
Load () {
  if (talkers < 2) {
    talkers++;
    Dm_Submit (&UserA, "UserA", "String");
    Dm_Submit (&UserB, "UserB", "String");
    if (talkers == 1)
      Dm_SetAttr ("View: UserB", AttrReadOnly, 1);
    else
      Dm_SetAttr ("View: UserA", AttrReadOnly, 1);
    Dm_Engage_Specific ("UserA", "UserA", "Text");
    Dm_Engage_Specific ("UserB", "UserB", "Text"); }
}


                                                 304
                      N-User IM

/*dmc Editable Outline */
typedef struct { unsigned num; String *message_arr; } IM_History;
IM_History im_history;
String message;
Load () {
  Dm_Submit (&im_history, "IM History", "IM_History");
  Dm_SetAttribute ("IM History", "ReadOnly", 1);
  Dm_Engage ("IM History");
  Dm_Submit (&message, "Message", "String");
  Dm_Update ("Message", &updateMessage);
  Dm_Engage ("Message");
}
updateMessage (String variable, String new_message) {
    /* append the newly entered message to the shared, read-only history */
    im_history.message_arr[im_history.num++] = new_message;
}
                                                               305
     Broadcast Methods
[Figure: two replicas, each a stack of Model, View, Toolkit, and Window for
Users 1 and 2; a broadcast method (bm) invoked on one model is forwarded to
and re-executed at the other model.]
                                                  306
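A minimal Java sketch of the broadcast-method idea in the figure above: a method invoked on one model replica is executed locally and forwarded to the peer replicas, which re-execute it against their own state. The names are illustrative.

import java.util.ArrayList;
import java.util.List;

// Illustrative only: replicated models kept consistent by re-executing
// broadcast methods at every replica.
interface ModelReplica {
    void execute(String method, Object[] args);
}

class BroadcastingModel {
    private final List<ModelReplica> peers = new ArrayList<>();
    private final ModelReplica local;

    BroadcastingModel(ModelReplica local) { this.local = local; }

    void addPeer(ModelReplica peer) { peers.add(peer); }

    // Called by the local view; the same call is replayed at every peer.
    void broadcastMethod(String method, Object[] args) {
        local.execute(method, args);
        for (ModelReplica peer : peers) peer.execute(method, args);
    }
}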
      Connecting Applications
• Replicas connected when containing applications
  connected in (collaborative) sessions.
• Collaborative application session created when
  application is added to a conference.
• Conference created by a convener to which others
  can join.
• Management of conference and application
  sessions called conference/session management.

                                                 307
             Mobility Issues
• Invitee registers current device(s) with
  system
• System sends invitation to all current
  devices
• Supported by Groove and SIP



                                             308
      Connecting Applications
• Replicas connected when containing applications
  connected in (collaborative) sessions.
• Collaborative application session created when
  application is added to a conference.
• Conference created by a convener to which others
  can join.
• Management of conference and application
  sessions called conference/session management.

                                                 309
    Synchronization in Replicated Architecture
[Figure: two replicated X clients both start with "abc". User 1's site applies
its local insert(d,1) and then the remote insert(e,2), giving "deabc"; User
2's site applies its local insert(e,2) and then the remote insert(d,1), giving
"daebc". Without synchronization of the concurrent inserts, the replicas
diverge.]
                                        310
          Comparing the Architectures
[Figure: the shared-window architecture (a single App whose I/O is relayed
through pseudo servers and windows at each host, with input fed back) compared
with shared-abstraction architectures. The shared abstraction may be the model
alone or the model plus the view, and it may be centralized at one host or
replicated at every host.]
                                                                311

				