          Layered Multicast for Scalable
           Video Communication over
          Heterogeneous IP Networks
   Jenq-Neng Hwang, Professor



  Department of Electrical Engineering
       University of Washington
          Seattle, WA 98195
        hwang@ee.washington.edu
              Tutorial Outline

!   Introduction and Motivation
!   Scalable Video Coding for Heterogeneous Networks
!   Multimedia Layered Multicast Techniques
!   End-to-End Available Bandwidth Estimation
!   Congestion Control for Layered Multicast
!   Node Adaptive Layered Multicast
!   Conclusion

                                                       2
        Internet Users Growth




•  1B mobile users and 1B Internet users by 2005
   (www.isc.org)
•  More and more new mobile phones and handheld devices
   have (wireless) Internet access (GPRS, WLAN, 3G)
                                                    3
Converged Multimedia Services

         IP Networks




                                4
              Multimedia Signals
•   Text
•   Speech   (8 KHz sampling at 16 bits = 128 Kbps)
•   Audio    (16 KHz sampling at 16 bits = 256 Kbps)
•   Image (B/W and color)
•   Video
•   Graphics & Animation
•   Documents (various formats)
                                    5
       Multimedia Signals and Bitrates

Source              Bandwidth    Sampling Rate         Bits per      Bit Rate
                    (Hz)                               Sample
Telephone voice     200-3,400    8,000 samples/sec     12            96 Kbps
Wideband speech     50-7,000     16,000 samples/sec    14            224 Kbps
Wideband audio      20-20,000    44.1 Ksamples/sec     16/channel    1.41 Mbps
  (2 channels)                                                         (2 channels)
B/W documents                    300 dpi (dots/inch)   1             90 Kb/inch^2
Color image                      512x512 pixels        24            6.3 Mb/image
CCIR-601 (NTSC)                  720x480x30            24            248.8 Mbps
CCIR-601 (PAL)                   720x576x25            24            248.8 Mbps
SIF (standard)                   360x240x30            12            31.1 Mbps
CIF (common)                     352x288x30            12            36.5 Mbps
QCIF (quarter)                   176x144x7.5           12            2.28 Mbps


                                                                                      6
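Each bit rate in the table above is just sampling rate × bits per sample × channels; a quick sanity check in Python (the helper name is ours, not from the tutorial):

```python
def bit_rate(samples_per_sec, bits_per_sample, channels=1):
    """Uncompressed bit rate in bits per second."""
    return samples_per_sec * bits_per_sample * channels

print(bit_rate(8000, 12))                # 96000     -> 96 Kbps  (telephone voice)
print(bit_rate(16_000, 14))              # 224000    -> 224 Kbps (wideband speech)
print(bit_rate(44_100, 16, channels=2))  # 1411200   -> ~1.41 Mbps (2-channel audio)
print(bit_rate(720 * 480 * 30, 24))      # 248832000 -> ~248.8 Mbps (CCIR-601 NTSC)
```

The video rows motivate compression: raw CCIR-601 video runs at more than 2500 times the telephone-voice rate.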
IP Routing: Delay and Loss
[Figure: routers exchange Hello and LSA (link-state advertisement)
messages across an IP network. The end-to-end delay between hosts A
and B is the sum of transmission, propagation, nodal-processing, and
queue-management (queueing) delays.]
                                                                   7
Heterogeneous IP Networks

             !   Adaptive Rate Control
             !   Scalable Coding
                 !   Real-time bandwidth
                     estimation
                 !   Receiver feedback
                 !   Adaptive Multicast control




                                                  8
            IP QoS and Multimedia
•   Quality of Service (QoS) methods trade quality
    against resources to meet constraints on:
    •   availability
    •   delay (latency)
    •   delay variation (jitter)
    •   throughput (average and peak rates)
    •   packet loss
•   QoS was originally developed for network
    communication and has recently been extended to
    multimedia communication.
                                                             9
       IP Unicast vs. Multicast

[Figure: with unicast, the host sends a separate copy of the stream
through the routers to each receiver; with multicast, the host sends
one copy, which the routers replicate toward all group members.]
                                        10
      Multimedia IP Multicast
! Why multicast?
  •   Sending the same data to multiple receivers
  •   Better bandwidth utilization
  •   Less host/router processing
  •   Useful when receivers’ addresses are unknown
! Applications
  • Video/audio conferencing
  • Resource discovery/service advertisement
  • Media streaming and distribution

                                                     11
     IP Multicast Service Model
•  Per IETF RFC 1112, each multicast group is identified
   by a class D IP address
    • Range: 224.0.0.0 through 239.255.255.255
•  Well-known addresses are designated by the Internet
   Assigned Numbers Authority (IANA)
    • Reserved use: 224.0.0.0 through 224.0.0.255
•  Members join and leave a group and signal this
   to the routers via IGMP
•  Multicast routers listen on all multicast addresses and
   use multicast routing protocols (MRP) to manage
   groups
                                                           12
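The address rules above are easy to check programmatically; a minimal sketch using Python's standard ipaddress module (the function names are ours):

```python
import ipaddress

def is_multicast_group(addr: str) -> bool:
    """True if addr is in the class D range 224.0.0.0-239.255.255.255 (RFC 1112)."""
    return ipaddress.IPv4Address(addr).is_multicast

def is_iana_reserved(addr: str) -> bool:
    """True for the reserved block 224.0.0.0 through 224.0.0.255."""
    return ipaddress.IPv4Address(addr) in ipaddress.IPv4Network("224.0.0.0/24")

print(is_multicast_group("239.255.255.255"))  # True
print(is_multicast_group("192.168.1.1"))      # False
print(is_iana_reserved("224.0.0.1"))          # True
```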
       IP Multicast Protocols
Host-to-Router Protocols (IGMP)


                                    Hosts




                                    Routers



Multicast Routing Protocols (MRP)
                                              13
              Tutorial Outline

!   Introduction and Motivation
!   Scalable Video Coding for Heterogeneous
    Networks
!   Multimedia Layered Multicast Techniques
!   End-to-End Available Bandwidth Estimation
!   Congestion Control for Layered Multicast
!   Node Adaptive Layered Multicast
!   Conclusion
                                                14
        Scalable Video Coding
"   Why a scalable video codec?
    "   Compression efficiency
    "   Robustness with respect to packet loss
    "   Adaptation to the changing bandwidth
"   Techniques of scalable video coding
    "   Temporal
    "   Spatial
    "   Signal-to-Noise Ratio (SNR)
    "   Data Partition
    "   Wavelet
    "   Fine Granularity Scalability (FGS)
                                                 15
           Scalability - SNR
•   Encode the residual image with finer quantization

[Figure: the base layer codes the coarsely quantized frame; each
enhancement layer codes the residual between the original and the
previous reconstruction with finer quantization.]
                                                    16
        Scalability - Spatial
•   Encode the residual image at higher resolution

[Figure: the base layer codes a downsampled (1/4-size) frame;
enhancement layers 1 and 2 successively code the residual at
higher resolutions.]
                                                                17
        Scalability - Temporal

•   Encode enhancement layers at a higher frame-rate

[Figure: the base layer carries a low frame-rate sequence; the
enhancement layer carries the intermediate frames.]
                                                     18
           Scalability - Others
"   Data Partition -- Split bit-stream according to DCT/DWT
    coefficients and/or control bits
"   Sub-band Transform such as Wavelet Embedded
    Scalability (EZW, SPIHT, JPEG2000)




                                                          19
Scalability through Embedded
        Bitplane Coding




                               20
Wavelet
based
Scalable
Compression




              21
   Scalability through Set
Partitioning of Wavelet Trees




                                22
        Fine Grain Scalability (FGS)
            •  Can adapt to the available network bandwidth much
               more closely than traditional scalable codecs

[Figure: over time, the FGS transmitted bandwidth tracks the network
bandwidth curve closely, while a traditional scalable video codec can
only step among a few discrete layer rates.]
                                                                      23
       MPEG4 FGS Encoder
[Figure: MPEG-4 FGS encoder block diagram. The base layer is a
baseline MPEG-4 encoder: DCT, quantization (Q), and VLC/entropy
coding, with IQ and IDCT in the prediction loop and intra/inter
motion estimation. The enhancement (FGS) layer takes the residual
between the original DCT coefficients and the dequantized base-layer
coefficients, applies bitplane shift / finds the maximum, forms
bitplanes, and codes them with bitplane VLC; base and enhancement
bit-streams are multiplexed.]
                                                                          24
              MPEG4 SNR-FGS Decoder
[Figure: MPEG-4 SNR-FGS decoder. The base-layer bit-stream goes
through VLD/entropy decoding, inverse quantization (Q-1), IDCT,
motion compensation, and clipping. The enhancement-layer bit-stream
goes through bitplane VLD, bitplane shift, and IDCT; the two results
are summed to produce the decoded video frame.]
                                                                         25
FGS Bit-Plane Coding




                       26
         Packet Loss of Bit-Planes
Transmitted bit-planes:
  MSB4  1,0,0,0,0,0,0,0,0,0,0
  MSB3  0,0,1,0,0,0,0,0,0,0,0
  MSB2  1,0,1,0,0,1,0,1,1,0,0
  MSB1  0,0,0,0,0,1,0,0,0,0,0
  Decoded: 10 0 6 0 0 3 0 2 2 0 0

Lost 1.5 planes:
  MSB4  1,0,0,0,0,0,0,0,0,0,0
  MSB3  0,0,1,0,0,0,0,0,0,0,0
  MSB2  1,0,1,0,0,1,0,0,0,0,0
  MSB1  0,0,0,0,0,0,0,0,0,0,0
  Decoded: 10 0 6 0 0 2 0 0 0 0 0

Lost 2 planes:
  MSB4  1,0,0,0,0,0,0,0,0,0,0
  MSB3  0,0,1,0,0,0,0,0,0,0,0
  MSB2  0,0,0,0,0,0,0,0,0,0,0
  MSB1  0,0,0,0,0,0,0,0,0,0,0
  Decoded: 8 0 4 0 0 0 0 0 0 0 0

How does a server (client) know how many bitplanes
to send (receive)?
                                                               27
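The arithmetic in this example can be reproduced with a small helper (illustrative; a real decoder works on DCT coefficients, but the bit-plane logic is the same). Lost planes are simply treated as all-zero:

```python
NUM_PLANES = 4  # MSB4 down to MSB1 in the example above

def decode_bitplanes(planes):
    """Reconstruct magnitudes from MSB-first bit-planes; missing
    (lost) planes at the bottom are treated as zeros."""
    values = [0] * len(planes[0])
    for plane in planes:
        for j, bit in enumerate(plane):
            values[j] = (values[j] << 1) | bit
    shift = NUM_PLANES - len(planes)   # pad for the lost planes
    return [v << shift for v in values]

msb4 = [1,0,0,0,0,0,0,0,0,0,0]
msb3 = [0,0,1,0,0,0,0,0,0,0,0]
msb2 = [1,0,1,0,0,1,0,1,1,0,0]
msb1 = [0,0,0,0,0,1,0,0,0,0,0]

print(decode_bitplanes([msb4, msb3, msb2, msb1]))  # [10, 0, 6, 0, 0, 3, 0, 2, 2, 0, 0]
print(decode_bitplanes([msb4, msb3]))              # [8, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0]
```

Losing only the least-significant planes degrades coefficient magnitudes gracefully instead of corrupting the stream.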
              Tutorial Outline

!   Introduction and Motivation
!   Scalable Video Coding for Heterogeneous Networks
!   Multimedia Layered Multicast Techniques
!   End-to-End Available Bandwidth Estimation
!   Congestion Control for Layered Multicast
!   Node Adaptive Layered Multicast
!   Conclusion

                                                       28
[Figure: a video sender encodes CIF, 10 fps at 75 Kbps with
DCT-based SNR scalability and sends it to a scalable server, which
forwards the stream to heterogeneous receivers at 75, 50, and
25 Kbps (all CIF, 10 fps).]
                                                         29
              Layered Multicast
"   An effective way to disseminate multimedia data to
    a large number of heterogeneous receivers




                                                         30
                    Layered Multicast
"   Sender-driven ones
    "   Rely on the sender (server) to selectively drop layers or reduce
        rate/layer
    "   SAMM [B. Vickers, C. Albuquerque, and T. Suda, 00]
"   Receiver-driven ones (implicit bandwidth inference)
    "   On spare capacity, join one layer; on congestion, drop one layer
    "   Non-cumulative [J. Byers, M. Luby, M. Mitzenmacher, 01]
    "   Cumulative
         "   RLM [S. McCanne, V. Jacobson, and M. Vetterli, 96]
         "   RLC [L. Vicisano, J. Crowcroft, and L. Rizzo, 98]
         "   Thin-Streams [L. Wu, R. Sharma, and B. Smith, 97]
         "   Delay-based flow control [M. Johanson, 02]
                                                                           31
    Source Adaptive Multi-layer
        Multicast (SAMM)
• In SAMM, video is encoded into several layers,
  each with a unique discarding priority.
• When a network link experiences congestion,
  packets from the lowest-priority layer are
  discarded.
• The source periodically multicasts a “forward
  feedback” control packet; receivers fill it in and
  return a “backward feedback packet” upon
  receiving it.
• The source uses the backward feedback from
  receivers to adjust the number of video layers and
  the encoding rate of each layer.                   32
Feedback Packets in the SAMM




                               33
  Feedback Merger in the SAMM

• Feedback merger: solves feedback implosion by
consolidating information from arriving feedback
packets and routing the merged feedback packet
upstream toward the next feedback merger on the
path to the source.




                                                     34
(a) Source rates: Network-based SAMM. (b) Source rates: End-to-end SAMM.
(c) Source rates: Non-adaptive. (d) Video received: Network-based SAMM.
                                                                           35
         Issues in the SAMM

• Video bit rate can be fine-tuned according to
  the feedback information.

• Trade-off between the amount of feedback
  information and the number of feedback
  mergers.

• Not friendly to competing network traffic.

                                                 36
          Receiver-driven Layer
            Multicast (RLM)
• RLM protocol concepts:
   Source: no active role in the protocol.
   Receivers: on congestion, drop a layer;
              on spare capacity, add a layer.
• When to drop a layer:
  Whenever congestion occurs. Congestion shows up
  explicitly in the data stream as lost packets.
• When to add a layer:
  Join-experiment: carry out active experiments by
  spontaneously adding layers at “well-chosen” times.
                                                         37
Join-Experiment in the RLM
•   Use join-experiments to infer spare capacity
•   A receiver joins a group and measures the
    loss rate over a time interval called the
    decision time. If the loss is too high, the
    receiver leaves the group.




                                                  38
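In code, the decision rule reads roughly as below (a sketch; the 10% loss threshold and 2 s decision time are illustrative parameters, not values fixed by RLM):

```python
LOSS_THRESHOLD = 0.10   # assumed: abort the experiment above 10% loss

def join_experiment(current_layer, measure_loss_rate, decision_time=2.0):
    """Try one more layer; keep it only if the measured loss stays low."""
    candidate = current_layer + 1            # spontaneously add a layer
    loss = measure_loss_rate(decision_time)  # observe over the decision time
    if loss > LOSS_THRESHOLD:
        return current_layer                 # leave the group: experiment failed
    return candidate                         # keep the new layer

print(join_experiment(2, lambda t: 0.01))  # 3: low loss, layer kept
print(join_experiment(2, lambda t: 0.30))  # 2: congestion, layer dropped
```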
      Shared Learning in the RLM
•   To avoid an implosion of join-experiments, the experiments are
    coordinated through shared learning.
•   The experimenting member multicasts a packet to all other receivers
    declaring its intention to try a certain layer.
•   All receivers can then determine whether the experiment caused
    congestion without performing an experiment of their own.




                                                                              39
               Issues in the RLM
• A fixed number of multicast groups.
   • Lack of granularity adaptation
   • Severe quality degradation on packet loss in the
     base layer.
• Slow adaptation to varying network
  bandwidth [Q. Liu, J. N. Hwang, 02]
   • Synchronization is crucial
   • A well-developed protocol is crucial
   • Neither fair nor stable [A. Legout and E.W. Biersack, 00], [R.
     Gopalakrishnan, J. Griffioen, et al., 99]

                                                                  40
           Receiver-driven Layered
          Congestion Control (RLC)
•   Exponential layer data organization (source-generated
    periodic probing bursts)
•   Synchronization packets are sent periodically as flagged
    packets in the encoded stream (no further sending in the
    subsequent period)
•   Receivers are only allowed to perform join-experiments
    immediately after receiving a synchronization packet.




                                                                 41
Simulations of RLC




[Figure: RLC simulation; receivers n2, n3, and n4 converge to
layer subscriptions 2, 1, and 0, respectively.]
                            42
               Issues in the RLC
•   More scalable than the shared-learning algorithm of RLM.
•   Convergence is slow and incurs loss: [A. Legout and E.W.
    Biersack, 00] subscription to the higher layers is
    exponentially slower than to the lower layers.
•   Performance depends strongly on the router queue size.




                                                                 43
                          Thin Streams
•   Thin layers (16 Kbps) to avoid packet loss [Wu, Sharma, and Smith,
    97], since a congested network can buffer more packets.
•   Uses the difference between the expected throughput (assumed
    CBR) and the actual throughput to detect congestion.
•   Thin layers lead to transmission of partial video layers (which
    cannot be used in decoding), and hence poor bandwidth utilization.
•   The requirement that the layers be strictly CBR poses severe
    implementation problems.
•   When a receiver is far (in terms of transmission delay) from the
    source, it can still overflow the bottleneck router queue, so all
    receivers suffer packet loss during a failed join-experiment.
                                                                         44
Simulations of ThinStream




                            45
              Tutorial Outline

!   Introduction and Motivation
!   Scalable Video Coding for Heterogeneous Networks
!   Multimedia Layered Multicast Techniques
!   End-to-End Available Bandwidth Estimation
!   Congestion Control for Layered Multicast
!   Node Adaptive Layered Multicast
!   Conclusion

                                                       46
          Available Bandwidth
"   Link bandwidth
"   End-to-end (bottleneck) bandwidth [static]
"   End-to-end available bandwidth [dynamic]
          Link capacity:  100M        1.5M         600K         10M

          S ------- R1 ------- R2 ------- R3 ------- D

          Cross traffic:  20M         1.2M         100K         5M

                                 End-to-end (bottleneck) bandwidth:
                                 600K (link R2-R3)

                                 End-to-end available bandwidth:
                                 1.5M - 1.2M = 300K (link R1-R2)

                                                                 47
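Reading the figure's two rows of numbers as link capacity and cross-traffic load (our interpretation, which makes the 600K/300K answers consistent), the two quantities can be computed as:

```python
# Assumed reading of the figure: first row = link capacity,
# second row = cross-traffic load on that link (rates in bps).
capacity = {"S-R1": 100e6, "R1-R2": 1.5e6, "R2-R3": 600e3, "R3-D": 10e6}
cross    = {"S-R1": 20e6,  "R1-R2": 1.2e6, "R2-R3": 100e3, "R3-D": 5e6}

# Static bottleneck: the smallest link capacity on the path.
bottleneck = min(capacity.values())                        # 600 Kbps (R2-R3)
# Dynamic available bandwidth: the smallest unused capacity.
available = min(capacity[l] - cross[l] for l in capacity)  # 300 Kbps (R1-R2)

print(bottleneck, available)  # 600000.0 300000.0
```

Note the two minima can fall on different links: the bottleneck is R2-R3, but the tight (least-available) link is R1-R2.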
                End-to-End Bandwidth (bottleneck
                   link bandwidth) Estimation
•   Packet-pair technique: without cross traffic, the
    dispersion of the arrival times of two back-to-back
    packets reflects the path bottleneck [K. Lai and M. Baker, 00]
•   Active packets
     •  Bprobe [R.L. Carter and M.E. Crovella, 96]
     •  Pathrate [C. Dovrolis, P. Ramanathan, and D. Moore, 01]
•   Passive packets
     •  [V. Paxson, 97]
     •  [K. Lai and M. Baker, 99]
                                                                              48
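The packet-pair relation is simply capacity = packet size / dispersion; a toy sketch (the numbers are illustrative):

```python
def packet_pair_estimate(packet_size_bits, dispersion_sec):
    """Bottleneck-capacity estimate from one packet pair: with no cross
    traffic, the second back-to-back packet trails the first by its own
    transmission time on the narrowest link, so C = L / dispersion."""
    return packet_size_bits / dispersion_sec

# e.g. 1500-byte packets arriving 8 ms apart -> a 1.5 Mbps bottleneck
print(packet_pair_estimate(1500 * 8, 0.008))  # 1500000.0
```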
Packet-Pair Layered Multicast

•   PLM: uses packet pairs to infer (available)
    bandwidth [A. Legout and E.W. Biersack, 00], [S. Keshav, 91]
•   No join-experiment required
•   Stable and fair, with faster convergence
•   Requires fair queuing [J.C. Bennett and H. Zhang, 97]
•   The packet-pair technique is not very accurate [C.
    Dovrolis, P. Ramanathan, and D. Moore, 01]
•   Packet train (a sequence of packet pairs)
     •   cprobe [R.L. Carter and M.E. Crovella, 96], pipechar [G. Jin, G.
         Yang, et al., 01]

                                                                             49
Algorithm
 of PLM




            50
Simulations
  of PLM




              51
Simulations
  of PLM




              52
              Delay Trend Detection
•   The one-way delay (OWD, {Di}) shows an increasing trend
    when the offered load exceeds the available bandwidth along the path




                                                                  53
             Delay Trend Detection in
                    PathLoad
•   Pathload [M. Jain and C. Dovrolis, 02]: iterative measurement,
    non-intrusive
•   Uses identical packet sizes and pre-processing (a median filter)
•   The trend-detection algorithm in Pathload has poor
    resolution (a binary 1-or-0 decision) and slow convergence
     • PCT (Pairwise Comparison Test)
         • Fragile to zigzag delay measurements
     • PDT (Pairwise Difference Test)
         • Fragile to large jumps in the delay measurements
                                                                           54
          Simulations of PathLoad




Tight Link: the link with the minimum available bandwidth
along the path, which also determines the end-to-end avail-bw.
                                                                 55
Simulations of PathLoad




                          56
Packet Delay Dispersion




                          57
       End-to-End Delay Trend Model

•   k : packet number         •   N : total number of links
•   Dk : one-way delay of     •   Ci : bandwidth of link i
    packet k                  •   dik : queuing delay of packet k at link i
•   Lk : packet size          •   σi : processing delay at link i


        Dk = ( Σ(i=1..N) 1/Ci ) · Lk + Σ(i=1..N) dik + Σ(i=1..N) σi


        P{ Dl > Dm } > 0.5    for all l > m
                                                                           58
                Bi-Weight Moving Average

    MDk = MDk-1 · (1 - α) + Dk · α

    MD0 = (1/4) · Σ(l=0..3) Dl

•   α = 1/4 for the fast MA; α = 1/12 for the slow
    MA [Q. Liu, J. N. Hwang, ICME 2003]
•   Test: what percentage of delay
    measurements have a fast MA
    larger than the slow MA?
     • The threshold is set to 65%
•   Computation overhead is O(N)
                                                 59
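The fast/slow moving-average test above can be sketched as follows; the synthetic delay sequences are illustrative:

```python
def trend_test(delays, threshold=0.65):
    """Return (fraction of samples whose fast MA exceeds the slow MA,
    trend detected?), following MD_k = MD_{k-1}(1-a) + D_k*a with
    a = 1/4 (fast) and a = 1/12 (slow), seeded by MD_0 = mean of the
    first four samples."""
    md0 = sum(delays[:4]) / 4.0
    fast = slow = md0
    hits = 0
    for d in delays[4:]:
        fast = fast * (1 - 1/4) + d * (1/4)     # reacts quickly
        slow = slow * (1 - 1/12) + d * (1/12)   # lags behind
        hits += fast > slow
    frac = hits / len(delays[4:])
    return frac, frac >= threshold

rising = [10 + 0.5 * k for k in range(40)]   # steadily increasing OWDs
flat   = [10.0] * 40                         # no trend
print(trend_test(rising)[1], trend_test(flat)[1])  # True False
```

On an increasing delay sequence the fast MA sits above the slow MA almost everywhere, so the fraction clears the 65% threshold; on a flat sequence it never does.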
         Trend Detection Examples




Test result: 92%; Increase trend   Test result: 39%; No trend


                                                                60
     Iterative Measurement of
       Available Bandwidth

•   The sender uses a token bucket [S. Tanenbaum, 96]
    to maintain the target rate
•   The initial probe rate is acquired by the packet-pair
    technique
•   The idle time is chosen so that the overall rate of
    probe packets is less than 10% of the measured
    available bandwidth


                                                          61
Flow Chart for Bandwidth Estimation

Parameters: probe_rate, probe_size, packet_size

  1. Initialize low_rate = 0 and high_rate = MAX.
  2. Acquire the initial probe rate (packet-pair estimate).
  3. Send a probe stream to the receiver at probe_rate.
  4. The receiver runs trend detection and sends the trend result
     and the observed rate back to the sender.
  5. If an increasing trend is detected:
        high_rate = probe_rate;  probe_rate = observed_rate
     Otherwise:
        low_rate = probe_rate;  probe_rate = (low_rate + high_rate) / 2
  6. If high_rate - low_rate < resolution, quit the iteration;
     otherwise wait out the idle period and return to step 3.

Iteration result: [low_rate, high_rate]
                                                                                              62
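Stripped of the probe mechanics, the flow chart is a binary search; a simplified sketch (we replace the observed-rate refinement with plain bisection and treat trend detection as a black-box predicate):

```python
def estimate_avail_bw(has_increase_trend, initial_rate, max_rate, resolution):
    """Binary-search the available bandwidth: probe at `rate` and use the
    receiver's trend verdict to narrow the [low_rate, high_rate] bracket."""
    low, high, rate = 0.0, max_rate, initial_rate
    while high - low >= resolution:
        if has_increase_trend(rate):   # probe congested the path: too fast
            high = rate
        else:                          # path absorbed the probe: can go faster
            low = rate
        rate = (low + high) / 2
    return low, high

# Toy path whose true available bandwidth is 300 Kbps:
low, high = estimate_avail_bw(lambda r: r > 300e3, 500e3, 10e6, 10e3)
print(low <= 300e3 <= high)  # True
```

Because the bracket halves on every probe, convergence to a 10 Kbps resolution takes only a handful of iterations, matching the 3-14 probes reported later.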
    Experiment and Verification
•   Experiment environment
    •   T1 (1.536 Mbps)
    •   ADSL (400 Kbps)
•   SNMP [J. Case, M. Fedor, et al., 90] monitoring
    •   Collect the bytes transmitted through the
        bottleneck link at one-second intervals
    •   Results on the following pages are computed over
        2-second intervals
•   Resolution
    •   10 Kbps
    •   Converges in 10-30 s; 3-14 probes
                                                          63
   Experiment Result: ADSL Path
                       workstation to mmc.ee (ADSL)
                       1 192.168.1.1 (192.168.1.1) 0.389 ms 0.382 ms 0.317 ms
                       2 bdsl.66.14.120.193.gte.net (66.14.120.193) 7.405 ms 7.893 ms 7.819 ms
                       3 4.24.53.73 (4.24.53.73) 7.946 ms 9.266 ms 9.291 ms
                       4 p7-0.evrtwa1-cr1.bbnplanet.net (4.24.125.117) 9.156 ms 8.421 ms 8.503 ms
                       5 p3-0.evrtwa1-br1.bbnplanet.net (4.24.5.101) 8.206 ms 8.877 ms 8.339 ms
                       6 so-4-2-0.sttlwa1-hcr1.bbnplanet.net (4.24.4.77) 9.674 ms 9.918 ms 10.234 ms
                       7 pos6-2.hsa2.Seattle1.Level3.net (209.0.227.129) 10.205 ms 10.012 ms 9.493 ms
                       8 so-2-0-0-0.mp2.NewYork1.Level3.net (209.247.9.89) 9.687 ms 10.687 ms 10.213 ms
                       9 so-7-0-0.gar2.Seattle1.Level3.net (64.159.1.166) 11.174 ms 10.429 ms 10.749 ms
                       10 unknown.Level3.net (209.247.84.38) 10.197 ms 11.383 ms 11.486 ms
                       11 uwbr2-GE0-0.cac.washington.edu (198.107.150.52) 11.165 ms 9.924 ms 9.994 ms
                       12 hoover-GE1-1.cac.washington.edu (140.142.150.9) 12.169 ms 10.196 ms 10.136 ms
                       13 mmc.ee.washington.edu (128.95.28.18) 10.662 ms 11.657 ms 10.227 ms




Bottleneck: ADSL; packet size: 256 bytes

                                                                                                        64
Experiment Result: T1 Path
  workstation to mmc.ee (T1)
   1 <10 ms <10 ms <10 ms 209.101.242.113
   2 <10 ms <10 ms <10 ms 206.135.85.177
   3   16 ms <10 ms <10 ms seri0-1-0-12-0.stl-m100.gw.epoch.net [206.135.200.161]
   4   15 ms   16 ms <10 ms pnwgp-six.pnw-gigapop.net [198.32.180.84]
   5   16 ms <10 ms <10 ms uwbr2-ge0-0.cac.washington.edu [198.107.150.52]
   6 <10 ms <10 ms     15 ms hoover-ge1-1.cac.washington.edu [140.142.150.9]
   7   15 ms <10 ms <10 ms mmc.ee.washington.edu [128.95.28.18]




         Bottleneck: T1; packet size: 1392 bytes
                                                                                    65
    Testing of Specific Available
     Bandwidth along the Path
               Correct result: av-bw is lower than 160Kbps




Bottleneck: ADSL; packet size: 192 bytes;
noise: 50 bytes


                Confused by noise           Incorrect result
                                                               66
               Tutorial Outline

!   Introduction and Motivation
!   Scalable Video Coding for Heterogeneous Networks
!   Multimedia Layered Multicast Techniques
!   End-to-End Available Bandwidth Estimation
!   Congestion Control for Layered Multicast
!   Node Adaptive Layered Multicast
!   Conclusion

                                                       67
    Bandwidth Inference Congestion
            (BIC) Control
•   BIC works for an arbitrary layer data
    organization [Q. Liu, J. N. Hwang, ICME 2003]
    •   bwi : bit rate up to layer i
    •   pbwi : bit rate up to layer i during probes

          pbwi = bwi+1   (i ≠ N)
          pbwN = bwN

    Example:
    bwi  = (32, 64, 128, 256, 512, 1024) Kbps
    pbwi = (64, 128, 256, 512, 1024, 1024) Kbps
                                                                  68
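In code, the probe-period rates follow mechanically from the layer rates:

```python
def probe_rates(bw):
    """pbw_i = bw_{i+1} for i < N; pbw_N = bw_N."""
    return [bw[i + 1] for i in range(len(bw) - 1)] + [bw[-1]]

print(probe_rates([32, 64, 128, 256, 512, 1024]))
# [64, 128, 256, 512, 1024, 1024]
```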
               Source Layer Data Organization

[Figure: source layer data organization. In a normal period each
layer carries its own data; in a probe period, layer 2 data is
forced onto layer 1 (and in general layer i+1 data onto layer i),
so each multicast group temporarily runs at the next layer's rate.]

                                                69
                Source Layer Probing

•   Two parameters
    •   Probe size P (with average packet size S)
         •   P = 50
    •   Probe interval T
         •  The additional data rate is less than 10% of the
            original layer data rate
         •  Probe duration for layer i:
                 ti = P·S / pbwi = P·S / bwi+1
         •  Total probe duration:
                 t = Σ(i=1..N-1) ti = Σ(i=1..N-1) P·S / pbwi
                   = Σ(i=1..N-1) P·S / bwi+1
         •  Probe interval:
                 T = max( t, 10·t1 )
                                                                      70
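Plugging in the example layer set makes the timing concrete (the 500-byte average packet size S is an assumption for illustration; P = 50 is from the slide):

```python
P = 50                  # probe size in packets (from the slide)
S = 500 * 8             # assumed average packet size: 500 bytes, in bits
bw = [32e3, 64e3, 128e3, 256e3, 512e3, 1024e3]   # cumulative layer rates, bps

# t_i = P*S / bw_{i+1}: layer i probes at the next layer's rate
t = [P * S / bw[i + 1] for i in range(len(bw) - 1)]
total = sum(t)                    # total probe duration
T = max(total, 10 * t[0])         # probe interval T = max(t, 10*t1)

print(round(t[0], 4), round(total, 4), round(T, 4))  # 3.125 6.0547 31.25
```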
        Spare Bandwidth Inference

•   The receiver detects the delay trend during the
    probe period.
    •   If there is no increasing trend, the path probably has
        spare capacity to support one more layer.
    •   If there is an increasing trend, the path probably does
        not have enough spare capacity for one more layer, and
        the router will absorb the additional probe packets
        (normally about P/2).
•   The probe also serves as a synchronization point
    for all receivers
    •   Scales with the number of receivers.
                                                                      71
          Join/Leave Adaptation

"   To join one more layer
    "   No increase trend during probe period, and
    "   no packet loss recently
"   To leave current layer
    "   Observed packet loss exceeds a predefined
        threshold
"   No join-experiment required

                                                     72
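The rules above fit in one decision function (a sketch; the 5% loss threshold is illustrative, and the slide's "different thresholds per layer" refinement is omitted):

```python
def adapt(current_layer, increase_trend, recent_loss, max_layer,
          loss_threshold=0.05):
    """One join/leave decision per probe period."""
    if recent_loss > loss_threshold and current_layer > 1:
        return current_layer - 1        # leave: observed loss too high
    if not increase_trend and recent_loss == 0 and current_layer < max_layer:
        return current_layer + 1        # join: spare capacity, no recent loss
    return current_layer                # otherwise hold

print(adapt(3, increase_trend=False, recent_loss=0.0, max_layer=6))  # 4 (join)
print(adapt(3, increase_trend=True,  recent_loss=0.1, max_layer=6))  # 2 (leave)
```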
                Join/Leave Adaptation

•   Inter-session fairness
    •   Different thresholds for different layers
         •   Delay-increase-trend detection threshold and packet-loss threshold
         •   Higher layers use stricter thresholds
•   Startup phase (faster convergence)
    •   The startup phase lasts from when a receiver joins the base
        layer until it first detects network congestion; the receiver
        then moves to the steady phase
    •   In the startup phase, the receiver does not wait for the probe
        period to join the next layer
    •   A stricter trend threshold is applied during the startup phase
        to avoid over-subscription
                                                                                  73
   Simulations




Simulation Topology


                      74
          Stability, Scalability and
               Heterogeneity




"   Two sets of bandwidth layers bwi
    "   (32,64,128,256,512,1024) Kbps
    "   (24,50,120,240,360,500,760,1400) Kbps
•   Joining time: 5 + 14*(i-1) seconds for
    receiver Di

                                                75
            Simulations: No Background Traffic




(32,64,128,256,512,1024,2048) Kbps   (24,50,120,240,360,500,760,1400) Kbps



                                                                        76
       Simulations: On/Off Background Traffic
          200 Kbps CBR at R1-R2 between [100s,200s], [300s,400s], [500s,600s]
          100 Kbps CBR at R2-D3, R2-D4, R2-D5, R2-D6 between [100s,200s],
          [300s,400s], [500s,600s]




                                     (32,64,96,128,160,192,224,256,288,320,352,
(32,64,128,256,512,1024,2048) Kbps
                                     384,416,448,480,512,544,576,608,640) Kbps

                                                                                77
           How Good Is the Performance? (D1)




                                     (32,64,96,128,160,192,224,256,288,320,352,
(32,64,128,256,512,1024,2048) Kbps
                                     384,416,448,480,512,544,576,608,640) Kbps

                                                                            78
         Exponential Background Traffic
        Average 300 Kbps VBR at R1-R2
        Average 200 Kbps CBR at R2-D3, R2-D4, R2-D5, R2-D6
        Link capacities are increased by 300 Kbps for R1-R2, 200
        Kbps for R2-D3, R2-D4, and 100 Kbps for R2-D5, R2-D6




(24,50,120,240,360,500,760     (32,64,96,128,160,192,224,256,288,320,352,
,1400) Kbps                    384,416,448,480,512,544,576,608,640) Kbps

                                                                      79
         How Well Is the Performance (D1)




(24,50,120,240,360,500,760,1400)   (32,64,96,128,160,192,224,256,288,320,352,
Kbps                               384,416,448,480,512,544,576,608,640) Kbps

                                                                          80
             Inter-Session Fairness
"   Delay Increase Trend Thresholds: higher thresholds (e.g., > 80%) for
    lower layer, and lower thresholds (e.g., > 55%) for higher layers.
"   Packet Loss Threshold: higher thresholds (e.g., > 20%) for lower
    layers, and lower thresholds (>0%) for higher layers.
"   Inter-Session Loss Credit: the count of consecutive intervals (min{5s,
    time[50 packets]}) that a receiver encounters packet loss.
"   Multiple Layer Drop: to simulate the multiplicative decrease behavior
    of TCP in congestion avoidance phase.




                                                                             81
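The mechanisms listed on the previous slide can be combined into a per-layer drop decision. The sketch below is a hypothetical illustration; the function name, the layer split, and the loss-credit count of 3 are assumptions, while the threshold values follow the examples given above:

```python
# Hypothetical sketch of the drop decision: stricter (higher) thresholds
# protect lower layers, and a loss credit of several consecutive lossy
# intervals gates the drop. Names and the split point are assumptions;
# the 80%/55% trend and 20%/0% loss values follow the slide's examples.

def should_drop_layer(layer, num_layers, delay_trend, loss_rate, loss_credit):
    """Return True if the receiver should leave its current top layer."""
    lower_half = layer < num_layers // 2
    trend_threshold = 0.80 if lower_half else 0.55
    loss_threshold = 0.20 if lower_half else 0.0
    congested = delay_trend > trend_threshold or loss_rate > loss_threshold
    # Only act after several consecutive intervals with packet loss.
    return congested and loss_credit >= 3
```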
            Inter-Session Fairness

"   Fairness among 3 BIC sessions with the same Type-
    3 sessions or 3 different source types (1,2,3), with
    bottleneck 900Kbps (exponential background)




                                                           82
                    TCP Fairness

!   BIC is “conservative” at joining a layer and “reluctant” at
    dropping a layer: BIC is good at keeping its current
    throughput but slow at acquiring bandwidth from existing
    competing traffic.
!   BIC is responsive to competing TCP connections, both
    TCP Newreno and TCP Vegas.
!   For short round-trip times, BIC acquires less bandwidth
    when competing with TCP Newreno connections. For long
    round-trip times, BIC acquires more bandwidth than TCP
    Newreno.
!   BIC has good heterogeneity performance and achieves the
    same throughput for receivers with different delays.
                                                                  83
        TCP Fairness: Competition
              Performance

"   S1 source for BIC session, with D1 & D2 receivers; S2 source
    for TCP session with D3 receiver.
"   Routers use RED queue, with queue size 100.




                                                                   84
         TCP Fairness: Competition
               Performance



Link Delay: 10ms
TCP Newreno, BIC starts first.          TCP Newreno, TCP starts first.




TCP Vegas, BIC starts first.             TCP Vegas, TCP starts first.     85
        TCP Fairness: Competition
              Performance



Link Delay: 200ms
TCP Newreno, BIC starts first.          TCP Newreno, TCP starts first.




TCP Vegas, BIC starts first.             TCP Vegas, TCP starts first.     86
              Tutorial Outline

!   Introduction and Motivation
!   Scalable Video Coding for Heterogeneous Networks
!   Multimedia Layered Multicast Techniques
!   End-to-End Available Bandwidth Estimation
!   Congestion Control for Layered Multicast
!   Node Adaptive Layered Multicast
!   Conclusion

                                                       87
        Issues of Layered Multicast

!   How to Improve Congestion Control (in high
    bandwidth-delay & conventional environments):
    !   Small queues
    !   Almost no drops
!   How to Improve Fairness
!   Better Scalability (no per-flow state)
!   More Flexible Bandwidth Allocation: max-min
    fairness, proportional fairness, differential
    bandwidth allocation, …

                                                    88
     XCP: An eXplicit Control Protocol




       1. Congestion Controller
       2. Fairness Controller

“Congestion Control for High Bandwidth-Delay Product Networks,”
Dina Katabi, Mark Handley, and Charlie Rohrs, ACM Sigcomm’02
                                                                  89
How Does XCP Work?

[Figure: the sender writes its Round Trip Time and Congestion Window
into the packet's Congestion Header; a router along the path fills in
the Feedback field, e.g., Feedback = +0.1 packet.]

Congestion Header
                          90
How Does XCP Work?

[Figure: a downstream router recomputes the feedback (values shown:
-0.1 and +0.3 packet) and overwrites the Feedback field when its
value is smaller, so the packet ends up carrying the bottleneck's
feedback.]
                               91
          How Does XCP Work?

[Figure: on receiving the acknowledgment, the sender updates:]

 Congestion Window = Congestion Window + Feedback

Routers compute feedback without any per-flow state.
                                                    92
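The sender-side update above is essentially one line. The sketch below is an illustration only; the packet size and the one-packet floor are assumptions, not details from the slide:

```python
# Minimal sketch of the XCP sender update: cwnd = cwnd + feedback.
# PACKET_SIZE and the one-packet floor are illustrative assumptions.
PACKET_SIZE = 1000  # bytes

def apply_feedback(cwnd_bytes, feedback_bytes):
    # Router-filled feedback may be negative; never go below 1 packet.
    return max(cwnd_bytes + feedback_bytes, PACKET_SIZE)
```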
     How Does an XCP Router Compute the Feedback?

  Congestion Controller
  Goal: matches input traffic to link capacity & drains the queue
  Looks at aggregate traffic & queue
  Algorithm (MIMD): aggregate traffic changes by ∆
      ∆ ~ Spare Bandwidth
      ∆ ~ − Queue Size
      So, ∆ = α d_avg Spare − β Queue

  Fairness Controller
  Goal: divides ∆ among flows to converge to fairness
  Looks at a flow's state in the Congestion Header
  Algorithm (AIMD):
      If ∆ > 0 ⇒ divide ∆ equally among flows
      If ∆ < 0 ⇒ divide ∆ among flows proportionally to their current rates
                                                                          93
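The congestion controller's aggregate feedback can be sketched directly from the formula above. The α = 0.4 and β = 0.226 defaults are the stability values from the XCP paper; the rate units (bytes/sec) and names are assumptions for illustration:

```python
# Sketch of XCP's aggregate feedback: delta = alpha*d_avg*S - beta*Q,
# where S is the spare bandwidth and Q is the persistent queue size.
# alpha=0.4, beta=0.226 are the stability constants from the XCP
# paper; units (bytes/sec, seconds) are assumed for illustration.

def aggregate_feedback(capacity, input_rate, queue_bytes, d_avg,
                       alpha=0.4, beta=0.226):
    spare = capacity - input_rate  # spare bandwidth S
    return alpha * d_avg * spare - beta * queue_bytes
```

With spare bandwidth the feedback is positive; a standing queue pulls it negative, draining the queue.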
    Node Adaptive Congestion Control

"   Fairness is an important element in congestion
    control.
"   Max-min fairness: maximization of the minimum
    throughput of the flows sharing a single bottleneck
    (most RLM’s have some limitation on this).
"   Solution:
    Reallocate all the traffic centrally in congestion
    control enabled nodes, with receiver’s subscription

                                                          94
       Node-Adaptive Congestion Control

[Figure: inside the node, a Max-Min BW Share module drives a Layer
Filter and a Bandwidth Fine-Tuner, supported by Traffic Information
Adaptation and Queue Management. Inputs: the video traffic of
interest and congestion control signaling.]
Congestion Control Signaling                                        95
        To Achieve Max-Min Fairness

At the end of time t:

    r = α · bw_avail − β · Q / t
    bw_avail = c − R_input

    If r > 0:  ∆r_x = r / N
    If r < 0:  ∆r_x = r · r_x / Σ_i r_i

    new r_x = r_x + ∆r_x
                                                            96
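The reallocation rules above amount to a few lines. This sketch (function name assumed) returns the new per-flow rates given the spare rate r:

```python
# Sketch of the max-min reallocation rule: positive spare rate r is
# split equally (r/N); negative r is apportioned proportionally to
# each flow's current rate r_x. Names are illustrative assumptions.

def reallocate(r, rates):
    n, total = len(rates), sum(rates)
    if r > 0:
        deltas = [r / n] * n                       # equal shares
    else:
        deltas = [r * rx / total for rx in rates]  # proportional cuts
    return [rx + d for rx, d in zip(rates, deltas)]
```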
      Bandwidth Analysis in Node

[Figure: a router with input rate R_i, output rate R_o, capacity c,
queue length Q, and packet drops.]

When the initial queue length is zero:

    R_input = R_output + (Q + Num_drop) / t

    If Q > 0:  R_output = c,  R_input = c + (Q + Num_drop) / t
    If Q = 0:  R_output = R_input

In general:

    r = bw_avail − Q_end / t = bw'_avail − (Num_drop − Q_init) / t

                                                                    97
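The node's available-rate estimate follows directly from the drop-based form of the derivation above; the function name and units below are illustrative assumptions:

```python
# Sketch of the node's available-rate estimate:
# r = bw_avail' - (Num_drop - Q_init) / t, where bw_avail = c - R_input.
# Names and units (Kbps-equivalent rates, seconds) are assumptions.

def available_rate(capacity, input_rate, num_drop, q_init, t):
    bw_avail = capacity - input_rate
    return bw_avail - (num_drop - q_init) / t
```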
       Simulations

[Figure: simulation topology with competing TCP and UDP flows.]

"   Q size = 80.
"   BW (2-3) = 2000 Kbps.
"   Multicast 200 Kbps per
    group
                                        98
Conclusion




             99

				