Promoting the Use of End-to-End
Congestion Control
&
Random Early Detection
of Network Congestion
• Congestion collapse forms
• Characterization of flows
• Incentives for the use of End-to-End
  congestion control
• Identifying flows to regulate
• Router role in the network
  Congestion Collapse Forms
• classical congestion collapse
• congestion collapse from undelivered packets
• fragmentation-based congestion collapse
• congestion collapse from increased control traffic
• congestion collapse from stale or unwanted packets
Avoiding Congestion Collapse
 From Undelivered Packets
Two ways:
• maintaining an environment
  characterized by end-to-end
  congestion control
• maintaining a virtual-circuit-style
  environment, which guarantees that packets
  accepted into the network are delivered
 Characterization of Flows
• TCP-Friendly flow: arrival rate does not
  exceed the arrival rate of a conformant
  TCP connection in the same circumstances
• Responsive flow: changes its arrival rate
  proportionally in response to an increased
  packet drop rate
• Flow using disproportionate bandwidth:
  a disproportionate share is defined as a
  significantly larger share than that of other
  flows in the presence of suppressed demand
       Aggressive Flows
• Unresponsive flows
• Responsive but not TCP-Friendly flows
• Flows using disproportionate bandwidth
Note: an aggressive use of TCP is
  possible as well
 Creating the Right Incentives
A solution: TCP connections cooperating
  to share scarce bandwidth in times of
  congestion
• User-based incentive mechanism (?)
• Network-based incentive mechanism –
  identifying flows to regulate
Creating the Right Incentives
           – cont.
Policies for regulating high-bandwidth flows:
• regulate high-bandwidth flows in times of
  congestion when they violate the expectations of
  end-to-end congestion control, by being unresponsive
  or by exceeding the bandwidth used by a conformant
  TCP flow under the same circumstances
• regulate any flow determined to be using a
  disproportionate share of bandwidth in times of
  congestion
The regulated flows can be restricted either
 to the same bandwidth as “well-behaved”
 flows, or to less bandwidth.
 Identifying Flows to Regulate
• A single flow is defined by the source &
  destination IP address and port, so each TCP
  connection is a single flow
• unaddressed issues: encryption and packet
  fragmentation may interfere with routers'
  fine-grained classification of packets into single flows
• the approaches discussed are designed to detect a
  small number of misbehaving flows in an
  environment characterized by end-to-end
  congestion control; they would not be effective as a
  substitute for such control.
Identifying Flows to Regulate – cont.
• Routers have some mechanism for
  efficiently estimating the arrival rate of
  high-bandwidth flows
• A router only needs to consider regulating
  flows using significantly more than their
  “share” of the bandwidth in the presence
  of suppressed demand from other best
  effort flows
Identifying flows that are
   “not TCP-Friendly”
Assuming TCP is characterized by:
1. reducing its window at least by half upon
     congestion indication
2. increasing its window by a constant rate
     of at most one packet per RTT
it is possible to determine a maximum overall
     sending rate for a TCP connection, given the
     packet drop rate, packet size and RTT
        TCP-Friendly Test
A TCP-friendly flow satisfies the following bound:

    T ≤ (1.5 · √(2/3) · B) / (R · √p)

T – maximum sending rate for a TCP connection (Bps)
p – packet drop rate
B – maximum packet size (bytes)
R – fairly constant minimum RTT, including queueing
   delays (sec)
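The bound above can be computed directly; this is a minimal sketch, where the function name and the example parameter values are illustrative, not from the paper:

```python
import math

def tcp_friendly_rate(B, R, p):
    """Upper bound on a conformant TCP's sending rate, in bytes/sec.

    B: maximum packet size in bytes
    R: minimum round-trip time in seconds (including queueing delay)
    p: packet drop rate (0 < p <= 1)
    """
    return (1.5 * math.sqrt(2.0 / 3.0) * B) / (R * math.sqrt(p))

# e.g. 1500-byte packets, 100 ms RTT, 1% drop rate
rate = tcp_friendly_rate(B=1500, R=0.1, p=0.01)
```

A flow whose measured arrival rate stays below this value would pass the test; note the rate falls only as 1/√p, so halving a flow's share requires quadrupling its drop rate.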
    Limitations of the TCP-
          Friendly Test
• Can only be applied to a flow at the
  level of granularity of a single TCP
  connection
• Difficulties: determining the maximum
  packet size and minimum RTT for a flow
• Test measurements should be taken
  over a sufficiently large time interval
   Identifying Unresponsive Flows
if the steady-state drop rate increases
  by a factor x,
the presented load from a high-bandwidth
  flow should decrease by a
  factor close to x or more.
otherwise: the single flow or
  aggregated traffic can be considered
  unresponsive
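The check above can be sketched as a simple comparison; the function name and the `slack` tolerance are assumptions of this sketch, not part of the paper's test:

```python
def flags_unresponsive(p_old, p_new, rate_old, rate_new, slack=0.8):
    """Flag a high-bandwidth flow as unresponsive.

    Per the slide: if the steady-state drop rate rose by a factor x,
    a responsive flow's arrival rate should fall by roughly a factor
    of x or more.  `slack` loosens the comparison so small measurement
    noise does not trigger the flag (an assumption of this sketch).
    """
    if p_new <= p_old:
        return False              # drop rate did not increase: test not applicable
    x = p_new / p_old
    expected = rate_old / x       # rate should drop to ~rate_old/x or below
    return rate_new > expected / slack
```

For example, a flow whose drop rate doubles but whose arrival rate stays flat would be flagged, while one that halves its rate would not.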
Requirements & Limitations
 of Unresponsiveness Test
• requires estimates of flow arrival rate and
  packet drop rate over long time intervals
• the test does not detect all flows that do
  not respond to congestion, but is only
  applied to high bandwidth flows
• when the packet drop rate remains
  relatively constant, no flows will be
  identified as unresponsive
• this test is less straightforward for flows
  with variable demands
A Variant of the Unresponsive
        Flow Test
instead of applying this test passively,
  another option is to purposefully
  increase the packet drop rate of a
  high-bandwidth flow in times of
  congestion, and observe whether the
  arrival rate of the flow on that link
  decreases appropriately.
If the only tests deployed on a path
  were tests for responsiveness, this
  could be an incentive for flows to
  start with an overly high bandwidth:
  such a flow could reduce its sending rate,
  be considered responsive, and still
  receive a larger share of the bandwidth
  than other competing flows.
  Identifying Flows Using
Disproportionate Bandwidth
Let n be the number of flows with packet
  drops in the recent reporting interval.
we define a flow as using a disproportionate
  share of the best-effort bandwidth if its
  fraction of the aggregate arrival rate
  exceeds ln(3n)/n
we define a flow as having a high arrival rate
  relative to the level of congestion if its
  arrival rate is greater than c/√p Bps for
  some constant c.
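Both criteria translate directly into code; this is an illustrative sketch, and the function names and the constant `c` used in the example are assumptions, not values from the paper:

```python
import math

def uses_disproportionate_share(flow_rate, aggregate_rate, n):
    """Flow's fraction of the aggregate arrival rate exceeds ln(3n)/n,
    where n is the number of flows with packet drops in the interval."""
    return flow_rate / aggregate_rate > math.log(3 * n) / n

def high_rate_for_congestion(flow_rate, p, c):
    """Arrival rate greater than c/sqrt(p) Bps, for some constant c."""
    return flow_rate > c / math.sqrt(p)

# e.g. with n = 10 dropping flows, the threshold fraction is
# ln(30)/10 ≈ 0.34 of the aggregate arrival rate
flagged = uses_disproportionate_share(flow_rate=40.0,
                                      aggregate_rate=100.0, n=10)
```

Note that ln(3n)/n shrinks as n grows, so with many competing flows a much smaller fraction of the link already counts as disproportionate.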
Disproportionate Bandwidth
 Test Limitations & notes
• estimating the level of unsatisfied
  demand is problematic
• A router may restrict the bandwidth
  of such flows even if they are known
  to be using conformant TCP
  congestion control
     Router Response To
      Aggressive Flows
routers should be free to restrict the
  bandwidth of best-effort flows
  determined to be “aggressive” in
  times of congestion.
Restrictions should be removed in times
  of no congestion, or upon indication that
  the flow has appropriately reduced its
  arrival rate
    Router Response To
   Aggressive Flows – cont.
As for flows which use a disproportionate
  share of the bandwidth:
a conservative approach would be to
  limit the restriction of responsive
  flows so that over the long run each
  flow receives as much as the highest-
  bandwidth unrestricted flow
     Alternate Approaches
A deployment, at all congested routers, of
  per-flow scheduling mechanisms (such as
  round robin or fair queuing)
however, per-flow scheduling cannot prevent
  congestion collapse by itself and should be
  used alongside end-to-end congestion control
but, per-flow scheduling motivates flows not
  to use end-to-end congestion control
therefore, it needs other incentives for
  flows to use end-to-end congestion control
Alternate Approaches – cont.
FCFS scheduling is more efficient to
  implement than per-flow scheduling and
  may improve link speed and the number of
  active flows per link
FCFS is considered an optimal algorithm for
  traffic where the long-term aggregate
  arrival rate is restricted by either
  admission control or end-to-end congestion
  control
FCFS allows packets arriving in a small burst
  to be transmitted in a burst, rather than being
  spread out and delayed by the scheduler
Router Role in the Network
Basically there is a limit on how much control
   can be accomplished from the edges of
   the network
Some mechanisms are needed in the routers
   to complete the endpoint congestion
   avoidance mechanisms.
two classes of router algorithms related to
   congestion control:
   1. queue management
   2. scheduling
      Queue Management
“Tail Drop”:
the traditional technique for managing
  router queue length:
• set a maximum length (in packets) for each
  queue
• accept packets until the maximum length
  has been reached
• reject (drop) subsequent incoming packets
  until the queue length decreases
     Tail Drop Drawbacks
• lock-out:
  may allow monopolization of queue
  space by a single connection or a few
  flows
• full queues:
  allows a queue to remain full (or
  almost full) for long periods of time
  Full Queues – A Problem
packets often arrive at routers in bursts.
if the queue is full or almost full, an
  arriving burst will cause multiple
  packet drops.
this may result in synchronization of
  flows throttling back, followed by a
  long period of lowered link utilization,
  reducing overall throughput.
  “Tail Drop” Alternatives
• random drop on full
  when a new packet arrives and queue
  is full - drop a randomly selected
  packet from the queue
• drop front on full
  when a new packet arrives and queue
  is full – drop the packet at the front
  of the queue
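Tail drop and its two alternatives above differ only in which packet is sacrificed when the queue is full. A toy sketch, where the `enqueue` helper and the `policy` names are illustrative, not terms from the papers:

```python
import random
from collections import deque

def enqueue(queue, pkt, max_len, policy="tail-drop"):
    """Enqueue pkt, applying the given full-queue policy.

    Returns the dropped packet, or None if nothing was dropped.
    """
    if len(queue) < max_len:
        queue.append(pkt)
        return None
    if policy == "tail-drop":
        return pkt                            # reject the arriving packet
    if policy == "random-drop-on-full":
        victim = queue[random.randrange(len(queue))]
        queue.remove(victim)                  # evict a randomly chosen packet
        queue.append(pkt)
        return victim
    if policy == "drop-front-on-full":
        victim = queue.popleft()              # evict the packet at the front
        queue.append(pkt)
        return victim
    raise ValueError(policy)
```

Drop-front-on-full delivers the congestion signal on the oldest queued packet, so the sender sees the loss roughly one queueing delay sooner than with tail drop.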
 Active Queue Management
given responsive flows:
a solution to the full-queues problem
is for routers to drop packets before
a queue becomes full.
Active queue management lets routers
control when and how many packets to drop.
such a mechanism can provide:
• a reduced number of dropped packets
• lower delay for interactive service
• lock-out avoidance
    Approaches for Queue
• A router could send a message to the source of
  flow when the queue size exceeds a certain size,
  and outline a possible method for flow control at
  the source
• DECbit – a binary feedback scheme where the
  router uses a congestion indication bit in packet
  headers for feedback about congestion.
• each router at the flow path has upper bound
  indicating congestion and a lower bound indicating
  light load. each router adds its own information
  about its queue size to the packet
  the source node decides on a course of action
  depending on that information
       RED Queue Management
RED (Random Early Detection) – an example of an
  active queue management algorithm, used to keep
  throughput high but the average queue size low
the gateway detects incipient congestion by
  computing the average queue size
it notifies connections of congestion either by
  dropping packets or by setting a bit in packet
  headers
the probability of notifying a particular
  connection to reduce its window size is proportional
  to the bandwidth share it uses
RED is designed to accompany a transport-layer
  congestion control protocol such as TCP
 RED Queue Management – cont.
  RED gateways can be useful even for
  controlling the average queue size in a
  network where the transport-layer
  protocol cannot be trusted to be
  cooperative. RED can also work with:
• rate-based (as opposed to window-based)
  transport-layer protocols
• drop-preference algorithms
• separate queues for realtime and
  non-realtime traffic
           Basics of RED
Transient congestion is accommodated by a
  temporary increase in the queue size.
Longer-lived congestion is reflected by an
  increase in the computed average queue size.
Randomized selection of packets to drop (or
  mark) means high-bandwidth flows have a
  higher probability of receiving a congestion
  notification
          RED vs. DECbit
• RED – the source should reduce its window
  even when only one packet was marked;
  DECbit – the source looks at the fraction of
  marked packets in the last RTT
• RED, as opposed to DECbit, separates the
  congestion detection algorithm from the
  algorithm that sets the congestion-indication
  bit; thus RED is not biased against bursty
  traffic
            RED Algorithm
for each packet arrival:
  calculate the average queue size avg
  if min_th ≤ avg < max_th
    calculate probability p_a
    with probability p_a: mark the arriving packet
  else if max_th ≤ avg
    mark the arriving packet
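The pseudocode above can be sketched as a small class. This is a minimal illustration of the drop/mark decision only: the class name, the EWMA weight `w`, and the default thresholds are assumptions for the example, not the paper's recommended tuning:

```python
import random

class RedQueue:
    """Sketch of the RED marking decision for one output queue."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w = max_p, w
        self.avg = 0.0       # EWMA of the instantaneous queue size
        self.count = 0       # packets since the last mark

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be marked/dropped."""
        # update the exponentially weighted moving average
        self.avg = (1 - self.w) * self.avg + self.w * queue_len
        if self.avg < self.min_th:
            self.count = 0
            return False                      # never mark below min_th
        if self.avg >= self.max_th:
            self.count = 0
            return True                       # always mark at/above max_th
        # between thresholds: mark with a probability that grows with avg
        pb = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        self.count += 1
        pa = pb / max(1 - self.count * pb, 1e-9)   # spread marks out evenly
        if random.random() < pa:
            self.count = 0
            return True
        return False
```

Because the decision uses the average rather than the instantaneous queue size, a short burst passes through unmarked while sustained buildup steadily raises the marking probability.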
             References
• RFC 2309
• Sally Floyd and Kevin Fall,
  “Promoting the Use of End-to-End
  Congestion Control in the Internet”
• Sally Floyd and Van Jacobson,
  “Random Early Detection Gateways
  for Congestion Avoidance”
