
Active Congestion Control Using Available Bandwidth-based Congestion Detection

AGGARWAL A. K., University of Windsor, Canada
BHARADWAJ A. N., Wayne State University, USA
KENT R. D., University of Windsor, Canada


Abstract: - Though the problem of congestion has been studied since the beginning of the Internet, it continues to demand attention. This work proposes an Active Congestion Control (ACC) scheme based on Available Bandwidth-based Congestion Detection (ABCD), which regulates traffic according to network conditions. Changes in the available bandwidth can trigger re-adjustment of the flow rate. We have introduced packet size adjustment at the intermediate router in addition to rate control at the sender node, scaled according to the available bandwidth, which is continuously monitored. To verify the improved scheme, we have extended Ted Faber's ACC work [1] in the NS-2 simulator. With this simulator, we verify ACC-ABCD's gains, such as a reduced number of packet drops and more stable performance. Our tests show that the ACC-ABCD technique yields better results than TCP congestion control, with or without cross traffic.

Key-Words: - Active Networks, Congestion Control, Congestion Avoidance, Available Bandwidth

1. Introduction

The end-to-end approach is considered to be a robust one, and it has served quite well until recently, when researchers started to explore the information available at the intermediate node level. This approach triggered a new field called Active Networks, in which the intermediate nodes have a much larger role to play than that of the naïve nodes. It was expected that with the era of high-bandwidth networks, congestion control would become a historical phenomenon [2]; on the contrary, it continues to be an important task for the networking community. Bhattacharjee [3] regarded congestion control as a problem that was unlikely to disappear in the near future. Over the past few years the active networking community has been suggesting approaches that highlight the importance of network-level computation. The Internet experiences packet losses frequently due to congestion. Widmer et al. state that imbalance in resource allocation should be regarded as the prime cause of congestion [4].

The proposed active congestion control technique presents a new intermediate-router-based congestion control mechanism. It makes use of available bandwidth-based congestion detection to determine the degree of congestion and to formulate its remedial measures. The ACC-ABCD approach uses packet size adjustment at the intermediate router in addition to rate control at the sender.

The congestion control mechanism used by TCP reacts to a single packet loss by halving its congestion window, which causes abrupt changes in the sending rate. This problem is addressed by the Scaled Rate Change (SRC) approach in ACC-ABCD, which improves reliability, as demonstrated by a reduced number of packet drops. We evaluate and validate our approach through simulation using NS-2.

Section 2 provides the background of classical congestion control techniques. In section 3, we survey the work related to congestion control through network-level computation, i.e., Active Congestion Control. Section 4 outlines the proposed ACC-ABCD methodology. The details of the experiments and the test results are given in section 5. The concluding section 6 lists achievements and limitations of our methodology.
2. Classical Techniques

Congestion control and avoidance was first documented thoroughly in [5], which incorporated into the TCP suite congestion-control remedies such as slow start and congestion avoidance. Jain [2] distinguished between two commonly interchanged terms, congestion control and congestion avoidance. According to Jain et al., congestion control is a recovery process from a congested state, whereas congestion avoidance is a prevention mechanism designed to keep the network from entering the congested state. Further, they state, "[t]he congestion avoidance scheme allows a network to operate in the region of low delay and high throughput." While exploring the reasons for congestion, Lotfi [6] said that "[c]ongestion is a mismatch of available resources and the amount of traffic."

Brakmo and Peterson [7] proposed an end-to-end congestion avoidance scheme (which came to be known as TCP Vegas) that increases throughput and decreases packet loss. In other versions of TCP, the size of the congestion window decreases only when a packet is lost. TCP Vegas, on the other hand, tries to anticipate congestion and adjusts its transmission rate accordingly. First, by using measurements of Round Trip Time (RTT), TCP Vegas modulates the congestion window size. Secondly, it uses the RTT estimate to retransmit a dropped segment earlier. Thirdly, it modifies Slow Start by allowing exponential growth only on every other RTT measurement; between the two measurements, the congestion window stays invariant. Golestani [8] formulated the problem of end-to-end traffic as an optimization problem. The Golestani method attempts to avoid congestion while being fair to all the flows in the network; fairness requires restricting every source host to an equal share of gateway bandwidth. Mo and Walrand [9] presented the multiclass closed-fluid model, which is used to justify the existence of a window-based fairness scheme for end-to-end congestion control algorithms. The theory presented in this work is supported by mathematical proof, but no practical implementation was provided.

2.1 ECN marking

Explicit Congestion Notification (ECN) is used as a means of conveying congestion information back to the end systems. ECN was proposed in 1999 by [RFC 2481] and incorporated into the TCP/IP suite in 2001 by [RFC 3168]. Kunniyur [10] proposed a scheme to adjust the ECN marking such that the loss of packets can be minimized. Li [11] discusses TCP-ECN, where bottlenecks mark packets so that packet loss is avoided. TCP uses the AIMD (additive increase, multiplicative decrease) mechanism, which is mainly responsible for rate control. Li et al. state that in traditional congestion-control schemes, packet loss is unavoidable.
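The AIMD rule described above can be written as a one-step window update. The sketch below is illustrative (ours, not code from any of the surveyed papers) and treats an ECN mark the same way as a packet loss, which is how an ECN-capable sender is expected to react:

    # Sketch of TCP's AIMD rule: additive increase of one segment per RTT,
    # multiplicative decrease (halving) on a packet loss or an ECN mark.
    def aimd_update(cwnd: float, loss_or_ecn_mark: bool) -> float:
        """Return the new congestion window, in segments."""
        if loss_or_ecn_mark:
            return max(1.0, cwnd / 2.0)   # multiplicative decrease
        return cwnd + 1.0                 # additive increase, once per RTT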
2.2 Random Early Detection (RED)

Floyd [12] states that RED gateways improve the fairness and performance of TCP traffic. They presented a RED queue-management scheme, which keeps the buffer from overflowing, and infer that RED configuration is still a research issue. Janarthanan [13] states that RED gateways are intended for a network where a transport protocol responds to congestion indications from the network.

2.3 Feedback-Based End-to-End Congestion Control

Congestion control has traditionally depended on the end-to-end adjustment of the flows. Sources adjust their transmission rate in response to feedback from the network nodes [6].

Psounis [14] describes a scheme where centrally managed techniques are replaced by distributed network management. Network management is achieved through the polling of managed devices, which are observed for anomalies. Psounis [14] demonstrated how centrally located management stations initiate a large amount of traffic, which can be suppressed with the technique of active networks. Thus, congestion control was regarded as a necessary part of efficient network management, and managed devices are responsible for feedback. Bhattacharjee [15] describes the idea of a router taking the initiative and restricting the most demanding flows at the time of congestion. This idea gives rise to the use of network-level computation. Santos [16] proposed a packet-marking scheme in which link-level control is exercised so that congestion may not spread to other network nodes.

Thus the classical methods started using and demanding more information from network devices for congestion control.
The methods outlined in the next section require that intermediate nodes provide both information and a computational facility for congestion control.

3. Active Congestion Control

Bhattacharjee [3] proposed the use of active networks for congestion control. He stated that "[a]ctive networking is a natural consequence of the end-to-end argument, because certain functions can be most-effectively implemented with information that is available inside the network." [17]

Only a limited set of functions can be computed at an active network node, and computing these functions may involve state information that persists at the node. Congestion is an intra-network event and is potentially far removed from the application. The time required for congestion notification to propagate back to the sender limits the speed with which an application can self-regulate to reduce congestion, or ramp up when congestion has cleared. Bhattacharjee et al. proposed three methods for processing application-specific data:

    (i) Unit level drop - A packet that specifies one particular function (one particular flow) is dropped; for some time thereafter, every packet that matches the same subset of its labels is also dropped.
    (ii) Buffering and rate control - Putting additional buffering into the switch, which has to monitor the available bandwidth and rate-control the data.
    (iii) Media transformation - An intelligent dropping of data when congestion occurs: transformation of data at the congestion point into a form that reduces the bandwidth but preserves as much useful information as possible.

Active Congestion Control (ACC) [18 and 2] uses router participation in both congestion detection and recovery. Congestion is detected at the router, which also immediately begins reacting to it by changing the traffic that has already entered the network. Incorporating congestion detection as well as recovery at the router reduces the feedback delay, which lends much greater stability to the system. When a router experiences congestion and is forced to discard a packet, the router calculates the new window size that the endpoint would choose if it had instantly detected the congestion. The router then deletes the packets that the endpoint would not have sent and informs the endpoint of its new state. Internal network nodes beyond the congested router see the modified traffic, which has been tailored as though the endpoint had instantly reacted.

Figure 3-1 shows the simulation topology used for ACC experiments.

[Figure 3-1: ACC topology (Ref - [1] pp. 4)]
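The router-side reaction that ACC prescribes can be sketched as follows. This is a minimal illustration of the description above, not Faber's implementation; the queue interface, the flow record and the notify() callback are hypothetical placeholders:

    # Sketch of the ACC router reaction to a drop: compute the window the
    # endpoint would choose had it detected the loss instantly, delete queued
    # packets beyond that window, and inform the endpoint of its new state.
    def acc_on_drop(queue, flow, notify):
        flow.cwnd = max(1, flow.cwnd // 2)   # window the endpoint would pick
        limit = flow.first_unacked + flow.cwnd
        excess = [p for p in queue if p.flow_id == flow.id and p.seq >= limit]
        for pkt in excess:                   # packets the endpoint would not have sent
            queue.remove(pkt)
        notify(flow, flow.cwnd)              # report the new state to the endpoint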
Lemar [19] designed and implemented a reusable congestion control component used as part of a protocol capsule type definition. The capsule is created and sent by the sender node; it requests the intermediate nodes to use the congestion control method while forwarding.

Wang [20] presented an algorithm called FACC (forward active network congestion control) and its performance analysis. The work compares FACC with Tahoe TCP and demonstrates that FACC increases the average throughput of each endpoint, with or without cross traffic. FACC monitors the queue at the intermediate node. It informs the source node and the intermediate nodes on the upstream flow path, creates a filter, and uses Slow Start to ramp back to normal flow.

Gyires [21] proposed an active network algorithm that can reduce the harmful consequences of congestion due to aggregated bursty traffic. When traffic burstiness at a router exceeds a certain threshold and the router's buffer size, the router divides the traffic into independent paths.
When traffic burstiness falls below the threshold, the dispersed paths are collapsed into the original single path. The number of paths depends on the burstiness. The paths run through adjacent routers that have proven capable of taking over dispersed traffic in previous cases with minimal cost.

Cheng [22] presented network-assisted congestion control (NACC). NACC utilizes RTCP packets as the control messages carrying information about the desired transmission rate to use. The intermediate routers adjust this value according to available resources and forward the control messages to the next node. The receiver transmits the updated information, and the sender adjusts its transmission behaviour according to the received information.

3.1 Available Bandwidth Estimation

Cheng [23] used a three-packet probing algorithm for determining available bandwidth for a layered multicast congestion control. This technique was first proposed in [22].

The packet-pair method uses two packets, which are sent out back to back by the intermediate node located at the start of the flow into the bottleneck link. If one is t seconds apart from the other after traversing the bottleneck link,

    t = PacketSize / B_bottleneck

where PacketSize is the size of the second packet and B_bottleneck is the bottleneck bandwidth.
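As a worked illustration of this relation (our sketch; we assume sizes in bytes and bandwidth in bit/s):

    # Packet-pair relation: two back-to-back packets leave the bottleneck
    # spaced t = PacketSize / B_bottleneck apart, so the observed spacing
    # yields an estimate of the bottleneck bandwidth.
    def bottleneck_bandwidth(packet_size_bytes: int, spacing_s: float) -> float:
        """Bottleneck bandwidth in bit/s from the packet-pair spacing."""
        return packet_size_bytes * 8 / spacing_s

    # Example: 1000-byte packets arriving 4 ms apart imply a 2 Mb/s bottleneck.
    assert bottleneck_bandwidth(1000, 0.004) == 2_000_000.0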
4. Available Bandwidth-based Congestion Detection (ABCD)

The available bandwidth is defined as the maximum rate that the path can provide to a flow without reducing the rest of the traffic. As the utilization of the bottleneck link increases, the available bandwidth decreases. Our method uses a non-intrusive technique to estimate the available bandwidth: routine data packets are used to monitor it continuously.

Throughput is directly proportional to packet size; hence, a proper scaling of rate and packet size should help maintain a high throughput in case of congestion. The ABCD approach modulates the packet size based on the flow's share of bandwidth.

The queue size at intermediate nodes can be measured in packets or in bytes. In our system we make use of the byte mode. The queue size is fixed to a certain number of packets, but the mean packet size is set to the original size of the packets, so the buffering capacity of the queue for incipient packets is in bytes.

4.1 ACC-ABCD Algorithm

Step 1. Monitor the available bandwidth of the bottleneck link. The receiver router obtains the available bandwidth from a triplet of data packets as follows:

    B_available = (PacketSize_2 + PacketSize_3) / (t_3 - t_1)          (4.1) [24]

where PacketSize_i (i = 1, 2, 3) is the packet size in bytes and t_i (i = 1, 2, 3) is the arrival time of packet i in seconds.
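Equation (4.1) transcribes directly; in this sketch (ours) sizes are in bytes and times in seconds, so the result is in bytes per second, and multiplying by 8 gives bit/s:

    # Three-packet probe: available bandwidth from the sizes of packets 2 and 3
    # of a back-to-back triplet and the arrival times of packets 1 and 3.
    def available_bandwidth(sizes_bytes, arrival_times_s):
        p1, p2, p3 = sizes_bytes
        t1, t2, t3 = arrival_times_s
        return (p2 + p3) / (t3 - t1)   # bytes per second

    # Example: three 1000-byte packets with t3 - t1 = 8 ms give
    # 250,000 B/s, i.e. 2 Mb/s of available bandwidth.
    assert available_bandwidth([1000, 1000, 1000], [0.0, 0.004, 0.008]) == 250_000.0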
Step 2. Here the link under consideration faces a scarcity of link bandwidth; hence the available bandwidth is the bottleneck resource. We scale the packet size linearly by observing changes in the available bandwidth. We have chosen a threshold of 0.5 Mb/s; at any value below this we start reducing the packet size by 200 bytes for each 0.1 Mb/s.

Step 3. The queue at the sender router is measured in bytes. That means the upper bound of this queue buffer is (number of packets * mean packet size). Each de-queued packet is evaluated against the packet size that step 2 has calculated to be appropriate.
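Steps 2 and 3 can be sketched as follows. The 1000-byte base size matches the endpoints in section 5; the 200-byte lower bound and the 50-packet queue are assumptions of ours, added only to make the sketch self-contained:

    BASE_PACKET_SIZE = 1000    # bytes, as used by the endpoints in section 5
    THRESHOLD_BPS    = 0.5e6   # step 2: start shrinking packets below 0.5 Mb/s
    STEP_BYTES       = 200     # step 2: bytes removed per 0.1 Mb/s of shortfall

    def target_packet_size(available_bw_bps: float) -> int:
        """Step 2: scale the packet size linearly with the available bandwidth."""
        if available_bw_bps >= THRESHOLD_BPS:
            return BASE_PACKET_SIZE
        steps = int((THRESHOLD_BPS - available_bw_bps) / 0.1e6)
        return max(STEP_BYTES, BASE_PACKET_SIZE - STEP_BYTES * steps)

    # Step 3: the queue is bounded in bytes (number of packets * mean packet
    # size), and each de-queued packet is checked against the target size.
    QUEUE_CAPACITY_BYTES = 50 * BASE_PACKET_SIZE   # e.g. a 50-packet queue (assumed)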
                                                            Figure 3-1. This makes it possible to vary the
Step 4. The rate needs to be scaled in accordance with the new packet size. We estimate this rate change using

    r = (1 - e^(-T/K)) * (S / T) + e^(-T/K) * r_old          (4.2) [24]

where

    T = Δt = (δt_1 + δt_2)/2, the mean inter-arrival spacing,
    δt = enqueue(t) - dequeue(t), i.e., the inter-arrival time,
    S = the change in the size of the packet, and
    K is a constant (between 100 and 500).

The spacing of two packets (δt) at the queue provides a measure of the amount of traffic in the link, i.e., the congestion level.

When the link is congested, the fair rate r is computed such that the aggregate incoming rate equals the link capacity.
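Equation (4.2) is an exponentially weighted estimate in the style of the rate estimator of [24]: it blends the instantaneous rate S/T with the previous estimate. A direct transcription (our sketch; the default K is an arbitrary value from the stated range):

    import math

    # Scaled Rate Change: blend the instantaneous rate S/T with the old
    # estimate, weighted by e^(-T/K); a larger K gives smoother adaptation.
    def scaled_rate(r_old: float, S: float, T: float, K: float = 300.0) -> float:
        """S: change in packet size (bytes); T: packet inter-arrival time (s)."""
        w = math.exp(-T / K)
        return (1.0 - w) * (S / T) + w * r_old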
Step 5. This process of observing a drop or increase in the available bandwidth continues for each triplet received at the other end of the link under consideration.

The three-packet probing algorithm uses three packets to estimate the available bandwidth:

    B_available = (PacketSize_2 + PacketSize_3) / (ta_3 - ta_1)

where PacketSize_2 and PacketSize_3 are the sizes of the second and third packets, respectively, and ta_1, ta_3 are the arrival times of the first and the third packet, respectively.

Our method uses the three-packet algorithm to estimate the available bandwidth. We use the configuration of Fig 3-1 to test our algorithm and to compare it with other algorithms.

The ACC-ABCD system does not wait until a packet is dropped to start congestion control. It tries to take pre-emptive action by monitoring the bandwidth. If the available bandwidth falls below a specified threshold, the system starts modulating the flow so that congestion may not occur. This eliminates the problem of low throughput due to slow start. The system should be stable because the changes are carried out from the point of congestion back to the sender. The delay in taking action that arises when only the sender can act is avoided, since the intermediate node, which monitors the bandwidth, is able to begin taking the action.

5. Experiments, Testing and Analysis

TOOLS: NS-2 was used for the experiments, along with a few of the available tools: Network Animator, Trace-graph, NANS-2, and Nscript from George Washington University.

SIMULATION TOPOLOGY: We have used the dumbbell topology, with senders and receivers on either side of a single bottleneck link, as depicted in Figure 3-1. This makes it possible to vary the bandwidth-delay product of link 1-2, which is traversed by all the TCP flows, without affecting the cross-traffic.

SIMULATION DATA: The simulation experiments are run 30 times, and the data we report are mean values over those 30 trials. To be able to see the differences among the 30 trials, we seed the simulator's random number generator with the current time at each invocation.

The bulk traffic sources send data continuously throughout the simulation. All links from an endpoint to a router have a delay of 10 ms and a bandwidth of 10 Mb/s, except the bottleneck link, which has a bandwidth of 2 Mb/s and a delay of 10 ms. All endpoints use 1000-byte packets. The cross-traffic endpoints are uncontrolled endpoints, sending on-off traffic with a burst and idle time of 2.5 sec and a sending rate of 100 kb/s. Varying the delay on the link from router 1 to router 2 changes the bandwidth-delay product that the bulk sources see. The simulations vary the delay between 100 and 300 ms; this range was selected because it is the high end of common round-trip times in the Internet. Each simulation lasts 200 seconds.

5.1 Verification of the ABCD Formula

We verified equation (4.1) [23] for the calculation of the available bandwidth by sending CBR traffic and measuring the available bandwidth at an intermediate router. Each simulation ran for 200 seconds. The measured available bandwidth differed from the set value by no more than 6.0%.
5.2 Verification of Variation of Packet and Window Size with Available Bandwidth

The packet size has to scale down when the available bandwidth falls below the threshold value. Fig 5-1 shows that the packet size varies as required. In NS-2 the window size is calculated in packets; therefore, if we change the packet size, the window size should automatically change in proportion. Fig 5-2 and Fig 5-3 verify this result. In Fig 5-3, the gradual increase in the window size at particular moments depicts the effect of the additive increase algorithm of TCP. It can be observed that after the halving of the window in case of a packet drop (point 61 in Figure 5-3), the window size does not change with the packet size until it moves out of the slow start process.

This shows that ABCD along with TCP provides a stable environment, which is able to respond to an increase or decrease in network capacity.

[Figure 5-1: Available bandwidth vs. packet size. Axes: packet size (bytes) and available bandwidth (Mb/s) over time.]

[Figure 5-2: Window size and packet size scaling with time. Axes: window size and packet size (bytes) over time (sec).]

[Figure 5-3: Window size, packet size and available bandwidth over time (sec).]

We have performed simulations with link delays of 100 ms, 200 ms and 300 ms on link 1-2 of Fig 3-1, giving three delay-bandwidth products. As the delay-bandwidth product increases, the number of dropped packets decreases to 0 for ABCD. Table 1 shows the results for the 100 ms delay. The great improvement in the number of packets dropped is clearly brought out. The throughput decreases by 5.5% because the method does not allow the available bandwidth to fall to zero. Hence the loss of packets is avoided at the cost of a marginal decrease in throughput.

                              TCP             ACC             ABCD
    Number of dropped pkts    886             716             24
    Throughput of Flow 1      150.03  Kb/s    150.553 Kb/s    223.886 Kb/s
    Throughput of Flow 2      149.258 Kb/s    139.952 Kb/s    136.566 Kb/s
    Throughput of Flow 3      158.602 Kb/s    154.455 Kb/s    135.052 Kb/s
    Throughput of Flow 4      151.898 Kb/s    159.37  Kb/s    136.708 Kb/s
    Throughput of Flow 5      152.345 Kb/s    172.455 Kb/s    137.133 Kb/s
    Throughput of Flow 6      154.011 Kb/s    136.664 Kb/s    141.039 Kb/s
    Throughput of Flow 7      158.358 Kb/s    165.67  Kb/s    134.342 Kb/s
    Throughput of Flow 8      149.055 Kb/s    152.384 Kb/s    141.231 Kb/s
    Throughput of Flow 9      149.014 Kb/s    151.369 Kb/s    128.492 Kb/s
    Throughput of Flow 10     152.752 Kb/s    142.345 Kb/s    126.808 Kb/s
    Total throughput          1525.32 Kb/s    1525.22 Kb/s    1441.26 Kb/s

    Table 1: TCP-ACC-ABCD Comparison
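As a quick arithmetic check of Table 1 (our calculation): the 5.5% throughput cost quoted above follows directly from the totals, and the drop counts show the scale of the improvement for this run.

    tcp_total, abcd_total = 1525.32, 1441.26     # Kb/s, totals from Table 1
    drops_tcp, drops_abcd = 886, 24              # packets, from Table 1
    print((tcp_total - abcd_total) / tcp_total)  # ~0.055, the 5.5% throughput cost
    print(drops_tcp / drops_abcd)                # ~36.9x fewer drops in this run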
6. Conclusions

For congestion control, this paper proposes a method of active congestion control based on monitoring of the available bandwidth. The method rests on the assumption that the bottleneck resource is bandwidth. It uses the adjustment of packet size together with flow rate control at the sender. The method decreases the packet drops by as much as 39.6 times at the cost of a 5.5% decrease in throughput. The decrease in throughput can be further reduced if the threshold level of the available bandwidth used in our formulae is reduced; however, this may increase the number of packets dropped. The tuning can be done through the use of historical data on cross-traffic.

We envision a system where all the intermediate nodes are ABCD compliant. In such a system, the number of dropped packets will be minimized.

The paper shows that ACC-ABCD is an effective method for congestion control. The method provides improved stability and reliability to TCP flows.

The disadvantage of active networks is the processing overhead due to the introduction of computation at intermediate nodes. This would require increased processing power at each of the intermediate nodes. NS-2 is not a suitable tool for measurement of this processing overhead. The active network research community also has to provide for the security of the system before active networks can be used on the Internet.
References:

1. Faber, T.; 'Experience with active congestion control', DARPA Active Networks Conference and Exposition 2002, pp. 132-142, 29-30 May 2002.
2. Jain, R.; Ramakrishnan, K.; Chiu, D.; 'Congestion avoidance in computer networks with a connectionless network layer', Proceedings of the Computer Networking Symposium, pp. 134-143, 11-13 April 1988.
3. Bhattacharjee, S.; Calvert, K.; Zegura, E.; 'On Active Networking and Congestion', College of Computing, Georgia Tech, Tech. Rep. GIT-CC-96-02, 1996.
4. Widmer, J.; Boutremans, C.; Boudec, J.; 'End-to-end Congestion Control for Flows with Variable Packet Size', Technical Report ID-IC/2002/82, University of Mannheim, Mannheim, Germany, 2002.
5. Jacobson, V.; 'Congestion Avoidance and Control', Computer Communication Review, vol. 18, no. 4, pp. 314-329, August 1988.
6. Lotfi, B.; Meerkov, S. M.; 'Feedback control of congestion in packet switching networks: the case of a single congested node', IEEE/ACM Transactions on Networking, vol. 1, no. 6, pp. 693-708, December 1993.
7. Brakmo, L.; Peterson, L.; 'TCP Vegas: End to End Congestion Avoidance on a Global Internet', IEEE Journal on Selected Areas in Communications, 13(8):1465-1480, October 1995.
8. Golestani, S. J.; Bhattacharyya, S.; 'A class of end-to-end congestion control algorithms for the Internet', Sixth International Conference on Network Protocols, pp. 137-150, 13-16 Oct. 1998.
9. Mo, J.; Walrand, J.; 'Fair End-to-End Window-based Congestion Control', IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 556-567, 2000.
10. Kunniyur, S.; Srikant, R.; 'End-to-end congestion control schemes: utility functions, random losses and ECN marks', IEEE/ACM Transactions on Networking, vol. 11, no. 5, pp. 689-702, Oct. 2003.
11. Li, J.; Kalyanaraman, S.; 'MCA: A Rate-based End-to-end Multicast Congestion Avoidance Scheme', IEEE International Conference on Communications (ICC 2002), vol. 4, pp. 2341-2347, 28 April-2 May 2002.
12. Floyd, S.; Jacobson, V.; 'Random Early Detection gateways for Congestion Avoidance', IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397-413, August 1993.
13. Janarthanan, Y.; Minden, G.; Evans, J.; 'Enhancement of Feedback Congestion Control Mechanisms by Deploying Active Congestion Control', Information and Telecommunication Technology Center, The University of Kansas, Technical Report ITTC-FY2003-TR-19740-10.
14. Psounis, K.; 'Active Networks: Applications, Security, Safety and Architectures', IEEE Communications Surveys, vol. 2, no. 1, First Quarter 1999.
15. Floyd, S.; Fall, K.; 'Promoting the use of end-to-end congestion control in the Internet', IEEE/ACM Transactions on Networking, vol. 7, no. 4, pp. 458-472, Aug. 1999.
16. Santos, J. R.; Turner, Y.; Janakiraman, G.; 'End-to-end congestion control for InfiniBand', Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2003), vol. 2, pp. 1123-1133, 30 March-3 April 2003.
17. Bhattacharjee, S.; Calvert, K.; Zegura, E. W.; 'Active Networking and the End-to-End Argument', IEEE International Conference on Network Protocols (ICNP '97), pp. 220-229, Atlanta, GA, October 28-31, 1997.
18. Faber, T.; 'ACC: using active networking to enhance feedback congestion control mechanisms', IEEE Network, vol. 12, no. 3, pp. 61-65, May-June 1998.
19. Lemar, E.; Sigurosson, S. B.; 'Congestion Control in Active Networks', Technical Report, University of Washington, 1999.
20. Wang, B.; Zhang, B.; Liu, Z.; Li, H.; 'Forward active networks congestion control algorithm and its performance analysis', The 2000 IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS 2000), pp. 313-318, 4-6 Dec. 2000.
21. Gyires, T.; 'Using active networking for congestion control in high-speed networks with self-similar traffic', IEEE International Conference on Systems, Man, and Cybernetics, vol. 1, pp. 405-410, 8-11 Oct. 2000.
22. Cheng, W.; Shi, P.; Lei, Z.; 'Network-assisted congestion control', International Conferences on Info-tech and Info-net (ICII 2001), Beijing, vol. 2, pp. 8-32, 29 Oct.-1 Nov. 2001.
23. Cheng, L.; Ito, M. R.; 'Layered multicast with TCP-friendly congestion control using active networks', 10th International Conference on Telecommunications (ICT 2003), vol. 1, pp. 806-811, 23 Feb.-1 March 2003.
24. Stoica, I.; Shenker, S.; Zhang, H.; 'Core-Stateless Fair Queuing: A Scalable Architecture to Approximate Fair Bandwidth Allocations in High Speed Networks', Proceedings of the SIGCOMM '98 Conference, Vancouver, Canada, Aug. 1998.