

                          CNT5106C Computer Networks, Summer 2010
                          Instructor: Prof. Ahmed Helmy
                                   Homework #1
 On the Internet Architecture, Elementary Queuing Theory and Application Layer

I. Internet and layered protocol architecture:

Q1. (5 points) In the layered protocol architecture the transport layer functionality
includes congestion control and error recovery (e.g., retransmission). Someone suggested that
this functionality should be done strictly at the end points (i.e., at the hosts) without aid
from the network. Do you agree? Why? Elaborate showing the design trade-offs.

Answer: (5 points)
        In general, error recovery (e.g., re-transmission) is specific to application needs.
Some applications require 100% packet recovery, even at the cost of delay and jitter (such
as TCP-based applications: http, ftp and telnet traffic). Other applications may be tolerant
of loss but less tolerant of delay and jitter, such as voice applications. Re-transmissions and
packet recovery add to the jitter and the delay and hence may not be desirable for real-
time or voice applications. Hence it is not a good idea, in general, to include error
recovery at the network layer (which is not aware of application needs) and it is better to
implement such functionality at the transport layer end-to-end.
        In cases of lossy channels in the network (such as X.25 in the early networking
days, or wireless links) it may be desirable to reduce the bit error rates on those links by
including error recovery at the end points of those links. [In general, most links nowadays
have very low BER, and for wireless links the MAC (such as IEEE 802.11) layer
provides Ack’ed delivery].
        For congestion control, a similar argument may be given. That is, congestion
reaction may be application specific and is better implemented end-to-end. Congestion
notification, on the other hand, may provide useful information to the end points to react
appropriately. Since losses in the network may be due to congestion or other factors, a
signal from the network to the end point may help distinguish congestion errors from
other errors. Only congestion errors should trigger ‘back off’ or rate cut at the end points.
So, network assistance in congestion notification may help in some scenarios. [extra: In
other scenarios network assistance may prevent synchronization effects of congestion
control, e.g., RED, or may prevent/isolate misbehavior, e.g., WFQ.].

Q2. (5 points) What advantage does a circuit-switched network have over a packet-
switched network? How does it achieve this advantage?

A circuit-switched network can guarantee a certain amount of end-to-end bandwidth for
the duration of a call. Most packet-switched networks today (including the Internet)
cannot make any end-to-end guarantees for bandwidth.
Circuit-switched networks use admission control, and reserve a circuit (in TDM it is done
in the form of an assigned time slot per source that no other source can use). The
allocated resources are never exceeded.

Q3. (10 points) What are the advantages and disadvantages of having a layered protocol
architecture for the Internet? (mention at least 3 advantages and 2 disadvantages)

Is it true that the change in any of the layers does not affect the other layers? (support
your answer/arguments with examples) <4 points>
Advantages:
Allows an explicit structure to identify relationships between the various pieces of the
complex Internet architecture, by providing a reference model for discussion.

Provides a modular design that facilitates maintenance, updating/upgrading of protocols
and implementations (by various vendors) at the various layers of the stack.

Supports a flexible framework for future advances and inventions (such as mobile or
sensor networks).

Disadvantages: overhead of headers, redundancy of functions (sometimes not needed)
[such as reliability at both the transport layer and the link layer, or routing at the network
layer and in some link layer protocols (such as ATM)]

It is true in many cases that a change in one layer does not affect the other
layers, but not always.
Example of a change that did not affect the other layers: the change from FDDI, to token
ring, to Ethernet at the MAC layer.
Examples of changes that affected other layers: wireless vs. wired links (the performance
of TCP and routing degraded drastically). The introduction of 802.11 for wireless and ad
hoc networks (a change in the physical and MAC layers) does affect routing at the
network layer and the transport layer in a major way. In that case, many of the protocols
needed to be redesigned.

Q4. (10 total points) Design parameters: In order to be able to analyze performance of the
Internet protocols a researcher needs to model some parameters, such as number of nodes
in the network, in addition to many other parameters.
a. Discuss 4 different main parameters one would need to model in order to evaluate the
    performance of Internet protocols. [Elaborate on the definition of these parameters
    and their dynamics]
b. Discuss 2 more parameters for mobile wireless networks [these two parameters are
    not needed for the wired Internet]

a. Traffic model, temporal and spatial (packet arrival processes, session/flow arrival
   processes, spatial distribution of traffic (src-dst pair distribution across the topology));
   topology/connectivity model; node failure model; membership dynamics (for
   multicast) and their spatio-temporal models. [Any reasonable 4 parameters are ok,
   with 1.5 points per parameter]
b. For mobile wireless networks there is a need to model ‘mobility’ (spatio-temporal)
   and wireless channel dynamics/loss/bandwidth, since these change with time much
   more drastically than in the wired Internet (in which the max bandwidth of a
   channel/link is virtually static). [Any 2 reasonable parameters are ok, with 2 points
   per parameter]
II. Statistical multiplexing and queuing theory

Note: You may want to make use of the following equations:
-   M/D/1: queuing delay Tq = Ts(2 − ρ) / (2(1 − ρ)); Ts is service time & ρ is link utilization
-   M/D/1: average queue length or buffer occupancy q = λ·Tq = ρ + ρ²/(2(1 − ρ))
-   M/M/1: queuing delay Tq = Ts/(1 − ρ), buffer occupancy: q = ρ/(1 − ρ)

Q5. (8 points) Consider two queuing systems, serving packets with lengths that have
exponential distribution, and the packet arrival process is Poisson. The first queuing
system (system I) has a single queue and a single server, and hence the packet arrival rate
is X, and the server speed is Y. The second queuing system (system II) has two queues
and two servers, and hence the packet arrival rate is X/2, and the server speed is Y/2.
Derive a relation between the delays in each of these systems. What conclusion can you
draw?

Answer: (8 points)
We use the M/M/1 queue (because the question states Poisson arrivals and exponentially
distributed service times).
For the first system (I): Tq = Ts/(1 − ρ) = (1/Y)/(1 − X/Y) = 1/(Y − X).
For the second system (II): Tq = (2/Y)/(1 − X/Y) = 2/(Y − X) = 2·Tq (of system I).
That is, using one queuing system performs better than using two queues, each with half
of the arrival rate and half of the output link capacity.
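This result can be checked numerically with the handout's M/M/1 formula; the values X = 50 and Y = 100 packets/s below are illustrative choices of mine:

```python
# Numeric check of the Q5 conclusion (illustrative values, my own choice).

def mm1_Tq(arrival_rate, service_rate):
    Ts = 1.0 / service_rate            # mean service time
    rho = arrival_rate / service_rate  # link utilization
    return Ts / (1 - rho)              # handout's M/M/1 Tq

X, Y = 50.0, 100.0                     # packets/s
Tq_I = mm1_Tq(X, Y)                    # system I: one queue, rate X, speed Y
Tq_II = mm1_Tq(X / 2, Y / 2)           # system II: each queue, rate X/2, speed Y/2

print(Tq_I, Tq_II)                     # 0.02 0.04 -- system II doubles the delay
```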

Q6. (5 points) In an Internet experiment it was noted that the queuing performance of the
switches/routers was worse than expected. One designer suggested increasing the buffer
size in the routers drastically to withstand any possible burst of data. Argue for or against
this suggestion, and justify your position.

A6. Increasing the buffer size allows switches to store more packets (which may reduce
loss). However, it does not alleviate the congestion. If this was the only cure proposed,
then we expect the queues to build up, increasing the buffer occupancy, and increasing
the delays. If the build-up persists (due to lack of congestion control, for example) the
queues shall incur losses and extended delays. Delays may cause re-transmission timers
to expire (for reliable protocols, such as TCP), leading to re-transmissions. Also, the TTL
value in the header of each packet is reduced based on time (and hop count). So, many of
the TTLs may expire leading to the discard of packets. So, in general, only increasing the
buffer sizes does not help improve the queuing performance.

Q7. (7 points) Describe the network design trade-off introduced by using statistical
multiplexing and define and describe a metric that captures this trade-off.

A7. (7 points: 3.5 for the link between stat muxing and congestion and 3.5 for the trade
off metric (network power) and its description).

Statistical multiplexing allows the network to admit flows with aggregate capacity
exceeding the network capacity (even if momentarily). This leads to the need for
buffering and the ‘store and forward’ model. Subsequently, queuing delays and build up
may be experienced as the load on the network is increased.
Two major design goals of the network are to provide maximum throughput (or goodput)
with least (min) delay.
However, these two goals conflict. As the throughput increases, the congestion increases
and so does the delay. In order to reduce the queuing delays we need to reduce the load
on the network, and hence the goodput of the flows would decrease. This is the
throughput-delay trade-off in network design.
One metric that captures both measures is the network power=Tput/Delay,
as the Tput increases, so does the network power, and when the delay decreases the
network power increases.
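The trade-off behind this metric can be shown with a short numeric sketch (an illustrative sketch with Ts = 1 and sampled loads of my own choosing): using the handout's M/M/1 delay, power = throughput/delay = ρ(1 − ρ), which peaks at ρ = 0.5 and collapses as congestion grows.

```python
# Network power = throughput / delay for an M/M/1 queue
# (Ts = 1 and the sampled loads are illustrative choices of mine).

def network_power(rho, Ts=1.0):
    throughput = rho / Ts          # carried load (packets per unit time)
    delay = Ts / (1 - rho)         # handout's M/M/1 Tq
    return throughput / delay      # = rho * (1 - rho) / Ts**2

for rho in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"rho={rho:.1f}  power={network_power(rho):.2f}")
# power rises with load, peaks at rho = 0.5, then falls as queuing
# delay dominates -- the throughput-delay trade-off in one number.
```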

Q8. (8 points) Flows in the Internet vary widely in their characteristics. Someone
suggested that in order to be fair to the various heterogeneous flows then we need the
different flows to experience the same delay at the different queues. Argue for or against
this suggestion.

A8. (8 points: 4 points for the constant ratio and the link to the fluid flow model, 4 points
for the unfairness/greed description)
In order to provide the same delay for the various flows we need to maintain the
rate/capacity ratio constant (this is based on the fluid flow model we introduced in class).
Hence, if the different flows arrive at various rates, then the capacity allocation should
reflect such variation. The allocation leading to same delays would favor (i.e., allocate
more capacity to) flows with higher rates at the expense of flows with low rates. This
strategy encourages greed in the network and cannot achieve fairness: the existence of
high-rate (large) flows in the network would adversely affect low-rate (small) flows by
increasing the overall delay experienced by all the flows.
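A small numeric illustration of this argument (the flow rates and the total capacity below are made-up numbers of mine): forcing the same normalized delay means the same utilization ρ = rate/capacity for every flow, so capacity is handed out in proportion to rate and the large flow is rewarded.

```python
# Equal-delay allocation under the fluid-flow view: equal occupancy
# q = rho/(1 - rho) requires equal rho = rate/capacity per flow,
# i.e., capacity proportional to rate. All numbers are illustrative.

rates = {"small flow": 1.0, "medium flow": 5.0, "large flow": 20.0}
total_capacity = 39.0                     # any total above the sum of rates
total_rate = sum(rates.values())          # 26.0

alloc = {f: total_capacity * r / total_rate for f, r in rates.items()}
for f, c in alloc.items():
    print(f"{f}: capacity={c:.2f}, rho={rates[f] / c:.3f}")
# every flow sees the same rho = 26/39 = 0.667, but the large flow is
# granted 20x the capacity of the small one -- greed is rewarded.
```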

Q9. (12 total points) Consider a network that uses statistical multiplexing. The network
has ‘N’ number of ON/OFF sources, each sending at a rate of R packets per second when
ON. All the sources are multiplexed through a single output link. The capacity of the
output link is ‘M’.
        - A. (3 points) What is the condition on N, R and M in order to stabilize this
            network?
        - When the number of sources to be supported is increased from N to 10N, there
            were two suggestions to modify the network:
            -   Suggestion I is to replicate the above system 10 times. That is, create 10
                links, each with capacity of ‘M’ handling N sources.
            -   Suggestion II is to replace the link with another link of capacity ‘10M’.
    B. (9 points) Which suggestion do you support and why? [Argue giving expressions
    for the delay/buffer performance of each system. Give both the advantages and
    disadvantages of each case]

Answer: =
A. (3 points) The conditions for a stable network are
       N·R·α < M,
       N·R > M,

where α is the fraction of the time the sources are ON (on average).

If N·R·α > M, then this leads to constant build-up of the queue with no chance of
recovering from congestion (and draining the queue), which would lead to an unstable
network.

B. (9 points)
Write down the equations,

-   M/D/1: queuing delay Tq = Ts(2 − ρ) / (2(1 − ρ)); Ts is service time & ρ is link utilization
-   M/D/1: average queue length or buffer occupancy q = λ·Tq = ρ + ρ²/(2(1 − ρ))
-   M/M/1: queuing delay Tq = Ts/(1 − ρ), buffer occupancy: q = ρ/(1 − ρ)

The buffer occupancy depends on ρ only. If ρ is the same (i.e., the load on the queue
server is the same) then the buffer occupancy is the same,
         ρ = λ·Ts = α·N·R/M

Increasing the bandwidth of the link to 10M means that we get the same average
per-queue buffer occupancy in the two systems. In system I we would need 10 times the
total buffer size of system II, so system II is advantageous in that sense (more sharing and
statistical multiplexing gain).
In addition, the queuing delay will be decreased drastically (by a factor of 10), since
Tq = Ts·f(ρ) and Ts for the 10M link is one tenth the Ts of an M link.
(6 points for the above argument)

(3 points) On the other hand, the std deviation/fluctuation around the average queue size
will be higher since the queue is shared by a larger number of flows, and hence the jitter
will be relatively higher.
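The comparison above can be sketched numerically with the handout's M/M/1 formulas (N, R, α and M below are illustrative values of my own choosing):

```python
# Comparing Q9's two suggestions with the M/M/1 formulas.
# N, R, alpha, M are illustrative values of my own choosing.

N, R, alpha, M = 100, 1.0, 0.5, 80.0    # N*R*alpha = 50 < M: stable

lam = alpha * N * R                      # average aggregate arrival rate

def mm1(arrival_rate, capacity):
    rho = arrival_rate / capacity
    Ts = 1.0 / capacity
    return rho / (1 - rho), Ts / (1 - rho)   # (occupancy q, delay Tq)

# Suggestion I: 10 separate links of capacity M, each carrying lam
q_I, Tq_I = mm1(lam, M)
# Suggestion II: one shared link of capacity 10*M carrying 10*lam
q_II, Tq_II = mm1(10 * lam, 10 * M)

print(q_I, q_II)       # same per-queue occupancy (same rho)...
print(Tq_I / Tq_II)    # ...but suggestion II cuts the delay 10-fold
# (suggestion I also needs 10 separate buffers, i.e., 10x the total
# memory, for the same average occupancy per queue)
```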
III. Application layer and related issues

Q10. (5 points) (Stateful vs. Stateless) Discuss one advantage and one disadvantage of
having a ‘stateful’ protocol for applications.

Advantage: The protocol can maintain state about (i.e., remember) users’
preferences (e.g., shopping preferences, as in browser cookies).
Disadvantage: when failure occurs the state needs to be reconciled (more complexity and
overhead than stateless)
[other correct and reasonable answers are accepted]

Q11. (5 points) (Web Caching) Describe how Web caching can reduce the delay in
receiving a requested object. Will Web caching reduce the delay for all objects requested
by a user or for only some of the objects? Why?

Ans. Web caching can bring the desired content “closer” to the user, perhaps to the same
LAN to which the user’s host is connected. Web caching can reduce the delay for all
objects, even objects that are not cached, since caching reduces the traffic on links.

Q12. (10 points) Discuss three different architectures of the peer-to-peer applications.
Give examples of real applications for each architecture and discuss the advantages and
disadvantages of each architecture.

   1. Centralized directory of resources/files, as in Napster. The advantage is that search
      for resources is simple with min overhead (just ask the centralized server). The
      disadvantages are: single point of failure, performance bottleneck, and target of
      lawsuits.
   2. Fully distributed, non-centralized architecture, as in Gnutella, where all peers and
      edges form a ‘flat’ overlay (without hierarchy). Advantages: robustness to failure,
      no performance bottleneck and no target for lawsuits. The disadvantage is that
      search is more involved and incurs high overhead with query flooding.
   3. Hierarchical overlay, with some nodes acting as super nodes (or cluster heads), or
      nodes forming loose neighborhoods (sometimes referred to as a loose hierarchy, as
      in BitTorrent). Advantages: robust (no single point of failure), avoids flooding to
      search for resources during queries. Disadvantage: needs to keep track of at least
      some nodes using the ‘Tracker’ server. In general, this architecture attempts to
      combine the best of the other 2 architectures.

Q13. (7.5 points) Push vs. Pull:
   A. Give examples of a push protocol and a pull protocol
   B. Mention three factors one should consider when designing pull/push protocols,
   discuss how these factors would affect your decision as a protocol designer (give
   example scenarios to illustrate).

A. An example of a push protocol is SMTP. An example of a pull protocol is http.

B. The factors affecting the performance of a pull/push protocol include (but are not
limited to): 1. access pattern: how often is this object cached and how often is it accessed
(example: a push mechanism for a very popular video that is pushed closer to a large
population that is going to frequently watch it, would be better than a pull mechanism), 2.
delay: what is the delay to obtain the object, and 3. object dynamics: how often/soon
the information in the object expires (example: in a sensor network where the
information sensed is constantly changing but is queried only once in a while, it would
be better ‘not’ to push it, but to pull it only when needed).

Q14. (7.5 points) We refer to the problem of getting users to know about each other,
whether it is peers in a p2p network or senders and receivers in a multicast group, as the
“rendezvous problem”.
    What are possible solutions to the rendezvous problem in p2p networks? (Discuss
three different alternatives and compare/contrast them.)


   The possible solutions for the rendezvous problem include:
   1.    Using a centralized server. Advantages: simple to search, little communication
         overhead. Disadvantages: single-point-of-failure (not robust), bottleneck,
         doesn’t scale well.
   2.     Using a search technique for discovery, perhaps using a variant of a flood (or
         scoped-flood) or expanding-ring search mechanism. Advantages: avoids
         single-point-of-failure and bottlenecks. Disadvantages: may be complex,
         incurs high communication overhead and may incur delays during the search.
   3.    Hybrid (or hierarchy): where some information (e.g., pointers to potential
         bootstrap neighbors, or pointers to some resources) is kept at a centralized (or
         replicated) server or at super-nodes, and then the actual communication is
         peer-to-peer. Advantage: if designed carefully, it can avoid single-point-of-failure,
         bottlenecks, and achieve reasonable overhead and delay. Disadvantage: need to
         build and maintain the hierarchy (can trigger costly re-configuration control
         overhead in case of highly dynamic networks and unstable super-nodes).
