
EFRRA: An Efficient Fault-Resilient-Replica-Algorithm for Content Distribution Networks

Amutharaj. J
Assistant Professor, CSE
Arulmigu Kalasalingam College of Engineering
Krishnankoil, Virudhunagar, INDIA
amutharajj@yahoo.com

Radhakrishnan. S
Senior Professor, CSE
Arulmigu Kalasalingam College of Engineering
Krishnankoil, Virudhunagar, INDIA
srk@akce.ac.in


Abstract— Nowadays, content distribution is an important peer-to-peer application on the Internet that has received considerable research attention. Content distribution applications typically allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content. The primary task in a CDN is to replicate the contents over several mirrored web servers (i.e., surrogate servers) strategically placed at various locations in order to deal with flash crowds. Geographically distributing the web servers' facilities is a method commonly used by service providers to improve performance and scalability. Hence, contents in a CDN are replicated on many surrogate servers according to content distribution strategies dictated by the application environment. Devising an efficient and resilient content replication policy is crucial, since content distribution can be limited by several factors in the network.

Hence, we propose a novel Efficient Fault Resilient Replica Algorithm (EFRRA) to replicate content from the origin server to a set of surrogate servers in an efficient and reliable manner. The contributions of this paper are twofold. First, we introduce the EFRRA distribution policy and theoretically analyze its performance against traditional content replication algorithms. Then, by means of a simulation-based performance evaluation, we assess the efficiency and resiliency of the proposed EFRRA algorithm and compare its performance with traditional content replication algorithms stated in the literature. We demonstrate experimentally that EFRRA significantly reduces the file replication time while maintaining the delivery ratio, as compared with traditional strategies such as sequential unicast, multiple unicast, Fast Replica (FR), Resilient Fast Replica (R-FR), and Tornado codes (TC). This paper also analyzes the performance of sequential unicast, multiple unicast, Fast Replica (FR), Resilient Fast Replica (R-FR), Tornado codes, and EFRRA in terms of average replication time and maximum replication time.

   Keywords- CDN, Fast Replica, Resilient Fast Replica, Efficient Fault Resilient Replica Algorithm, Tornado Codes.

                        I.    INTRODUCTION
    Content Delivery Networks (CDNs) [1][2][3] provide services that improve network performance by maximizing bandwidth and accessibility and by maintaining correctness through content replication. They offer fast and reliable applications and services by distributing content to cache or edge servers located close to users [1]. A CDN has some combination of content-delivery, request-routing, distribution and accounting infrastructure. The content-delivery infrastructure consists of a set of edge servers (also called surrogates) that deliver copies of content to end-users. The request-routing infrastructure is responsible for directing client requests to appropriate edge servers. It also interacts with the distribution infrastructure to keep an up-to-date view of the content stored in the CDN caches.

    In particular, CDNs optimize content delivery by putting the content closer to the consumer and shortening the delivery path via global networks of strategically placed servers [4]. The CDN's edge servers are caching servers, and if the requested content is not yet in the cache, the document is pulled from the origin server. For large documents, software packages, and media files, the push operational mode is preferred; it is desirable to replicate these files at edge servers in advance [6].

    While transferring a large file with individual point-to-point connections from an origin server can be a viable solution in the case of a limited number of mirror servers, this method does not scale when the content needs to be replicated across a CDN with thousands of geographically distributed edge replica nodes [5].

    This paper is organized as follows. Section II describes the related work. Next, we present the working mechanisms of different content distribution algorithms and our proposed EFRRA content distribution algorithm. Section IV presents the analytical study and experimental results and analyzes the performance of the different content distribution algorithms. Finally, the conclusion and future work are summarized.

                        II.   RELATED WORKS
    Ludmila et al. [5] proposed a novel algorithm, called Fast Replica, for efficient and reliable replication of large files in the Internet environment. Instead of downloading the entire file from one server, a user downloads different parts of the same file from different servers in parallel. Once all the parts of the file are received, the user reconstructs the original file by reassembling the different parts.
There are several advantages to using a dynamic parallel access. First, as the block size is small, a dynamic parallel access can easily adapt to changing network/server conditions. Second, as the client is using several connections to different servers, a parallel access is more resilient to congestion and failure in the network/server. Third, the server selection process is eliminated, since clients connect to all available servers to retrieve the parts of the file. Fourth, the throughput seen by the client will increase. There is an overhead incurred when opening multiple connections, and extra traffic is generated to perform block requests.

    ZhiHui Lu et al. [6] proposed Tree Round Robin Replica (TRRR) to improve on the work of Fast Replica [5]. They proposed an efficient and reliable replication algorithm for delivering large files in the content delivery network environment. As part of this study, they carried out some experiments to verify the TRRR algorithm at small scale. They demonstrated experimentally that TRRR significantly reduces the file distribution/replication time as compared with traditional policies such as multiple unicast in content delivery networks.

    Several content networks attempt to address the performance problem using different mechanisms to improve the Quality of Service (QoS). One approach is to modify the traditional web architecture by improving the web server hardware, adding a high-speed processor, more memory, and disk space, or perhaps a multi-processor system. This approach is not flexible [7].

    In order to offload popular servers and improve end-user experience, copies of popular content are often stored in different locations. With mirror site replication, files from the origin server are proactively replicated at surrogate servers with the objective of improving the user-perceived Quality of Service (QoS). When a copy of the same file is replicated at multiple surrogate servers, choosing the server that provides the best response time is not trivial, and the resulting performance can vary dramatically depending on the server selected [8, 9].

    Laurent Massoulie [10] proposed an algorithm called the localizer, which reduces the network load, helping to evenly balance the number of neighbors of each node in the overlay, sharing the load and improving the resilience to random node failures or disconnections. The localizer refines the overlay in a way that reflects geographic locality so as to reduce network load.

    In [11], Rodriguez and Biersack studied a dynamic parallel-access scheme to access multiple mirror servers. In their study, a client downloads files from mirror servers residing in a wide area network. They showed that their dynamic parallel downloading scheme achieves significant downloading speedup with respect to a single-server scheme. However, they studied only the scenario where one client uses parallel downloading, and did not address the effects and consequences when all clients choose to adopt the same scheme.

    J. Kanagharju, J. Roberts, and K. W. Ross [12] studied the problem of optimally replicating objects in CDN servers. In their model, each Internet Autonomous System (AS) is a node with finite storage capacity for replicating objects. The optimization problem is to replicate objects so that when clients fetch objects from the nearest CDN server with the requested object, the average number of ASs traversed is minimized. They showed that this optimization problem is NP-complete. They developed four natural heuristics and compared them numerically using real Internet topology data.

    Al-Mukaddim Khan Pathan and Rajkumar Buyya [13] presented a comprehensive taxonomy with a broad coverage of CDNs in terms of organizational structure, content distribution mechanisms, request redirection techniques, and performance measurement methodologies. They studied the existing CDNs in terms of their infrastructure, request-routing mechanisms, content replication techniques, load balancing, and cache management. They provided an in-depth analysis and state-of-the-art survey of CDNs. Finally, they applied the taxonomy to map various CDNs. The mapping of the taxonomy to the CDNs helps in "gap" analysis in the content networking domain.

    James Broberg and Rajkumar Buyya [14] proposed MetaCDN, a system that exploits 'Storage Cloud' resources, creating an integrated overlay network that provides a low-cost, high-performance CDN for content creators. MetaCDN removes the complexity of dealing with multiple storage providers by intelligently matching and placing users' content onto one or many storage providers based on their quality of service, coverage and budget preferences. MetaCDN makes it trivial for content creators and consumers to harness the performance and coverage of numerous 'Storage Clouds' by providing a single unified namespace that makes it easy to integrate into origin websites, and is transparent for end-users.

    M. O. Rabin [15] and Byers et al. [16] proposed efficient dispersal of information for secure and fault-tolerant data dissemination based on the digital fountain approach. The main idea underlying their technique is to take an initial file consisting of k packets and generate an n-packet encoding of the file with the property that the initial file can be restituted from any k-packet subset of the encoding. For the application of reliable multicast, the source transmits packets from this encoding, and the encoding property ensures that different receivers can recover from different sets of lost packets, provided they receive a sufficiently large subset of the transmission. To enable parallel access to multiple mirror sites, the sources each transmit packets from the same encoding, and the encoding property ensures that a receiver can recover the data once it receives a sufficiently large subset of the transmitted packets, regardless of which server the packets came from. In fact, the benefits and costs of using erasure codes for parallel access to multiple mirror sites are analogous to the benefits and costs of using erasure codes for reliable multicast. For both applications, simple schemes which do not use encoding have substantial drawbacks in terms of complexity, scalability, and their ability to handle heterogeneity among both senders and receivers.

    Ideally, using erasure codes, a receiver could gather an encoded file in parallel from multiple sources, and as soon as any k packets arrive from any combination of the sources, the original file could be restituted efficiently.
In practice, however, designing a system with this ideal property and with very fast encoding and decoding times appears difficult. Hence, although other erasure codes could be used in this setting, we suggest that a newly developed class of erasure codes called Tornado codes is best suited to this application, as they have extremely fast encoding and decoding algorithms [12]. Indeed, these codes have previously been shown to be more effective than standard erasure codes in the setting of reliable multicast transmission of large files [16].

    Byers et al. [17] proposed a parallel accessing scheme based on Tornado codes in which a client is allowed to access a file from multiple mirror sites in parallel to speed up the download. They eliminated complex client-server negotiations and implemented a straightforward approach for developing a feedback-free protocol based on erasure codes. They demonstrated that a protocol using fast Tornado codes can deliver dramatic speedups at the expense of transmitting a moderate number of additional packets into the network. Their scalable solution can be extended to allow multiple clients to access data from multiple mirror sites simultaneously.

    Danny Bickson and Dahlia Malkhi [18] proposed a new content distribution network named "Julia" which reduces the overall communication cost, which in turn improves network load balance and reduces the usage of long-haul links. Compared with the state-of-the-art BitTorrent content distribution network, the authors found that while Julia achieves slightly slower average finishing times relative to BitTorrent, Julia nevertheless reduces the total communication cost in the network by approximately 33%. Furthermore, the Julia protocol achieves a better load balancing of the network resources, especially over trans-Atlantic links. They evaluated the Julia protocol using a real WAN deployment and by extensive simulation. The WAN experimentation was carried out over the PlanetLab wide-area testbed using over 250 machines. Simulations were performed using the GT-ITM topology generator with 1200 nodes.

    Amutharaj and Radhakrishnan [19, 20] constructed an overlay network based on Dominating Set theory to optimize the number of nodes for large data transfer. They investigated the use of the Fast Replica algorithm to reduce the content transfer time for replicating content within the semantic network. A dynamic parallel access scheme is introduced to download a file from different peers in parallel from the Semantic Overlay Network (SON), where the end users can access the members of the SON at the same time, fetching different portions of that file from different peers and reassembling them locally. That is, the load is dynamically shared among all the peers. To eliminate the need for retransmission requests from the end users, an enhanced digital fountain with Tornado codes is applied; a decoding algorithm at the receiver reconstructs the original content. The authors also found that no feedback mechanisms are needed to ensure reliable delivery. They investigated the performance of sequential unicast, multiple unicast and Fast Replica with Tornado content distribution strategies in terms of content replication time and delivery ratio. They also analyzed the impact of dominating set theory on the construction of semantic overlay networks.

    Srinivas Shakkottai and Ramesh Johari [22] evaluated the benefits of a hybrid system that combines peer-to-peer and a centralized client-server approach against each method acting alone. The key element of their approach is to explicitly model the temporal evolution of demand. They also investigated the relative performance of peer-to-peer and centralized client-server schemes, as well as a hybrid of the two, both from the point of view of consumers and of the content distributor. They showed how awareness of demand could be used to attain a given average delay target with the lowest possible utilization of the central server by using the hybrid scheme.

    Zhijia Chen et al. [23] addressed the issues related to distributing media content such as audio, video, and software packages to an increasing number of end consumers at higher speed. They integrated the Peer-to-Peer (P2P) paradigm into the Internet content distribution infrastructure, which provides a disruptive market opportunity to scale the Internet for high-quality data delivery. They performed an experimental and analytical performance study over BitTorrent-like P2P networks for accelerating large-scale content distribution over the booming Internet. They explored the unique strength of P2P in high-speed networks, identified the performance bottlenecks, and investigated and quantified the special requirements in the new scenario, i.e., file piece length and seed capacity. They proposed a Piece-On-Demand (POD) scheme to modify BitTorrent in integration with File system in Userspace (FUSE), with the objective of decreasing file distribution time and increasing service availability.

    Ye Xia et al. [24] considered a two-tier content distribution system for distributing massive content and proposed popularity-based file replication techniques within the CDN using multiple hash functions. Their strategy is to set aside a large number of hash functions. When the demand for a file exceeds the overall capacity of the current servers, a previously unused hash function is used to obtain a new node ID where the file is replicated. They developed a set of distributed, robust algorithms to implement the above solutions and evaluated the performance of the proposed algorithms.

    Yaozhou Ma and Abbas Jamalipour [25] presented a cooperative cache-based content dissemination framework (CCCDF) to carry out cooperative soliciting. They investigated two cooperative strategies: CCCDF (Optimal), with the objective of maximizing the overall content delivery performance, and CCCDF (Max-Min), with the aim of sharing the limited network resources among the contents in a Max-Min fairness manner. They demonstrated through simulation results that the proposed CCCDF offers enhanced delivery performance over existing content dissemination schemes.

    Oznur Ozkasap, Mine Caglar and Ali Alagoz [26] proposed and designed a peer-to-peer system, SeCond, addressing the distribution of large-sized content to a large number of end systems in an efficient manner. It employs a self-organizing epidemic dissemination scheme for state propagation of available blocks and initiation of block transmissions. Their performance study included scalability analysis for different arrival/departure patterns, a flash-crowd scenario, overhead analysis, and fairness ratio.
The authors studied various performance metrics such as the average file download time, load on the primary seed, uplink/downlink utilization, and communication overhead. They showed that SeCond is a scalable and adaptive protocol which takes the heterogeneity of the peers into account.

    Jun Wu and Kaliappa Ravindran [27] designed a CDN and studied the issues related to proxy server placement, with the objective of providing clients with the best available performance while consuming as little resource as possible. They solved the surrogate server placement problem as an optimization problem. Among the solutions, the greedy algorithm yielded better results with low computational cost; its drawback was that it easily became trapped in a local optimum. They also proposed a genetic algorithm to solve this server placement problem: they mathematically modeled the optimization problem, applied the genetic algorithm to it, conducted simulation experiments, and demonstrated the results with proper justification on a simple topology for both the greedy algorithm and the genetic algorithm.

                III.   CONTENT DISTRIBUTION ALGORITHMS
    The CDN consists of many surrogate servers located at different locations which can be clustered or grouped together to form a surrogate server site, so that a client has good connectivity to at least one of the surrogate servers. A surrogate server site may consist of an array of surrogate servers that cooperate with each other to further enhance the performance of the content delivery network and to avoid any congested paths in the network.

    Content from the origin server is proactively replicated at surrogate servers according to content distribution policies dictated by the application environment. Whole-file replication is simple to implement and has a low state cost: it must only maintain state proportional to the number of replicas. However, the cost of replicating the entire file in one operation can be cumbersome in both space and time, particularly for systems that support applications with large objects such as audio, video, and software packages. Several content distribution strategies such as Sequential Unicast [5, 19, 20], Multiple Unicast [5, 19, 20], and Fast Replica [5, 6, 19, 20] are already described in the literature.

    In this paper, we describe the working mechanisms of the Optimal Fast Replica (O-FR) and Tornado codes content replication algorithms. Next, we propose an Efficient Fault Resilient Replica Algorithm (EFRRA) to replicate content from the origin server to the surrogate servers.

A. Working Mechanism of Optimal Fast Replica (O-FR)
    The objective of Optimal Fast Replica (O-FR) is to minimize the maximum replication time.

    Step 1: Partition the original file F into 'm' sub-files of equal size:
          Size(Fi) = Size(F) / m bytes, where 1 <= i <= m.

    Step 2: Surrogate server N0 opens 'm' concurrent connections to surrogate servers N1, N2, ..., Nm. N0 sends each node Ni the following file and information:
    •    Surrogate server list: R = {N1, N2, ..., Nm} (in the next step, sub-file Fi will be forwarded to this server list).
    •    Sub-file Fi.
    •    Replica amount: k (1 <= k <= m).

    Step 3: Every surrogate server Ni in {N1, N2, ..., Nm} opens k-1 concurrent connections and replicates its sub-file Fi to the group of k-1 surrogate servers defined by the set {Nj : i < j < i+k; if j > m, then j = ((j-1) mod m) + 1}.

    In this step, every server Ni in {N1, N2, ..., Nm} has the following output and input links:
    •    k-1 output links: forwarding sub-file Fi to the node list {Nj : i < j < i+k; if j > m, then j = ((j-1) mod m) + 1}.
    •    k-1 input links: receiving sub-file Fj from the server list {Nj : i-k < j < i; if j < 1, then j = j + m}.

    Step 4: At the end, every node Ni holds k sub-files, {Fj : i-k < j <= i; if j < 1, then j = j + m}. In the general case, the node list {N1, N2, ..., Nm} acts as cache servers and supports concurrent download.

    Client Content Request Processing in O-FR: When a user client requests file F from the origin server, the request is redirected to the surrogate server list {N1, N2, ..., Nm}, and the client concurrently downloads every sub-file Fi. The sub-files are then reassembled into the original file F on the client machine.

    In the ideal case, when k = m, every surrogate server Ni holds all m sub-files of the original file F and reorganizes them to form the original file F at the local node. When the user requests file F from the origin server, the request is redirected to one surrogate server in the list {N1, N2, ..., Nm}, and the whole file F is downloaded from it.
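    To make Step 3 concrete, the short Python sketch below (our own illustration, not code from the paper; the function names and the m = 8, k = 3 example are assumptions) computes the round-robin replication sets with the wrap-around rule above and checks that every node ends up holding k sub-files.

def ofr_replication_sets(m: int, k: int):
    """For each node index i (1..m), return the nodes that receive
    sub-file Fi in Step 3 of O-FR (the k-1 round-robin targets).
    Assumes 1 <= k <= m; indices wrap around modulo m."""
    targets = {}
    for i in range(1, m + 1):
        out = []
        for j in range(i + 1, i + k):          # i < j < i+k
            out.append(((j - 1) % m) + 1)      # wrap when j > m
        targets[i] = out
    return targets

def subfiles_held(m: int, k: int):
    """Sub-files {Fj} held by each node Ni after Step 4."""
    held = {i: {i} for i in range(1, m + 1)}   # each node keeps its own Fi
    for i, receivers in ofr_replication_sets(m, k).items():
        for r in receivers:
            held[r].add(i)
    return held

if __name__ == "__main__":
    m, k = 8, 3
    print(ofr_replication_sets(m, k))          # e.g. node 8 forwards F8 to nodes 1 and 2
    print({n: sorted(fs) for n, fs in subfiles_held(m, k).items()})

    With k = m every node holds all m sub-files, which matches the ideal case described above.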
B. Working Mechanism of Tornado Codes:
    A Tornado code [17] is a content distribution strategy based on the digital fountain approach [17] which follows block-level replication. Block-level replication divides each file object into an ordered sequence of a finite number of fixed-size blocks. This has the benefit of naming the individual parts of an object independently.

1. Tornado encoding: The entire file is fragmented into 'k' packets or blocks of equal size and encoded into 'n' encoded packets, where n = 2^A - 1 and A is the length of the symbol. A random set of blocks of a file is replicated on multiple surrogate servers. A block-level system may download different parts of an object simultaneously from different nodes, which reduces the overall download time. Since the unit of replication is small and fixed, the cost of replicating an individual block is small.

2. Collection step in Tornado: The receiver can run the Tornado decoding algorithm in real time as the encoded packets arrive, and reconstruct the original file as soon as it determines that sufficiently many packets have arrived.
If any set of n*k encoded packets is received by the receiver, where 'n' is a small number (i.e., 1 < n < 2), the original file can be reconstructed. The basic principle behind the use of erasure codes is that the original source data, in the form of a sequence of 'k' packets, along with additional redundant packets, are transmitted by the sender, and the redundant data can be used to recover lost source data at the receivers. The main benefit of this approach is that different receivers can recover from different lost packets using the same redundant data.

    Advantages of Tornado codes: There is no need for retransmission if some encoded packets are lost due to network traffic, physical link outage, or failure of network components.

    Limitations: Some duplicate packets are generated and transmitted through the network under packet loss and parallel download.

    The following points need to be considered when determining the size of the blocks requested.
    •    The number of blocks chosen should be much larger than the number of mirror servers that are accessed in parallel.
    •    Each block should be small enough to provide a fine granularity of striping and ensure that the transfer of the last block requested from each server terminates at about the same time.
    •    Each block should be sufficiently large to keep the inter-block idle time small compared to the download time of a block.
    •    The document requested via parallel access must be sufficiently large.
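    As a rough illustration of the collection principle described above (this is not the actual Tornado encoder or decoder, whose graph-based construction is more involved; the mirror count, loss rate and overhead factor are assumed values), the following Python sketch simulates pulling encoded packets from several mirrors in parallel and stopping once roughly n*k distinct packets have arrived, with no retransmissions.

import random

def collect_encoded_blocks(k: int, overhead: float, mirrors: int, loss: float = 0.1):
    """Toy simulation of the fountain-style collection step: keep pulling
    encoded packets round-robin from `mirrors` servers until roughly
    overhead * k distinct packets have arrived; lost packets are simply
    skipped, never retransmitted."""
    assert 1.0 < overhead < 2.0               # the paper's 1 < n < 2 decoding factor
    needed = int(overhead * k)
    received = set()
    pulls = 0
    packet_id = 0
    while len(received) < needed:
        server = pulls % mirrors              # round-robin over the mirror set
        pulls += 1
        packet_id += 1
        if random.random() < loss:            # packet lost in transit: ignore it
            continue
        received.add((server, packet_id))     # any distinct encoded packet helps
    return pulls, len(received)

if __name__ == "__main__":
    random.seed(1)
    pulls, got = collect_encoded_blocks(k=100, overhead=1.05, mirrors=4)
    print(f"pulled {pulls} packets from 4 mirrors to gather {got} encoded blocks")

    Note that k is chosen much larger than the number of mirrors, per the first consideration above, and that the receiver never asks a specific server to resend a specific block; this feedback-free property is what EFRRA reuses in its collection step.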
C. Working Principle of EFRRA:
    A novel algorithm called EFRRA is proposed for efficient and fault-resilient replication of large files in the CDN. The working mechanism of EFRRA can be summarized as follows. In order to replicate a large file among 'n' nodes, the original file is partitioned into 'n' sub-files of equal size and each sub-file is transferred to a different node in the group. After that, each node propagates its sub-file to the remaining nodes in the group. Thus, instead of the typical replication of an entire file to 'n' nodes over the 'n' Internet paths connecting the original node to the replication group, this replica algorithm exploits n*n diverse Internet paths within the replication group, where each path is used for transferring 1/n-th of the file. Hence, the bandwidth requirement per path is reduced by a factor of 1/n.

    Step 1. Distribution of Content to Surrogate Servers: As shown in Fig. 1, the originator node N0 opens n concurrent network connections to nodes {N1, ..., Nn} and sends to each recipient node Ni (1 <= i <= n) the following (a short sketch of this step appears after the list):
    •    A distribution list of nodes R = {N1, ..., Nn} to which sub-file Fi has to be sent in the next step.
    •    Sub-file Fi.
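    A minimal Python sketch of this distribution schedule (our own illustration; the chunking helper, node names and in-memory 'store' stand in for real network transfers):

def partition(data: bytes, n: int):
    """Split the file into n nearly equal sub-files F1..Fn."""
    step = -(-len(data) // n)                       # ceiling division
    return [data[i * step:(i + 1) * step] for i in range(n)]

def efrra_distribute(data: bytes, nodes):
    """Step 1: the origin sends sub-file Fi to node i; then every node relays
    its own sub-file to all other nodes, so each of the n*n paths carries
    only 1/n of the file."""
    n = len(nodes)
    chunks = partition(data, n)
    store = {node: {} for node in nodes}
    for i, node in enumerate(nodes):                # origin -> Ni : Fi
        store[node][i] = chunks[i]
    for i, src in enumerate(nodes):                 # Ni -> Nj : Fi (relay step)
        for dst in nodes:
            if dst != src:
                store[dst][i] = chunks[i]
    return store

if __name__ == "__main__":
    nodes = [f"N{i}" for i in range(1, 5)]
    replicas = efrra_distribute(b"x" * 1000, nodes)
    assert all(len(parts) == len(nodes) for parts in replicas.values())
    print("every surrogate now holds all sub-files")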
Fig. 1. Distribution Step in EFRRA

    Step 2. Adding Fault Resiliency to EFRRA: This step keeps the main structure of the EFRRA replication algorithm practically unchanged while adding the desired property of resilience to node failure. To maintain resiliency, the surrogate servers in the network exchange heartbeat messages with their origin server. The heartbeat messages from the surrogate servers to their origin server are augmented with additional information on the corresponding algorithm. Once the content is distributed in the network, the receiver has to collect all the content from the network in a parallel manner.

    For example, if surrogate server N1 fails during the transfer, this may impact all surrogate servers N2, ..., Nn in the network, because each node depends on node N1 to receive sub-file F1. In the scenario shown in Fig. 2, surrogate server Ni is acting as a recipient server in the replication set. If a surrogate server fails while it acts as the origin surrogate server Ni, this failure impacts all the surrogate servers in the replication group, which may be the replication sub-tree rooted at surrogate server Ni.

Fig. 2. Adding Resiliency to EFRRA

    Step 3. Adding Resiliency during Content Collection at the Receiver: Once the entire file is distributed to all the surrogate servers in the overlay network of surrogate servers using Step 1, the recipient (client) node has to collect all the sub-files or blocks of the requested file from the overlay network of surrogate servers in a parallel manner.
The recipient node retrieves the original source file in the form of a sequence of 'k' encoded packets, along with additional redundant packets, transmitted by the sender; the redundant data can be used to recover lost source data at the receiver, so retransmission of lost packets is not needed. In this collection step also, the EFRRA algorithm maintains resiliency against surrogate server failures and link outages.
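    The following Python sketch is a hedged illustration of the resiliency idea in Steps 2 and 3, not the paper's implementation: the origin tracks heartbeats from the surrogates and, when a surrogate misses its heartbeat deadline, its sub-file is reassigned to the surviving members of the replication group (the timeout value and the round-robin reassignment policy are assumptions).

import time

HEARTBEAT_TIMEOUT = 3.0   # seconds without a heartbeat before a node is declared failed

class ReplicationMonitor:
    """Origin-side bookkeeping for EFRRA's fault-resiliency step (illustrative only)."""

    def __init__(self, nodes):
        self.last_seen = {node: time.time() for node in nodes}
        self.subfile_owner = {i: node for i, node in enumerate(nodes)}

    def heartbeat(self, node):
        """Called whenever a surrogate reports progress for its sub-file."""
        self.last_seen[node] = time.time()

    def failed_nodes(self, now=None):
        now = now or time.time()
        return [n for n, t in self.last_seen.items() if now - t > HEARTBEAT_TIMEOUT]

    def reassign(self):
        """Hand a failed node's sub-file to surviving surrogates so the
        remaining nodes finish the replication (cf. the R-FR analysis below)."""
        alive = [n for n in self.last_seen if n not in self.failed_nodes()]
        for i, owner in list(self.subfile_owner.items()):
            if owner not in alive and alive:
                self.subfile_owner[i] = alive[i % len(alive)]   # simple round-robin choice
        return self.subfile_owner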
                                                                              Hence Time FR          = 2 x Size (F) / (nxB)
                   IV.   PERFORMANCE ANALYSIS
    The performance analysis in this paper is twofold. First, we theoretically analyze the performance of the Efficient Fault Resilient Replica Algorithm (EFRRA) and compare it with traditional content replication algorithms. Then, by means of a simulation-based performance evaluation, we assess the efficiency and resiliency of the proposed EFRRA algorithm and compare its performance with traditional content replication algorithms such as sequential unicast, multiple unicast, Fast Replica (FR), Resilient Fast Replica (R-FR), and Tornado codes, which were described earlier [19][20].

A. ANALYTICAL STUDY:
    Let Time_i denote the transfer time of file F from the origin server N0 to surrogate server Ni, as measured at Ni. Two performance metrics are considered: the average and the maximum replication time.

Average Replication Time:
    Time_avg = (1/n) * Σ_{i=1..n} Time_i

Maximum Replication Time:
    Time_max reflects the time by which all the surrogate servers in the overlay network have received the k sub-files (1 <= k <= m) of the original file:
    Time_max = max{Time_i}, i = 1..n

    In an idealistic setting all the nodes and links are homogeneous, and each node can support 'n' network connections to other nodes at B bytes/sec. Then,
    Time_distribution = Size(F) / (n * B)      (1)
    Time_collection   = Size(F) / (n * B)      (2)

a. For Sequential Unicast
The content transfer time is the sum of the individual transfer times of the entire file from the source to each recipient independently:
    Time_distribution = Size(F)/B + Size(F)/B + ... + Size(F)/B = n * Size(F) / B

b. For Multiple Unicast
In Multiple Unicast, the original file F is completely replicated concurrently to all nodes {N1, ..., Nm}:
    Time_distribution = Size(F) / B

c. For Fast Replica
    Time_FR = Time_distribution + Time_collection
    Time_distribution = Size(Fi) / B = (1/n) * Size(F) / B = Size(F) / (n * B)
    Time_collection   = Size(Fi) / B = (1/n) * Size(F) / B = Size(F) / (n * B)
    Hence Time_FR = 2 * Size(F) / (n * B)

For Resilient Fast Replica (R-FR) without node failure:
    Time_R-FR = Time_distribution + Time_RR + Time_collection, where Time_RR is the time for resilient replication and Time_RR = 0, since server failure is an occasional event.
    Hence Time_R-FR = Time_distribution + Time_collection = 2 * Size(F) / (n * B)

For Resilient Fast Replica with failure of 'm' servers:
    Time_R-FR = Time_distribution + Time_RR + Time_collection = 2 * Size(F) / (n * B) + Time_RR
    If m = 1, sub-file Fi has to be replicated to n-1 servers in the overlay: Time_RR = Size(Fi) / ((n-1) * B)
    If m = 2, Fi has to be replicated to n-2 servers: Time_RR = Size(Fi) / ((n-1) * B) + Size(Fi) / ((n-2) * B)
    In general, for 'm' failures Fi has to be replicated to n-m servers in the overlay:
    Time_RR = Size(Fi)/((n-1)*B) + Size(Fi)/((n-2)*B) + ... + Size(Fi)/((n-m)*B)
            = (Size(Fi)/B) * (1/(n-1) + 1/(n-2) + ... + 1/(n-m))
    For large 'n' and small 'm':
    Time_RR ≈ (Size(Fi)/B) * (1/n + 1/n + ... + 1/n) = m * Size(Fi) / (n * B)
    Since Size(Fi) = Size(F)/n,
    Time_RR ≈ (m/n) * Size(F) / (n * B) = (m/n) * Time_distribution

d. For Optimal Fast Replica (O-FR)
    Time_O-FR = Time_Round1 + Time_Round2
    In Round 1 the origin server sends sub-file Fi to all the 'n' surrogate servers:
    Time_Round1 = Size(Fi) / (n * B) = (1/n) * Size(F) / (n * B) = Size(F) / (n^2 * B)
    In Round 2 the surrogate servers (N1, N2, ..., Nn) send their sub-files Fi to k surrogate servers in round-robin fashion within the surrogate server set {N1, N2, ..., Nk}, where 1 <= k <= n:
    Time_Round2 = Size(Fi) / (k * B) = (1/n) * Size(F) / (k * B) = Size(F) / (n * k * B), for k > 1
    Time_O-FR = Size(F) / (n^2 * B) + Size(F) / (n * k * B)
              = (1/n) * (1/n + 1/k) * Size(F) / B
              = ((k + n) / (n^2 * k)) * Size(F) / B

Replication Time of the Tornado Code algorithm:
    Time_TC = T_Encoding + T_TR + T_Decoding, where T_Encoding and T_Decoding are negligible compared with T_TR.
    T_TR = Size((n*k) encoded packets) / (n * B) + Size((n*k) encoded packets) / (n * B)
         = 2 * Size((n*k) encoded packets) / (n * B)
         = 2 * (c * Size(F)) / (n * B), where 1 <= c <= 2
    Time_TC ≈ T_TR = (2*c / n) * Size(F) / B, where 1 <= c <= 2

Replication Time of the EFRRA Algorithm:
    Time_distribution = T_FR = (Size(F) + Size(F)) / (n * B) = 2 * Size(F) / (n * B)
    Time_collection   = T_TR + T_Decoding, where T_Decoding << T_TR
    Time_collection   ≈ Size((n*k) encoded packets) / (n * B)
    Time_EFRRA = 2 * Size(F) / (n * B) + Size((n*k) encoded packets) / (n * B)
               = 2 * Size(F) / (n * B) + (c * Size(F)) / (n * B), where 1 <= c <= 2
    Time_EFRRA = ((2 + c) / n) * Size(F) / B, where 1 <= c <= 2

    Therefore, the replication time proportion of the different content distribution algorithms can be expressed as follows:

    Time_SU : Time_MU : Time_FR : Time_R-FR : Time_O-FR : Time_TC : Time_EFRRA
    = n * Size(F)/B : 1 * Size(F)/B : (2/n) * Size(F)/B : (m/n^2) * Size(F)/B : ((k+n)/(n^2*k)) * Size(F)/B : (2*c/n) * Size(F)/B : ((2+c)/n) * Size(F)/B
    = n : 1 : 2/n : m/n^2 : (k+n)/(n^2*k) : 2*c/n : (2+c)/n, where 1 <= c <= 2
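    These closed-form expressions are easy to evaluate side by side; the small Python sketch below (our own, using the idealized homogeneous-bandwidth model above, with example parameter values) prints the predicted replication times for each scheme.

def replication_times(size_f: float, n: int, B: float, m_failed: int = 1,
                      k: int = 4, c: float = 1.05):
    """Closed-form replication times from the analytical study (seconds).
    size_f: file size in bytes, n: surrogate servers, B: per-connection
    bandwidth in bytes/s, m_failed: failed servers for R-FR, k: O-FR
    round-robin fan-out, c: Tornado decoding overhead (1 <= c <= 2)."""
    base = size_f / B
    return {
        "sequential_unicast": n * base,
        "multiple_unicast":   base,
        "fast_replica":       2.0 / n * base,
        "resilient_fr":       m_failed / (n * n) * base,   # term listed in the proportion above
        "optimal_fr":         (k + n) / (n * n * k) * base,
        "tornado":            2.0 * c / n * base,
        "efrra":              (2.0 + c) / n * base,
    }

if __name__ == "__main__":
    for name, t in replication_times(size_f=9e6, n=8, B=1e6).items():
        print(f"{name:20s} {t:8.3f} s")

    For n = 8 the printed values follow the n : 1 : 2/n : m/n^2 : (k+n)/(n^2*k) : 2*c/n : (2+c)/n proportion derived above.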
B. EXPERIMENTAL ANALYSIS:
   To evaluate the CDN we used our complete simulation environment, CDNsim [21], which simulates a main CDN infrastructure. It is based on the OMNeT++ library, which provides a discrete event simulation environment. All CDN networking issues, such as surrogate server selection, SON formation, replication of the content from the origin server to the surrogate servers, implementation of the replication algorithms, propagation, queuing, utility computation and pricing computation, are computed dynamically via CDNsim [21], which provides a detailed implementation of the TCP/IP protocol stack.

[Fig. 3 (chart): average content replication time versus file size for Sequential Unicast, Multiple Unicast, Fast Replica, R-FR with 'm' node failure, Optimal Fast Replica, Tornado codes and EFRRA.]
Fig. 3. Average Content Replication Times for various Content distribution algorithms

b. Performance of different content distribution schemes in terms of Maximum Replication Time:
   We experimented with 12 different file sizes (100 KB, 750 KB, 1.5 MB, 3 MB, 4.5 MB, 6 MB, 7.5 MB, 9 MB, 36 MB, 54 MB, 72 MB and 128 MB) and 8 surrogate servers. Fig. 4 shows the maximum replication time measured by the individual recipient nodes for these file sizes when 8 surrogate servers are in the replication set. The maximum replication time is highly variable under Multiple Unicast and Sequential Unicast, whereas the maximum file replication time under the EFRRA replication algorithm is much more stable and predictable across the different file sizes in an 8-surrogate-server replication set.
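As an illustration of how the two metrics reported in Fig. 3 and Fig. 4 relate, the sketch below aggregates per-recipient replication times into an average and a maximum value per file size. It is not taken from CDNsim or the paper; the function name and the sample measurements are hypothetical.

    # Illustrative sketch (not CDNsim code): aggregate per-recipient replication
    # times into the average (Fig. 3) and maximum (Fig. 4) replication time.
    from statistics import mean

    def summarize(per_node_times):
        """per_node_times maps a file-size label to the list of replication times
        reported by the recipient surrogate servers (8 entries in our setup)."""
        return {size: {"average": mean(times), "maximum": max(times)}
                for size, times in per_node_times.items()}

    # Hypothetical measurements in milliseconds for two of the twelve file sizes.
    measurements = {
        "100 KB": [4.1, 4.3, 4.2, 4.5, 4.4, 4.2, 4.6, 4.3],
        "72 MB": [61.0, 63.5, 62.2, 64.8, 66.1, 62.9, 65.0, 63.7],
    }
    for size, stats in summarize(measurements).items():
        print(size, stats)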



[Fig. 4 (chart), "Maximum Replication Time Analysis": maximum replication time (in ms) versus file size for Sequential Unicast, Multiple Unicast, Fast Replica, R-FR with 'm' node failure, Optimal Fast Replica, Tornado codes and EFRRA.]
Fig. 4. Maximum Content Replication Times for various Content distribution algorithms
   Hence, the EFRRA algorithm, which uses resilient Fast Replica for replication together with a Tornado-based collection and decoding algorithm, outperforms the Sequential Unicast, Multiple Unicast, Fast Replica (FR), Resilient Fast Replica (R-FR), Optimal Fast Replica (O-FR) and Tornado codes content distribution schemes in most cases.

c. Performance of different content distribution schemes during Surrogate Server Failure:
   The worst-case delivery ratio of the different content distribution schemes (Sequential Unicast, Multiple Unicast, Fast Replica (FR), Resilient Fast Replica (R-FR), Optimal Fast Replica (O-FR), Tornado Codes and EFRRA) has been analyzed for increasing numbers of simultaneous surrogate server failures in the CDN, and the results are shown in Fig. 5. The delivery ratio is defined as the ratio of the number of data packets successfully received by the recipient surrogate servers to the number of data packets sent by the source surrogate server.
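The delivery ratio defined above, and the worst case reported in Fig. 5, can be computed directly from packet counters at the source and recipient surrogates. The following minimal sketch is our own illustration (not code from the paper or CDNsim), with hypothetical names and values.

    # Illustrative sketch (not from the paper or CDNsim): per-recipient delivery
    # ratio and the worst case across the replication set.
    def delivery_ratio(packets_received, packets_sent):
        """Packets successfully received by one recipient / packets sent by the source."""
        return packets_received / packets_sent if packets_sent else 0.0

    def worst_case_delivery_ratio(received_per_recipient, packets_sent):
        """Minimum delivery ratio over all recipient surrogate servers."""
        return min(delivery_ratio(r, packets_sent) for r in received_per_recipient)

    # Hypothetical example: 10,000 packets sent to 8 surrogates; one surrogate
    # was affected by a failure and received only 6,200 packets.
    received = [10_000, 10_000, 9_980, 10_000, 10_000, 10_000, 10_000, 6_200]
    print(worst_case_delivery_ratio(received, 10_000))   # 0.62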
   From the delivery ratio analysis shown in Fig. 5, we found that the delivery ratio of the EFRRA algorithm remains consistent during surrogate server failures. The delivery ratio of traditional algorithms such as Fast Replica (FR), Resilient Fast Replica (R-FR), Optimal Fast Replica (O-FR) and Tornado Codes degrades gracefully with respect to surrogate server failures. It is also observed that the delivery ratio of the Sequential Unicast and Multiple Unicast content distribution algorithms degrades drastically with respect to surrogate server failures.

[Fig. 5 (chart), "Delivery Ratio during Surrogate Server Failure": delivery ratio versus surrogate server failure fraction (0.01 to 0.10) for Sequential Unicast, Multiple Unicast, Fast Replica, Resilient Fast Replica, Optimal Fast Replica, Tornado Codes and EFRRA.]
Fig. 5. Delivery Ratio of Content Distribution Algorithms during Surrogate Server Failure.

                 V. CONCLUSION AND FUTURE SCOPE
   In this work, we proposed a novel EFRRA algorithm to distribute content in a CDN. We performed both an analytical and an empirical study of the proposed EFRRA algorithm and compared its performance with other content distribution algorithms such as Sequential Unicast, Multiple Unicast, Fast Replica (FR), Resilient Fast Replica (R-FR), Optimal Fast Replica (O-FR) and Tornado Codes in terms of average replication time and maximum replication time. We also conducted simulation experiments using CDNsim and analyzed the performance of the content distribution algorithms in terms of average content replication time, maximum content replication time and delivery ratio for large files. It is found that the EFRRA algorithm outperforms the other content distribution algorithms. In future work, the content distribution algorithms can be compared in terms of other parameters such as surrogate server utilization and optimized network bandwidth usage.

                      ACKNOWLEDGMENT
   The authors would like to thank the Project Coordinator and Project Directors of TIFAC CORE in Network Engineering, Arulmigu Kalasalingam College of Engineering, for providing the infrastructure facility in the Open Source Technology Laboratory, and also thank Kalasalingam Anandam Ammal Charities for providing financial support for this work.

                        REFERENCES
[1]  G. Pallis and A. Vakali, "Insight and Perspectives for Content Delivery Networks," Communications of the ACM, Vol. 49, No. 1, ACM Press, NY, USA, pp. 101-106, January 2006.
[2]  A. Vakali and G. Pallis, "Content Delivery Networks: Status and Trends," IEEE Internet Computing, IEEE Computer Society, pp. 68-74, November-December 2003.
[3]  G. Peng, "CDN: Content Distribution Network," Technical Report TR-125, Experimental Computer Systems Lab, Department of Computer Science, State University of New York, Stony Brook, NY, 2003. http://citeseer.ist.psu.edu/peng03cdn.htm
[4]  Content Distribution Networks – State of the Art. http://www.telin.nl, 2002.
[5]  L. Cherkasova and J. Lee, "FastReplica: Efficient Large File Distribution within Content Delivery Networks," Proc. of the 4th USENIX Symposium on Internet Technologies, March 2002.
[6]  ZhiHui Lu, WeiMing Fu, ShiYong Zhang, and YiPing Zhong, "TRRR: A Tree-Round-Robin-Replica Content Replication Algorithm for Improving FastReplica in Content Delivery Networks," Proc. of the 4th International Conference on Wireless Communications, Networking and Mobile Computing, 2008.
[7]  M. Hofmann and L. R. Beaumont, "Content Networking: Architecture, Protocols, and Practice," Morgan Kaufmann Publishers, San Francisco, CA, USA, pp. 129-134, 2005.
[8]  Z. Fei, S. Bhattacharjee, E. W. Zegura, and M. H. Ammar, "A Novel Server Selection Technique for Improving the Response Time of a Replicated Service," Proc. of IEEE INFOCOM, Vol. 2, San Francisco, CA, March 1998.
[9]  M. Sayal, Y. Breitbart, P. Scheuermann, and R. Vingralek, "Selection Algorithms for Replicated Web Servers," presented at the Workshop on Internet Server Performance, Madison, WI, June 1998.
[10] Laurent Massoulié, Anne-Marie Kermarrec, and Ayalvadi J. Ganesh, "Network Awareness and Failure Resilience in Self-Organising Overlay Networks," Proc. of the 22nd International Symposium on Reliable Distributed Systems, 2003.
[11] P. Rodriguez and E. Biersack, "Dynamic Parallel Access to Replicated Content in the Internet," IEEE/ACM Transactions on Networking, 10(4), August 2002.
[12] J. Kangasharju, J. Roberts, and K. W. Ross, "Object Replication Strategies in Content Distribution Networks," Computer Communications, 25(4):367-383, March 2002.
[13] A. M. K. Pathan and R. Buyya, "A Taxonomy and Survey of CDNs," Technical Report GRIDS-TR-2007-4, The University of Melbourne, Australia, February 2007.
[14] J. Broberg, R. Buyya, and Z. Tari, "MetaCDN: Harnessing 'Storage Clouds' for High Performance Content Delivery," Journal of Network and Computer Applications, in press, corrected proof, available online 5 April 2009.
[15] M. O. Rabin, "Efficient Dispersal of Information for Security, Load Balancing, and Fault Tolerance," Journal of the ACM, Vol. 38, pp. 335-348, 1989.
[16] J. Byers, M. Luby, M. Mitzenmacher, and A. Rege, "A Digital Fountain Approach to Reliable Distribution of Bulk Data," Proc. of ACM SIGCOMM, 1998.
[17] J. Byers, M. Luby, M. Mitzenmacher, and A. Rege, "Accessing Multiple Mirror Sites in Parallel: Using Tornado Codes to Speed Up Downloads," Proc. of IEEE INFOCOM, Vol. 1, New York, NY, March 1999, pp. 275-283.
[18] Danny Bickson and Dahlia Malkhi, "The Julia Content Distribution Network," Proc. of the 2nd Conference on Real, Large Distributed Systems, pp. 37-41, San Francisco, CA, December 2005.
[19] J. Amutharaj and S. Radhakrishnan, "Dominating Set based Semantic Overlay Networks for Efficient Content Distribution," Proc. of IEEE ICSCN 2007, Vol. 1, Madras Institute of Technology, Anna University, Chennai, February 2007.
[20] J. Amutharaj and S. Radhakrishnan, "Dominating Set Theory based Semantic Overlay Networks for Efficient and Resilient Content Distribution," Journal of Networks, Academy Publishers, Vol. 3, March 2008.
[21] K. Stamos, G. Pallis, A. Vakali, D. Katsaros, A. Sidiropoulos, and Y. Manolopoulos, "CDNsim: A Simulation Tool for Content Distribution Networks," ACM Transactions on Modeling and Computer Simulation, April 2010.
[22] Srinivas Shakkottai and Ramesh Johari, "Demand-Aware Content Distribution on the Internet," IEEE/ACM Transactions on Networking, Vol. 18, No. 2, April 2010.
[23] Zhijia Chen, Yang Zhao, Chuang Lin, and Qingbo Wang, "Accelerating Large-scale Data Distribution in Booming Internet: Effectiveness, Bottlenecks and Practices," IEEE Transactions on Consumer Electronics, Vol. 55, No. 2, May 2009.
[24] Ye Xia, Shigang Chen, Chunglae Cho, and Vivekanand Korgaonkar, "Algorithms and Performance of Load-Balancing with Multiple Hash Functions in Massive Content Distribution," The International Journal of Computer and Telecommunications Networking, Vol. 53, Issue 1, Elsevier, January 2009.
[25] Yaozhou Ma and Abbas Jamalipour, "A Cooperative Cache-Based Content Delivery Framework for Intermittently Connected Mobile Ad Hoc Networks," IEEE Transactions on Wireless Communications, Vol. 9, No. 1, January 2010.
[26] O. Ozkasap, M. Caglar, and A. Alagoz, "Principles and Performance Analysis of SeCond: A System for Epidemic Peer-to-Peer Content Distribution," Journal of Network and Computer Applications, Vol. 32, Issue 3, Elsevier, May 2009.
[27] Jun Wu and Kaliappa Ravindran, "Optimization Algorithms for Proxy Server Placement in Content Distribution Networks," IFIP/IEEE International Symposium on Integrated Network Management - Workshops, 2009.

                      AUTHORS PROFILE
Amutharaj Joyson received his Bachelor of Engineering degree from Manonmaniam Sundaranar University, Tirunelveli, and his Master of Engineering from Madurai Kamaraj University, Madurai. He is currently pursuing his doctoral programme at Anna University, Chennai. He is a member of CSI, IAEng, ISTE and the Network Technology Group of TIFAC-CORE in Network Engineering. He has published two papers in international journals on communication networks, and has presented four research papers in international conferences and twenty research papers in national conferences in network engineering. His areas of interest are Content Distribution Networks, Mobile Ad hoc Networks, Network Technologies, Distributed Computing, Real-time Systems, and Evolutionary Optimization.

S. Radhakrishnan received his Master of Technology and Doctorate in Philosophy from the Institute of Technology, Banaras Hindu University, Banaras. He is the Director and Head of the Computer Science and Engineering Department at Arulmigu Kalasalingam College of Engineering, Srivilliputhur. He is a member of ISTE. He is a Principal Investigator of a Research Promotion Scheme (RPS) project funded by the Department of Science and Technology, Government of India. He is the Project Director of the Network Technology Group of TIFAC-CORE in Network Engineering; this prestigious project is funded by TIFAC, Department of Science and Technology, Government of India, and Cisco Systems. He has produced eight Ph.D.s and is currently guiding twelve research scholars in the areas of Network Engineering, Network Processors, Network Security, Sensor Networks, Optical Networks, Wireless Networks, Evolutionary Optimization and Bio-medical Instrumentation.