Networks - Global Soft Solutions



1.Privacy- and Integrity-Preserving Range Queries in Sensor Networks

The architecture of two-tiered sensor networks, where storage nodes serve as an
intermediate tier between sensors and a sink for storing data and processing
queries, has been widely adopted because of the benefits of power and storage
saving for sensors as well as the efficiency of query processing. However, the
importance of storage nodes also makes them attractive to attackers. In this
paper, we propose SafeQ, a protocol that prevents attackers from gaining
information from both sensor-collected data and sink-issued queries. SafeQ also
allows a sink to detect compromised storage nodes when they misbehave. To
preserve privacy, SafeQ uses a novel technique to encode both data and queries
such that a storage node can correctly process encoded queries over encoded data
without knowing their values. To preserve integrity, we propose two schemes—one
using Merkle hash trees and another using a new data structure called
neighborhood chains—to generate integrity verification information so that a sink
can use this information to verify whether the result of a query contains exactly
the data items that satisfy the query. To improve performance, we propose an
optimization technique using Bloom filters to reduce the communication cost
between sensors and storage nodes.
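The Bloom-filter optimization mentioned above can be pictured with a minimal generic sketch; the array size, the SHA-256-based hash construction, and the item format below are illustrative assumptions, not SafeQ's actual parameters:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hashed positions per item in an m-bit array."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item):
        # Derive k array positions from slices of a SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # Membership test: no false negatives, tunable false-positive rate.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("sensor-3:reading-17")
assert "sensor-3:reading-17" in bf
```

A sensor could transmit such a compact bit array in place of an explicit item list, which is how a Bloom filter trades a small false-positive probability for lower communication cost.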

2.Optimal Source-Based Filtering of Malicious Traffic

In this paper, we consider the problem of blocking malicious traffic on
the Internet via source-based filtering. In particular, we consider
filtering via access control lists (ACLs): These are already available at
the routers today, but are a scarce resource because they are stored in
the expensive ternary content addressable memory (TCAM). Aggregation
(by filtering source prefixes instead of individual IP addresses) helps
reduce the number of filters, but comes also at the cost of blocking
legitimate traffic originating from the filtered prefixes. We show how
to optimally choose which source prefixes to filter for a variety of
realistic attack scenarios and operators' policies. In each scenario, we
design optimal, yet computationally efficient, algorithms. Using logs from
                    , we evaluate the algorithms and demonstrate that they bring
significant benefit in practice.
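The filters-versus-collateral-damage trade-off can be illustrated with a toy greedy sketch (this is not the paper's optimal algorithms; the /24 merge rule, the example addresses, and the budget are illustrative assumptions):

```python
from collections import defaultdict
from ipaddress import ip_address, ip_network

def choose_filters(attack_ips, good_ips, max_filters):
    """Hypothetical greedy sketch of source-prefix aggregation: start
    with one /32 filter per attacker, then merge the densest /24s until
    the TCAM filter budget is met, accepting the collateral damage of
    blocking legitimate sources inside the merged prefixes."""
    filters = {ip_network(ip + "/32") for ip in attack_ips}
    by_prefix = defaultdict(list)
    for ip in attack_ips:
        by_prefix[ip_network(ip + "/24", strict=False)].append(ip)
    # Merge the /24 covering the most attackers first.
    for prefix, members in sorted(by_prefix.items(),
                                  key=lambda kv: -len(kv[1])):
        if len(filters) <= max_filters:
            break
        filters -= {ip_network(ip + "/32") for ip in members}
        filters.add(prefix)
    # Count legitimate sources caught by the chosen filters.
    collateral = sum(any(ip_address(g) in f for f in filters)
                     for g in good_ips)
    return filters, collateral
```

With attackers 10.0.0.1-3 and 10.1.0.1 and a budget of two filters, the sketch collapses the dense 10.0.0.0/24 and keeps a single /32, blocking any legitimate host inside that /24 as collateral.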



In cognitive radio networks, the signal reception quality of a secondary user
degrades due to interference from multiple heterogeneous primary networks, while
its transmission activity is constrained by its interference to the primary
networks. This makes it difficult to ensure the connectivity of the secondary
network. However, since there may exist multiple heterogeneous
secondary networks with different radio access technologies, such secondary
networks may be treated as one secondary network via proper cooperation, to
improve connectivity. In this paper, we investigate the connectivity of such a
cooperative secondary network from a percolation-based perspective, in which each
secondary network's user may have other secondary networks' users acting as
relays. The connectivity of this cooperative secondary network is characterized in
terms of percolation threshold, from which the benefit of cooperation is justified.
For example, while a noncooperative secondary network does not percolate,
percolation may occur in the cooperative secondary network; or when a
noncooperative secondary network percolates, less power would be required to
sustain the same level of connectivity in the cooperative secondary network.



Wormhole attacks are considered as a severe security threat in multi-hop wireless
ad hoc networks. In this paper, we propose an Energy-Efficient Scheme Immune to
Wormhole attacks (E2SIW). This protocol uses the location
information of nodes to detect the presence of a wormhole, and in case a wormhole
exists in the path, it finds alternate routes involving the nodes of the selected path
so as to obtain a secure route to the destination. The protocol is capable of
detecting wormhole attacks employing either hidden or participating malicious
nodes. Simulations show that E2SIW detects wormholes with a high detection rate
and low overhead, and consumes less energy and time than the DeWorm wormhole
detection protocol, chosen as a benchmark.
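The core location-based test can be sketched in a few lines (this is only the underlying idea, not E2SIW's exact procedure; the radio range and coordinates are illustrative assumptions):

```python
import math

MAX_RANGE = 100.0  # assumed maximum radio range in metres (illustrative)

def wormhole_suspected(pos_a, pos_b, hops=1, max_range=MAX_RANGE):
    """Location-based plausibility check: if two nodes claiming to be
    `hops` apart are geographically farther than hops * max_range, the
    advertised path is physically impossible, so a wormhole tunnel is
    suspected between them."""
    return math.dist(pos_a, pos_b) > hops * max_range

assert wormhole_suspected((0, 0), (350, 0))       # impossible one-hop link
assert not wormhole_suspected((0, 0), (80, 0))    # plausible neighbour
```

When the check fires, a protocol in this spirit would discard the suspect link and search for alternate routes among the remaining nodes of the selected path.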


Intrusion detection systems (IDSs) are usually designed to work on local
networks. However, with the development of mobile networks and their applications,
it has become necessary to develop new IDS architectures that operate on these
networks in order to detect problems and ensure the correct operation of data
communications and their applications. This paper presents a distributed IDS model
for mobile ad hoc networks that can identify and punish those network nodes that
have malicious behavior. In this paper we describe the proposed model, making a
comparison with major efforts in the literature on distributed intrusion detection
systems for mobile ad hoc networks.



Wireless links are often asymmetric due to heterogeneity in the transmission power
of devices, non-uniform environmental noise, and other signal propagation
phenomena. Unfortunately, routing protocols for mobile ad hoc networks typically
work well only in bidirectional networks. This paper first presents a simulation
study quantifying the impact of asymmetric links on network connectivity and
routing performance. It then presents a framework called BRA that provides a
bidirectional abstraction of the asymmetric network to routing protocols. BRA
works by maintaining multi-hop reverse routes for unidirectional links and provides
three new abilities: improved connectivity by taking advantage of the unidirectional
links, reverse route forwarding of control packets to enable off-the-shelf routing
protocols, and detection of packet loss on unidirectional links. Extensive simulations of
AODV layered on BRA show that packet delivery increases substantially (two-fold
in some instances) in asymmetric networks compared to regular AODV, which only
routes on bidirectional links.
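The reverse-route idea can be sketched with a plain directed-graph search (a sketch of the concept only, not BRA's protocol machinery; the three-node topology is an illustrative assumption):

```python
from collections import deque

def reverse_route(links, src, dst):
    """For a unidirectional link dst->src, find a multi-hop path
    src -> dst over the directed topology `links` (a list of (u, v)
    pairs) that can carry acknowledgements and control packets back,
    giving routing protocols a bidirectional abstraction of the link."""
    graph = {}
    for u, v in links:
        graph.setdefault(u, []).append(v)
    # Plain BFS over the directed graph.
    queue, parent = deque([src]), {src: None}
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in graph.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

# B->A is unidirectional; the multi-hop reverse route A->C->B restores
# two-way communication over it.
links = [("B", "A"), ("A", "C"), ("C", "B")]
assert reverse_route(links, "A", "B") == ["A", "C", "B"]
```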



Low latency is a critical requirement in some switching applications, specifically in
parallel computer interconnection networks. The minimum latency in switches with
centralized scheduling comprises two components, namely, the control-path latency
and the data-path latency, which in a practical high-capacity, distributed switch
implementation can be far greater than the cell duration. We introduce a
speculative transmission scheme to significantly reduce the average control-path
latency by allowing cells to proceed without waiting for a grant, under certain
conditions. It operates in conjunction with any centralized matching algorithm to
achieve a high maximum utilization and incorporates a reliable delivery mechanism
to deal with failed speculations. An analytical model is presented to investigate the
efficiency of the speculative transmission scheme employed in a non-blocking
N × NR input-queued crossbar switch with R receivers per output. Using this
model, performance measures such as the mean delay and the rate of successful
speculative transmissions are derived. The results demonstrate that the control-
path latency can be almost entirely eliminated for loads up to 50%. Our simulations
confirm the analytical results.


We consider a scenario where a sophisticated jammer jams an area in a
single-channel wireless sensor network. The jammer controls the probability
of jamming and transmission range to cause maximal damage to the network
in terms of corrupted communication links. The jammer action ceases when it
is detected by a monitoring node in the network, and a notification message
is transferred out of the jamming region. The jammer is detected at a
monitor node by employing an optimal detection test based on the
percentage of incurred collisions. On the other hand, the network computes
channel access probability in an effort to minimize the jamming detection
plus notification time. In order for the jammer to optimize its benefit, it
needs to know the network channel access probability and number of
neighbors of the monitor node. Accordingly, the network needs to know the
jamming probability of the jammer. We study the idealized case of perfect
knowledge by both the jammer and the network about the strategy of one
another, and the case where the jammer or the network lack this knowledge.

The latter is captured by formulating and solving optimization problems, the
solutions of which constitute best responses of the attacker or the network
to the worst-case strategy of each other. We also take into account
potential energy constraints of the jammer and the network. We extend the
problem to the case of multiple observers and adaptable jamming
transmission range and propose an intuitive heuristic jamming strategy for
that case.
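The monitor's collision-based detection can be sketched as a likelihood-ratio test (a fixed-sample sketch of the idea; the paper's optimal test and the baseline/jammed collision probabilities below are illustrative assumptions):

```python
import math

def collision_llr(collisions, slots, p0=0.1, p1=0.4):
    """Log-likelihood ratio of the observed collision count under a
    'jammed' hypothesis (collision probability p1) versus 'normal'
    operation (p0); p0 and p1 are illustrative values."""
    k, n = collisions, slots
    return (k * math.log(p1 / p0)
            + (n - k) * math.log((1 - p1) / (1 - p0)))

def jamming_detected(collisions, slots, threshold=3.0):
    # Declare jamming once the evidence exceeds a fixed threshold; a
    # sequential test would instead update the ratio slot by slot and
    # stop as soon as either hypothesis is sufficiently supported.
    return collision_llr(collisions, slots) > threshold

assert jamming_detected(40, 100)      # 40% collisions: strong evidence
assert not jamming_detected(8, 100)   # near the normal 10% baseline
```

Note the interplay described above: the network's channel access probability shifts both p0 and p1, which is why each side benefits from knowing the other's strategy.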

Although random access operations are desirable for on-demand video streaming in
peer-to-peer systems, they are difficult to efficiently achieve due to the
asynchronous interactive behaviors of users and the dynamic nature of peers. In
this paper, we propose a network coding equivalent content distribution (NCECD)
scheme to efficiently handle interactive video-on-demand (VoD) operations in peer-
to-peer systems. In NCECD, videos are divided into segments that are then further
divided into blocks. These blocks are encoded into independent blocks that are
distributed to different peers for local storage. With NCECD, a new client only
needs to connect to a sufficient number of parent peers to be able to view the
whole video and rarely needs to find new parents when performing random access
operations. In most existing methods, a new client must search for parent peers
containing specific segments; however, NCECD uses the properties of network
coding to cache equivalent content in peers, so that one can pick any parent without
additional searches. Experimental results show that the proposed scheme achieves
low startup and jump searching delays and requires fewer server resources. In
addition, we present the analysis of system parameters to achieve reasonable block
loss rates for the proposed scheme.
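The "equivalent content" property rests on random linear network coding: any sufficiently large set of independently coded blocks suffices to recover the originals, so any parent is as good as any other. A minimal sketch over GF(2) (an illustration of the coding idea only, not NCECD's actual construction or field size):

```python
import random

def encode(blocks, n_coded, seed=1):
    """Random linear coding over GF(2): each coded block is the XOR of
    a random subset of the source blocks, tagged with its coefficient
    vector so a downloader knows what combination it holds."""
    rng = random.Random(seed)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in blocks]
        if not any(coeffs):                    # skip the useless zero vector
            coeffs[rng.randrange(len(blocks))] = 1
        payload = 0
        for c, b in zip(coeffs, blocks):
            if c:
                payload ^= b
        coded.append((coeffs, payload))
    return coded

def decodable(coded, k):
    """A peer holding these coded blocks can recover all k source
    blocks iff its coefficient vectors span GF(2)^k; full decoding
    would run the same elimination on the payloads as well."""
    rows = [int("".join(map(str, c)), 2) for c, _ in coded]
    rank = 0
    for bit in range(k):
        idx = next((i for i, r in enumerate(rows) if (r >> bit) & 1), None)
        if idx is None:
            continue
        pivot = rows.pop(idx)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank == k
```

This is why a new client only needs "a sufficient number" of parents: it keeps collecting coded blocks until the coefficient matrix reaches full rank, regardless of which peers supplied them.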

High-speed routers rely on well-designed packet buffers that support multiple
queues, provide large capacity and short response times. Some researchers
suggested combined SRAM/DRAM hierarchical buffer architectures to meet these
challenges. However, these architectures suffer from either large SRAM
requirement or high time complexity in memory management. In this paper, we
present a scalable, efficient, and novel distributed packet buffer architecture.
Two fundamental issues need to be addressed to make this architecture feasible: 1) how
to minimize the overhead of an individual packet buffer; and 2) how to design
scalable packet buffers using independent buffer subsystems. We address these
issues by first designing an efficient compact buffer that reduces the SRAM size
requirement by (k-1)/k. Then, we introduce a feasible way of coordinating multiple
subsystems with a load-balancing algorithm that maximizes the overall system
performance. Both theoretical analysis and experimental results demonstrate that
our load-balancing algorithm and the distributed packet buffer architecture can
easily scale to meet the buffering needs of high bandwidth links and satisfy the
requirements of scale and support for multiple queues.

On Optimizing Overlay Topologies for Search in Unstructured Peer-to-Peer Networks

Unstructured peer-to-peer (P2P) file-sharing networks are popular in the mass
market. As the peers participating in unstructured networks interconnect randomly,
they rely on flooding query messages to discover objects of interest and thus
introduce remarkable network traffic. Empirical measurement studies indicate that
peers in P2P networks have similar preferences, and researchers have recently
proposed unstructured P2P networks that organize participating peers by exploiting
their similarity. However, the resultant networks may not perform searches efficiently and
effectively because existing overlay topology construction algorithms often create
unstructured P2P networks without performance guarantees. Thus, we propose a
novel overlay formation algorithm for unstructured P2P networks. Based on the file
sharing pattern exhibiting the power-law property, our proposal is unique in that it
poses rigorous performance guarantees. Theoretical performance results conclude
that with constant probability, 1) searching for an object in our proposed network
takes O(ln^c N) hops (where c is a small constant), and 2) the search
progressively and effectively exploits the similarity of peers. In addition, the
success ratio of discovering an object approximates 100 percent. We validate our
theoretical analysis and compare our proposal to competing algorithms in
simulations. Based on the simulation results, our proposal clearly outperforms the
competing algorithms in terms of 1) the hop count of routing a query message, 2)
the success ratio of resolving a query, 3) the number of messages required for
resolving a query, and 4) the message overhead for maintaining and formatting the overlay.



A distributed adaptive opportunistic routing scheme for multihop wireless ad hoc
networks is proposed. The proposed scheme utilizes a reinforcement learning
framework to opportunistically route the packets even in the absence of reliable
knowledge about channel statistics and network model. This scheme is shown to be
optimal with respect to an expected average per-packet reward criterion. The
proposed routing scheme jointly addresses the issues of learning and routing in an
opportunistic context, where the network structure is characterized by the
transmission success probabilities. In particular, this learning framework leads to a
stochastic routing scheme that optimally “explores” and “exploits” the opportunities
in the network.

Excess capacity (EC) is the unused capacity in a network. We propose EC
management techniques to improve network performance. Our techniques exploit
the EC in two ways. First, a connection preprovisioning algorithm is used to reduce
the connection setup time. Second, whenever possible, we use protection schemes
that have higher availability and shorter protection switching time. Specifically,
depending on the amount of EC available in the network, our proposed EC
management techniques dynamically migrate connections between high-availability,
high-backup-capacity protection schemes and low-availability, low-backup-capacity
protection schemes. Thus, multiple protection schemes can coexist in the network.
The four EC management techniques studied in this paper differ in two respects:
when the connections are migrated from one protection scheme to another, and
which connections are migrated. Specifically, Lazy techniques migrate connections
only when necessary, whereas Proactive techniques migrate connections to free up
capacity in advance. Partial Backup Reprovisioning (PBR) techniques try to migrate a
minimal set of connections, whereas Global Backup Reprovisioning (GBR) techniques
migrate all connections. We develop integer linear program (ILP) formulations and
heuristic algorithms for the EC management techniques. We then present numerical
examples to illustrate how the EC management techniques improve network
performance by exploiting the EC in wavelength-division-multiplexing (WDM) mesh
networks.

Improving Energy Saving and Reliability in Wireless Sensor Networks Using a Simple CRT-Based Packet-Forwarding Solution

This paper deals with a novel forwarding scheme for wireless sensor networks
aimed at combining low computational complexity and high performance in terms of
energy efficiency and reliability. The proposed approach relies on a packet-splitting
algorithm based on the Chinese Remainder Theorem (CRT) and is characterized by a
simple modular division between integers. An analytical model for estimating the
energy efficiency of the scheme is presented, and several practical issues such as
the effect of unreliable channels, topology changes, and MAC overhead are
discussed. The results obtained show that the proposed algorithm outperforms
traditional approaches in terms of power saving, simplicity, and fair distribution of
energy consumption among all nodes in the network.
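The CRT packet-splitting idea can be sketched directly (the moduli and payload below are illustrative assumptions; the scheme's appeal is that each node only performs a simple modular division):

```python
from math import prod

def crt_split(value, moduli):
    """Split a packet (viewed as an integer) into CRT residues; the
    moduli must be pairwise coprime and their product must exceed the
    value. Each small residue can be forwarded along a different path."""
    assert value < prod(moduli)
    return [value % m for m in moduli]

def crt_reconstruct(residues, moduli):
    """Recombine the residues via the Chinese Remainder Theorem."""
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return total % M

moduli = [251, 253, 255, 256]  # pairwise coprime; product ~4.1e9
packet = 0xDEADBEEF            # a 32-bit payload below that product
shares = crt_split(packet, moduli)
assert crt_reconstruct(shares, moduli) == packet
```

Because each forwarded share is much smaller than the original packet, transmission energy is spread across several relays, which is the fairness and power-saving effect the abstract reports.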

The inherent measurement support in routers (SNMP counters or NetFlow) is not
sufficient to diagnose performance problems in IP networks, especially for flow-
specific problems where the aggregate behavior within a router appears normal.
Tomographic approaches to detect the location of such problems are not feasible in
such cases as active probes can only catch aggregate characteristics. To address
this problem, in this paper, we propose a Consistent NetFlow (CNF) architecture
for obtaining per-flow delay measurements within routers. CNF utilizes the
existing NetFlow architecture that already reports the first and last timestamps
per flow, and it proposes hash-based sampling to ensure that two adjacent routers
record the same flows. We devise a novel Multiflow estimator that approximates
the intermediate delay samples from other background flows to significantly
improve the per-flow latency estimates compared to the naive estimator that only
uses actual flow samples. In our experiments using real backbone traces and
realistic delay models, we show that the Multiflow estimator is accurate with a
median relative error of less than 20% for flows of size greater than 100 packets.
We also show that the Multiflow estimator performs two to three times better than a
prior approach based on trajectory sampling at an equivalent packet sampling rate.
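The hash-based sampling that makes adjacent routers record the same flows can be sketched as follows (a sketch of the idea only; the hash function, key format, and sampling rate are illustrative assumptions):

```python
import hashlib

SAMPLING_RATE = 0.25  # illustrative: sample roughly a quarter of all flows

def sample_flow(flow_key, rate=SAMPLING_RATE):
    """Hash-based flow sampling: every router hashes the same flow key
    (e.g. the 5-tuple) with the same function and keeps the flow iff
    the hash falls below the rate threshold, so adjacent routers
    independently select the *same* subset of flows."""
    digest = hashlib.sha256(flow_key.encode()).digest()
    value = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return value < rate

flows = [f"10.0.0.{i}:5000->10.1.0.1:80/tcp" for i in range(100)]
router_a = {f for f in flows if sample_flow(f)}
router_b = {f for f in flows if sample_flow(f)}  # second router, same rule
assert router_a == router_b  # both record the same flows
```

Matching flow records at both ends is what allows per-flow delay to be computed from the first and last timestamps that NetFlow already exports.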



17.Efficient Error Estimating Coding: Feasibility and Applications

Motivated by recent emerging systems that can leverage partially correct packets
in wireless networks, this paper proposes the novel concept of error estimating
coding (EEC). Without correcting the errors in the packet, EEC enables the
receiver of the packet to estimate the packet's bit error rate, which is perhaps
the most important meta-information of a partially correct packet. Our EEC design
provides provable estimation quality with rather low redundancy and computational
overhead. To demonstrate the utility of EEC, we exploit and implement EEC in two
wireless network applications, Wi-Fi rate adaptation and real-time video streaming.
Our real-world experiments show that these applications can significantly benefit
from EEC.
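A toy version of BER estimation from parity checks illustrates the principle (this is a simplified stand-in for EEC's sampled multi-level parities; the group size is an illustrative assumption and parities are assumed to arrive uncorrupted):

```python
import random

GROUP = 32  # data bits per parity group (illustrative EEC-style parameter)

def add_parities(bits, group=GROUP):
    """Append one parity bit per group of data bits."""
    return [sum(bits[i:i + group]) % 2 for i in range(0, len(bits), group)]

def estimate_ber(bits, parities, group=GROUP):
    """Estimate the bit error rate from the fraction f of failed parity
    checks by inverting P(check fails) = (1 - (1 - 2p)^g) / 2, which
    holds for independent bit flips with probability p in a group of
    size g."""
    fails = sum(sum(bits[i * group:(i + 1) * group]) % 2 != p
                for i, p in enumerate(parities))
    f = min(fails / len(parities), 0.499)  # keep the inversion defined
    return (1 - (1 - 2 * f) ** (1 / group)) / 2

rng = random.Random(7)
data = [rng.randint(0, 1) for _ in range(32000)]
parities = add_parities(data)
noisy = [b ^ (rng.random() < 0.01) for b in data]  # flip bits with p = 1%
assert abs(estimate_ber(noisy, parities) - 0.01) < 0.01
```

This is the sense in which the receiver learns the packet's BER without correcting a single error: only the count of failed checks is needed, not the error positions.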

18.Topology Control in Mobile Ad Hoc Networks with Cooperative Communications

Cooperative communication has received tremendous interest for wireless
networks. Most existing works on cooperative communications are focused on link-
level physical layer issues. Consequently, the impacts of cooperative communications
on network-level upper layer issues, such as topology control, routing and network
capacity, are largely ignored. In this article, we propose a Capacity-Optimized
Cooperative (COCO) topology control scheme to improve the network capacity in
MANETs by jointly considering both upper layer network capacity and physical
layer cooperative communications. Through simulations, we show that physical layer
cooperative communications have significant impacts on the network capacity, and
the proposed topology control scheme can substantially improve the network
capacity in MANETs with cooperative communications.

19.Latency Equalization as a New Network Service Primitive

Multiparty interactive network applications such as teleconferencing, network
gaming, and online trading are gaining popularity. In addition to end-to-end latency
bounds, these applications require that the delay difference among multiple clients
of the service is minimized for a good interactive experience. We propose a Latency
EQualization (LEQ) service, which equalizes the perceived latency for all clients
participating in an interactive network application. To effectively implement the
proposed LEQ service, network support is essential. The LEQ architecture uses a
few routers in the network as hubs to redirect packets of interactive applications
along paths with similar end-to-end delay. We first formulate the hub selection
problem, prove its NP-hardness, and provide a greedy algorithm to solve it. Through
extensive simulations, we show that our LEQ architecture significantly reduces
delay difference under different optimization criteria that allow or do not allow
compromising the per-user end-to-end delay. Our LEQ service is incrementally
deployable in today's networks, requiring only software modifications to edge
routers.
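The hub selection underlying LEQ can be illustrated with a toy greedy sketch (the abstract notes the exact problem is NP-hard; the delay values below are hypothetical, and clients here simply take their lowest-delay chosen hub, a simplification of the paper's assignment):

```python
def greedy_hubs(delay_via, k):
    """Toy greedy hub selection: delay_via[c][h] is client c's
    end-to-end delay through candidate hub h. Pick k hubs so that,
    with each client routed via its best available hub, the spread
    (max - min) of client delays is smallest."""
    n_hubs = len(delay_via[0])
    chosen = []
    for _ in range(k):
        best = None
        for h in range(n_hubs):
            if h in chosen:
                continue
            trial = chosen + [h]
            delays = [min(row[j] for j in trial) for row in delay_via]
            spread = max(delays) - min(delays)
            if best is None or spread < best[0]:
                best = (spread, h)
        chosen.append(best[1])
    return chosen

# Two clients, three candidate hubs: hub 1 nearly equalizes them.
delay_via = [[30, 50, 90],   # client 0's delay through hubs 0..2
             [80, 55, 40]]   # client 1's delay through hubs 0..2
assert greedy_hubs(delay_via, 1) == [1]
```

The example shows the service's goal: hub 1 leaves the two clients within 5 ms of each other, whereas direct paths (30 vs 80) would give one player a 50 ms head start.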