IEEE_Projects

DATA ALCOTT SYSTEMS
3rd Floor, Old No.13/1, New No.27, Brindavan Street, West Mambalam, Chennai-600033
Ph: (0)9600095047 | ieeeraja@gmail.com

S.No | Title | Domain | Technology | Year
1 | A TABU SEARCH ALGORITHM FOR CLUSTER BUILDING IN WIRELESS SENSOR NETWORKS | MOBILE COMPUTING | DOT NET | 2009
2 | ROUTE STABILITY IN MANETS UNDER THE RANDOM DIRECTION MOBILITY MODEL | MOBILE COMPUTING | DOT NET | 2009
3 | GREEDY ROUTING WITH ANTI-VOID TRAVERSAL FOR WIRELESS SENSOR NETWORKS | MOBILE COMPUTING | DOT NET | 2009
4 | CELL BREATHING TECHNIQUES FOR LOAD BALANCING IN WIRELESS LANS | MOBILE COMPUTING | DOT NET | 2009
5 | RESEQUENCING ANALYSIS OF STOP-AND-WAIT ARQ FOR PARALLEL MULTICHANNEL COMMUNICATIONS | NETWORKING | DOT NET | 2009
6 | RESOURCE ALLOCATION IN OFDMA WIRELESS COMMUNICATIONS SYSTEMS SUPPORTING MULTIMEDIA SERVICES | NETWORKING | DOT NET | 2009
7 | ENHANCING PRIVACY AND AUTHORIZATION CONTROL SCALABILITY IN THE GRID THROUGH ONTOLOGIES | INFORMATION TECHNOLOGY IN BIOMEDICINE | JAVA | 2009
8 | COMBINATORIAL APPROACH FOR PREVENTING SQL INJECTION ATTACKS | ADVANCE COMPUTING CONFERENCE | J2EE | 2009
9 | DYNAMIC SEARCH ALGORITHM IN UNSTRUCTURED PEER-TO-PEER NETWORKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA | 2009
10 | ANALYSIS OF SHORTEST PATH ROUTING FOR LARGE MULTI-HOP WIRELESS NETWORKS | NETWORKING | DOT NET | 2009
11 | SECURE AND POLICY-COMPLIANT SOURCE ROUTING | NETWORKING | DOT NET | 2009
12 | FLEXIBLE DETERMINISTIC PACKET MARKING: AN IP TRACEBACK SYSTEM TO FIND THE REAL SOURCE OF ATTACKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA | 2009
13 | NODE ISOLATION MODEL AND AGE-BASED NEIGHBOR SELECTION IN UNSTRUCTURED P2P NETWORKS | NETWORKING | JAVA | 2009
14 | DISTRIBUTED ALGORITHMS FOR CONSTRUCTING APPROXIMATE MINIMUM SPANNING TREES IN WIRELESS SENSOR NETWORKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA | 2009
15 | MOBILITY MANAGEMENT APPROACHES FOR MOBILE IP NETWORKS: PERFORMANCE COMPARISON AND USE RECOMMENDATIONS | NETWORKING | JAVA | 2009
16 | SINGLE-LINK FAILURE DETECTION IN ALL-OPTICAL NETWORKS USING MONITORING CYCLES AND PATHS | NETWORKING | DOT NET | 2009
17 | A FAITHFUL DISTRIBUTED MECHANISM FOR SHARING THE COST OF MULTICAST TRANSMISSIONS | PARALLEL AND DISTRIBUTED SYSTEMS | J2EE | 2009
18 | ATOMICITY ANALYSIS OF SERVICE COMPOSITION ACROSS ORGANIZATIONS | SOFTWARE ENGINEERING | J2EE | 2009
19 | DYNAMIC ROUTING WITH SECURITY CONSIDERATIONS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA | 2009
20 | CAR: CONTEXT-AWARE ADAPTIVE ROUTING FOR DELAY-TOLERANT MOBILE NETWORKS | MOBILE COMPUTING | JAVA | 2009
21 | COLLUSIVE PIRACY PREVENTION IN P2P CONTENT DELIVERY NETWORKS | COMPUTERS | J2EE | 2009
22 | SPREAD SPECTRUM WATERMARKING SECURITY | INFORMATION FORENSICS AND SECURITY | DOT NET | 2009
23 | LOCAL CONSTRUCTION OF NEAR-OPTIMAL POWER SPANNERS FOR WIRELESS AD-HOC NETWORKS | MOBILE COMPUTING | DOT NET | 2009
24 | MULTIPLE ROUTING CONFIGURATIONS FOR FAST IP NETWORK RECOVERY | NETWORKING | JAVA | 2009
25 | COMPACTION OF SCHEDULES AND A TWO-STAGE APPROACH FOR DUPLICATION-BASED DAG SCHEDULING | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET | 2009
26 | THE EFFECTIVENESS OF CHECKSUMS FOR EMBEDDED NETWORKS | DEPENDABLE AND SECURE COMPUTING | DOT NET | 2009
27 | DETECTING MALICIOUS PACKET LOSSES | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA | 2009
28 | VIRUS SPREAD IN NETWORKS | NETWORKING | DOT NET | 2009
29 | BIASED RANDOM WALKS IN UNIFORM WIRELESS NETWORKS | MOBILE COMPUTING | DOT NET | 2009
30 | INFORMATION CONTENT-BASED SENSOR SELECTION AND TRANSMISSION POWER ADJUSTMENT FOR COLLABORATIVE TARGET TRACKING | MOBILE COMPUTING | DOT NET | 2009
31 | PRESTO: FEEDBACK-DRIVEN DATA MANAGEMENT IN SENSOR NETWORKS | NETWORKING | DOT NET | 2009
32 | EXPLICIT LOAD BALANCING TECHNIQUE FOR NGEO SATELLITE IP NETWORKS WITH ON-BOARD PROCESSING CAPABILITIES | NETWORKING | DOT NET | 2009
33 | DELAY ANALYSIS FOR MAXIMAL SCHEDULING WITH FLOW CONTROL IN WIRELESS NETWORKS WITH BURSTY TRAFFIC | NETWORKING | DOT NET | 2009
34 | OPTIMIZED RESOURCE ALLOCATION FOR SOFTWARE RELEASE PLANNING | SOFTWARE ENGINEERING | DOT NET | 2009
35 | AUTOMATIC EXTRACTION OF HEAP REFERENCE PROPERTIES IN OBJECT-ORIENTED PROGRAMS | SOFTWARE ENGINEERING | DOT NET | 2009
36 | ENERGY MAPS FOR MOBILE WIRELESS NETWORKS: COHERENCE TIME VERSUS SPREADING PERIOD | MOBILE COMPUTING | DOT NET | 2009
37 | RANDOMCAST: AN ENERGY-EFFICIENT COMMUNICATION SCHEME FOR MOBILE AD HOC NETWORKS | MOBILE COMPUTING | DOT NET | 2009
38 | EFFICIENT RESOURCE ALLOCATION FOR WIRELESS MULTICAST | MOBILE COMPUTING | JAVA | 2008
39 | MINING FILE DOWNLOADING TIME IN STOCHASTIC PEER TO PEER NETWORKS | NETWORKING | DOT NET | 2008
40 | ENHANCING SEARCH PERFORMANCE IN UNSTRUCTURED P2P NETWORKS BASED ON USERS' COMMON INTEREST | NETWORKING | JAVA | 2008
41 | QUIVER: CONSISTENT OBJECT SHARING FOR EDGE SERVICES | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA | 2008
42 | BRA: A BIDIRECTIONAL ROUTING ABSTRACTION FOR ASYMMETRIC MOBILE AD HOC NETWORKS | NETWORKING | JAVA | 2008
43 | AN EFFICIENT CLUSTERING SCHEME TO EXPLOIT HIERARCHICAL DATA IN NETWORK TRAFFIC ANALYSIS | KNOWLEDGE AND DATA ENGINEERING | JAVA | 2008
44 | RATE & DELAY GUARANTEES PROVIDED BY CLOS PACKET SWITCHES WITH LOAD BALANCING | NETWORKING | JAVA | 2008
45 | GEOMETRIC APPROACH TO IMPROVING ACTIVE PACKET LOSS MEASUREMENT | NETWORKING | JAVA | 2008
46 | A PRECISE TERMINATION CONDITION OF THE PROBABILISTIC PACKET MARKING ALGORITHM | DEPENDABLE AND SECURE COMPUTING | JAVA | 2008
47 | INTRUSION DETECTION IN HOMOGENEOUS & HETEROGENEOUS WIRELESS SENSOR NETWORKS | MOBILE COMPUTING | JAVA | 2008
48 | A DISTRIBUTED AND SCALABLE ROUTING TABLE MANAGER FOR THE NEXT GENERATION OF IP ROUTERS | | DOT NET | 2008
49 | PERFORMANCE OF A SPECULATIVE TRANSMISSION SCHEME FOR SCHEDULING LATENCY REDUCTION | NETWORKING | JAVA | 2008
50 | EFFICIENT 2-D GRAY-SCALE MORPHOLOGICAL TRANSFORMATIONS WITH ARBITRARY FLAT STRUCTURING ELEMENTS | IMAGE PROCESSING | DOT NET | 2008
51 | RATE ALLOCATION & NETWORK LIFETIME PROBLEM FOR WIRELESS SENSOR NETWORKS | NETWORKING | DOT NET | 2008
52 | VISION BASED PROCESSING FOR REAL TIME 3-D DATA ACQUISITION BASED ON CODED STRUCTURED LIGHT | IMAGE PROCESSING | DOT NET | 2008
53 | USING THE CONCEPTUAL COHESION OF CLASSES FOR FAULT PREDICTION IN OBJECT-ORIENTED SYSTEMS | SOFTWARE ENGINEERING | JAVA | 2008
54 | LOCATION-BASED SPATIAL QUERY PROCESSING IN WIRELESS BROADCAST ENVIRONMENTS | MOBILE COMPUTING | JAVA | 2008
55 | BANDWIDTH ESTIMATION FOR IEEE 802.11-BASED AD HOC NETWORKS | MOBILE COMPUTING | JAVA | 2008
56 | MODELING & AUTOMATED CONTAINMENT OF WORMS | DEPENDABLE AND SECURE COMPUTING | JAVA | 2008
57 | TRUSTWORTHY COMPUTING UNDER RESOURCE CONSTRAINTS WITH THE DOWN POLICY | DEPENDABLE AND SECURE COMPUTING | DOT NET | 2008
58 | BENEFIT-BASED DATA CACHING IN AD HOC NETWORKS | MOBILE COMPUTING | JAVA | 2008
59 | STATISTICAL TECHNIQUES FOR DETECTING TRAFFIC ANOMALIES THROUGH PACKET HEADER DATA | NETWORKING | DOT NET | 2008
60 | HBA: DISTRIBUTED METADATA MANAGEMENT FOR LARGE-SCALE CLUSTER-BASED STORAGE SYSTEMS | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET | 2008
61 | TEMPORAL PARTITIONING OF COMMUNICATION RESOURCES IN AN INTEGRATED ARCHITECTURE | DEPENDABLE AND SECURE COMPUTING | DOT NET | 2008
62 | THE EFFECT OF PAIRS IN PROGRAM DESIGN TASKS | SOFTWARE ENGINEERING | DOT NET | 2008
63 | CONSTRUCTING INTER-DOMAIN PACKET FILTERS TO CONTROL IP SPOOFING BASED ON BGP UPDATES | DEPENDABLE AND SECURE COMPUTING | JAVA | 2008
64 | ORTHOGONAL DATA EMBEDDING FOR BINARY IMAGES IN MORPHOLOGICAL TRANSFORM DOMAIN: A HIGH-CAPACITY APPROACH | MULTIMEDIA | DOT NET | 2008
65 | PROTECTION OF DATABASE SECURITY VIA COLLABORATIVE INFERENCE DETECTION | KNOWLEDGE AND DATA ENGINEERING | J2EE | 2008
66 | ESTIMATION OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M | SOFTWARE ENGINEERING | DOT NET | 2008
67 | ACTIVE LEARNING METHODS FOR INTERACTIVE IMAGE RETRIEVAL | IMAGE PROCESSING | DOT NET | 2008
68 | LOCALIZED SENSOR AREA COVERAGE WITH LOW COMMUNICATION OVERHEAD | MOBILE COMPUTING | DOT NET | 2008
69 | HARDWARE-ENHANCED ASSOCIATION RULE MINING WITH HASHING AND PIPELINING | KNOWLEDGE AND DATA ENGINEERING | DOT NET | 2008
70 | EFFICIENT RESOURCE ALLOCATION FOR WIRELESS MULTICAST | MOBILE COMPUTING | DOT NET | 2008
71 | EFFICIENT ROUTING IN INTERMITTENTLY CONNECTED MOBILE NETWORKS: THE MULTIPLE-COPY CASE | NETWORKING | DOT NET | 2008
72 | A NOVEL FRAMEWORK FOR SEMANTIC ANNOTATION AND PERSONALIZED RETRIEVAL OF SPORTS VIDEO | MULTIMEDIA | DOT NET | 2008
73 | TWO TECHNIQUES FOR FAST COMPUTATION OF CONSTRAINED SHORTEST PATHS | NETWORKING | JAVA | 2008
74 | WATERMARKING RELATIONAL DATABASES USING OPTIMIZATION-BASED TECHNIQUES | KNOWLEDGE AND DATA ENGINEERING | DOT NET | 2008
75 | PROBABILISTIC PACKET MARKING FOR LARGE-SCALE IP TRACEBACK | NETWORKING | DOT NET | 2008
76 | DUAL-LINK FAILURE RESILIENCY THROUGH BACKUP LINK MUTUAL EXCLUSION | NETWORKING | JAVA | 2008
77 | TRUTH DISCOVERY WITH MULTIPLE CONFLICTING INFORMATION PROVIDERS ON THE WEB | KNOWLEDGE AND DATA ENGINEERING | J2EE | 2008
78 | DYNAMIC LOAD BALANCING IN DISTRIBUTED SYSTEMS IN THE PRESENCE OF DELAYS: A REGENERATION-THEORY APPROACH | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA | 2007
79 | A SEMI-FRAGILE CONTENT-BASED IMAGE WATERMARKING FOR AUTHENTICATION IN SPATIAL DOMAIN USING DISCRETE COSINE TRANSFORM | JOURNAL | JAVA | 2007
80 | OCGRR: A NEW SCHEDULING ALGORITHM FOR DIFFERENTIATED SERVICES NETWORKS | PARALLEL AND DISTRIBUTED SYSTEMS | JAVA | 2007
81 | AN ADAPTIVE PROGRAMMING MODEL FOR FAULT-TOLERANT DISTRIBUTED COMPUTING | DEPENDABLE AND SECURE COMPUTING | JAVA | 2007
82 | AN ACKNOWLEDGMENT-BASED APPROACH FOR THE DETECTION OF ROUTING MISBEHAVIOR IN MANETS | MOBILE COMPUTING | JAVA | 2007
83 | HYBRID INTRUSION DETECTION WITH WEIGHTED SIGNATURE GENERATION OVER ANOMALOUS INTERNET EPISODES (HIDS) | DEPENDABLE AND SECURE COMPUTING | J2EE | 2007
84 | PFUSION: A P2P ARCHITECTURE FOR INTERNET-SCALE CONTENT-BASED SEARCH AND RETRIEVAL | PARALLEL AND DISTRIBUTED SYSTEMS | DOT NET | 2007
85 | ROUTE RESERVATION IN AD HOC WIRELESS NETWORKS | MOBILE COMPUTING | JAVA | 2007
86 | DISTRIBUTED CACHE UPDATING FOR THE DYNAMIC SOURCE ROUTING PROTOCOL | MOBILE COMPUTING | JAVA | 2006
87 | DIGITAL IMAGE PROCESSING TECHNIQUES FOR THE DETECTION AND REMOVAL OF CRACKS IN DIGITIZED PAINTINGS | IMAGE PROCESSING | DOT NET | 2006
88 | NOISE REDUCTION BY FUZZY IMAGE FILTERING | FUZZY SYSTEMS | JAVA | 2006
89 | A NOVEL SECURE COMMUNICATION PROTOCOL FOR AD HOC NETWORKS [SCP] | | JAVA | 2006
90 | FACE RECOGNITION USING LAPLACIANFACES | PATTERN ANALYSIS AND MACHINE INTELLIGENCE | JAVA | 2005
91 | PREDICTIVE JOB SCHEDULING IN A CONNECTION LIMITED SYSTEM USING PARALLEL GENETIC ALGORITHM | INTERNATIONAL CONFERENCE ON INTELLIGENT AND ADVANCED SYSTEMS | JAVA | 2005
92 | PERSONALIZED WEB SEARCH WITH SELF-ORGANIZING MAP | INTERNATIONAL CONFERENCE ON E-TECHNOLOGY, E-COMMERCE AND E-SERVICE | J2EE | 2005
93 | A DISTRIBUTED DATABASE ARCHITECTURE FOR GLOBAL ROAMING IN NEXT-GENERATION MOBILE NETWORKS | NETWORKING | JAVA | 2004
94 | STRUCTURE AND TEXTURE FILLING-IN OF MISSING IMAGE BLOCKS IN WIRELESS TRANSMISSION AND COMPRESSION APPLICATIONS | IMAGE PROCESSING | JAVA | 2004
95 | NETWORK BORDER PATROL: PREVENTING CONGESTION COLLAPSE AND PROMOTING FAIRNESS IN THE INTERNET | NETWORKING | JAVA | 2004
96 | APPLICATION OF BPCS STEGANOGRAPHY TO WAVELET COMPRESSED VIDEO | IMAGE PROCESSING | JAVA | 2004
97 | IMAGE PROCESSING FOR EDGE DETECTION | | DOT NET | 2004
98 | DOUBLE-COVERED BROADCAST (DCB): A SIMPLE RELIABLE BROADCAST ALGORITHM IN MANETS | CONFERENCE-IEEE INFOCOM | JAVA |
Abstracts


1. A TABU SEARCH ALGORITHM FOR CLUSTER BUILDING IN WIRELESS SENSOR NETWORKS

The main challenge in wireless sensor network deployment pertains to optimizing energy consumption when collecting
data from sensor nodes. This paper proposes a new centralized clustering method for a data collection mechanism in
wireless sensor networks, which is based on network energy maps and Quality-of-Service (QoS) requirements. The
clustering problem is modeled as a hypergraph partitioning and its resolution is based on a tabu search heuristic. Our
approach defines moves using largest size cliques in a feasibility cluster graph. Compared to other methods (CPLEX-
based method, distributed method, simulated annealing-based method), the results show that our tabu search-based
approach returns high-quality solutions in terms of cluster cost and execution time. As a result, this approach is
suitable for handling network extensibility in a satisfactory manner.
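
A minimal sketch of the tabu-search skeleton behind this approach, assuming a toy cost function (sum of squared distances to fixed cluster heads) and single-node reassignment moves rather than the paper's hypergraph-partitioning moves and clique-based neighborhoods:

import java.util.*;

// Tabu search over cluster assignments: evaluate all non-tabu single-node
// moves, take the best one, and forbid undoing it for "tenure" iterations.
public class TabuClustering {
    static double cost(double[][] pos, int[] heads, int[] assign) {
        double s = 0;
        for (int i = 0; i < pos.length; i++) {
            double dx = pos[i][0] - pos[heads[assign[i]]][0];
            double dy = pos[i][1] - pos[heads[assign[i]]][1];
            s += dx * dx + dy * dy;
        }
        return s;
    }
    public static void main(String[] args) {
        Random rnd = new Random(1);
        int n = 30, k = 3, iters = 500, tenure = 15;
        double[][] pos = new double[n][2];
        for (double[] p : pos) { p[0] = rnd.nextDouble(); p[1] = rnd.nextDouble(); }
        int[] heads = {0, 1, 2};                 // fixed cluster heads for the sketch
        int[] assign = new int[n];
        for (int i = 0; i < n; i++) assign[i] = rnd.nextInt(k);
        int[][] tabuUntil = new int[n][k];       // iteration until which a move is tabu
        double best = cost(pos, heads, assign);
        int[] bestAssign = assign.clone();
        for (int it = 0; it < iters; it++) {
            int mi = -1, mc = -1; double mCost = Double.MAX_VALUE;
            for (int i = 0; i < n; i++) for (int c = 0; c < k; c++) {
                if (c == assign[i] || tabuUntil[i][c] > it) continue;
                int old = assign[i]; assign[i] = c;
                double cand = cost(pos, heads, assign);
                assign[i] = old;
                if (cand < mCost) { mCost = cand; mi = i; mc = c; }
            }
            if (mi < 0) continue;
            tabuUntil[mi][assign[mi]] = it + tenure;  // forbid moving straight back
            assign[mi] = mc;
            if (mCost < best) { best = mCost; bestAssign = assign.clone(); }
        }
        System.out.printf("best clustering cost = %.4f%n", best);
        System.out.println("assignment = " + Arrays.toString(bestAssign));
    }
}
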




2. ROUTE STABILITY IN MANETS UNDER THE RANDOM DIRECTION MOBILITY MODEL

A fundamental issue arising in mobile ad hoc networks (MANETs) is the selection of the optimal path between any two
nodes. A method that has been advocated to improve routing efficiency is to select the most stable path so as to reduce
the latency and the overhead due to route reconstruction. In this work, we study both the availability and the duration
probability of a routing path that is subject to link failures caused by node mobility. In particular, we focus on the case
where the network nodes move according to the Random Direction model, and we derive both exact and approximate
(but simple) expressions of these probabilities. Through our results, we study the problem of selecting an optimal route
in terms of path availability. Finally, we propose an approach to improve the efficiency of reactive routing protocols.
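
To make the path-availability idea concrete, here is a toy calculation, assuming independent links and, for the duration part, exponential link lifetimes; these are assumptions of this sketch, not the Random Direction derivation in the paper:

import java.util.*;

// A route works only if every link works, so with independent links the
// availability is a product; the path dies when its first link dies.
public class PathAvailability {
    public static void main(String[] args) {
        double[] linkUp = {0.95, 0.9, 0.99, 0.85};   // P(link i is up at time t)
        double avail = 1.0;
        for (double p : linkUp) avail *= p;
        System.out.printf("path availability = %.4f%n", avail);

        // Monte Carlo estimate of mean path duration: min of per-link lifetimes.
        Random rnd = new Random(7);
        double[] rate = {0.2, 0.1, 0.05, 0.3};       // link failure rates (1/s)
        double sum = 0; int trials = 100000;
        for (int t = 0; t < trials; t++) {
            double life = Double.MAX_VALUE;
            for (double r : rate)
                life = Math.min(life, -Math.log(1 - rnd.nextDouble()) / r);
            sum += life;
        }
        // For exponentials the exact mean is 1 / (sum of the rates).
        System.out.printf("simulated mean duration = %.3f s (exact %.3f s)%n",
                sum / trials, 1.0 / (0.2 + 0.1 + 0.05 + 0.3));
    }
}
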


3. GREEDY ROUTING WITH ANTI-VOID TRAVERSAL FOR WIRELESS SENSOR NETWORKS

The unreachability problem (i.e., the so-called void problem) that exists in greedy routing algorithms has been studied for wireless sensor networks. Some of the current research work cannot fully resolve the void problem, while
there exist other schemes that can guarantee the delivery of packets with the excessive consumption of control
overheads. In this paper, a greedy antivoid routing (GAR) protocol is proposed to solve the void problem with increased
routing efficiency by exploiting the boundary finding technique for the unit disk graph (UDG). The proposed rolling-ball
UDG boundary traversal (RUT) is employed to completely guarantee the delivery of packets from the source to the
destination node under the UDG network. The boundary map (BM) and the indirect map searching (IMS) scheme are
proposed as efficient algorithms for the realization of the RUT technique. Moreover, the hop count reduction (HCR)
scheme is utilized as a short-cutting technique to reduce the routing hops by listening to the neighbor’s traffic, while the
intersection navigation (IN) mechanism is proposed to obtain the best rolling direction for boundary traversal with the
adoption of shortest path criterion. In order to maintain the network requirement of the proposed RUT scheme under the
non-UDG networks, the partial UDG construction (PUC) mechanism is proposed to transform the non-UDG into UDG
setting for a portion of nodes that facilitate boundary traversal. These three schemes are incorporated within the GAR
protocol to further enhance the routing performance with reduced communication overhead. The proofs of correctness


4. CELL BREATHING TECHNIQUES FOR LOAD BALANCING IN WIRELESS LANS

Maximizing network throughput while providing fairness is one of the key challenges in wireless LANs (WLANs). This goal
is typically achieved when the load of access points (APs) is balanced. Recent studies on operational WLANs, however,
have shown that AP load is often substantially uneven. To alleviate such imbalance of load, several load balancing
schemes have been proposed. These schemes commonly require proprietary software or hardware at the user side for
controlling the user-AP association. In this paper we present a new load balancing technique by controlling the size of
WLAN cells (i.e., AP’s coverage range), which is conceptually similar to cell breathing in cellular networks. The proposed
scheme requires neither modification to the users nor to the IEEE 802.11 standard. It only requires the ability to dynamically change the transmission power of the AP beacon messages. We develop a set of polynomial time
algorithms that find the optimal beacon power settings which minimize the load of the most congested AP. We also
consider the problem of network-wide min-max load balancing. Simulation results show that the performance of the
proposed method is comparable with or superior to the best existing association-based methods.
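
A toy illustration of the cell-breathing mechanism, assuming free-space signal decay and a naive greedy power-reduction loop; the paper instead computes optimal beacon powers in polynomial time:

import java.util.*;

// Users associate with the AP whose received signal (power / distance^2) is
// strongest; we repeatedly shrink the beacon power of the most loaded AP,
// which "breathes in" its cell and sheds users to the neighbor.
public class CellBreathing {
    public static void main(String[] args) {
        double[][] ap = {{0, 0}, {10, 0}};        // AP positions
        double[] power = {1.0, 1.0};              // beacon powers
        Random rnd = new Random(3);
        double[][] users = new double[40][2];
        for (double[] u : users) { u[0] = rnd.nextDouble() * 10; u[1] = rnd.nextDouble() * 5; }
        for (int step = 0; step < 10; step++) {
            int[] load = new int[ap.length];
            for (double[] u : users) load[strongestAp(u, ap, power)]++;
            System.out.println("powers " + Arrays.toString(power) + " -> load " + Arrays.toString(load));
            int hot = load[0] >= load[1] ? 0 : 1; // most congested AP
            power[hot] *= 0.8;                    // shrink its coverage range
        }
    }
    static int strongestAp(double[] u, double[][] ap, double[] power) {
        int best = 0; double bestSig = -1;
        for (int a = 0; a < ap.length; a++) {
            double dx = u[0] - ap[a][0], dy = u[1] - ap[a][1];
            double sig = power[a] / (dx * dx + dy * dy + 1e-9);
            if (sig > bestSig) { bestSig = sig; best = a; }
        }
        return best;
    }
}
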
5. RESEQUENCING ANALYSIS OF STOP-AND-WAIT ARQ FOR PARALLEL MULTICHANNEL COMMUNICATIONS

In this paper, we consider a multichannel data communication system in which the stop-and-wait automatic-repeat
request protocol for parallel channels with an in-sequence delivery guarantee (MSW-ARQ-inS) is used for error control.
We evaluate the resequencing delay and the resequencing buffer occupancy, respectively. Under the assumption that all
channels have the same transmission rate but possibly different time-invariant error rates, we derive the probability
generating function of the resequencing buffer occupancy and the probability mass function of the resequencing delay.
Then, by assuming the Gilbert–Elliott model for each channel, we extend our analysis to time-varying channels. Through
examples, we compute the probability mass functions of the resequencing buffer occupancy and the resequencing delay
for time-invariant channels. From numerical and simulation results, we analyze trends in the mean resequencing buffer
occupancy and the mean resequencing delay as functions of system parameters. We expect that the modeling technique
and analytical approach used in this paper can be applied to the performance evaluation of other ARQ protocols (e.g.,
the selective-repeat ARQ) over multiple time-varying channels.

Index Terms: In-sequence delivery, modeling and performance, multichannel data communications, resequencing buffer occupancy, resequencing delay, SW-ARQ.
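
The resequencing-buffer behavior described above can be reproduced with a small slot-level simulation; the round-robin packet assignment and time-invariant error rates below are simplifying assumptions of this sketch:

import java.util.*;

// Channel c carries packets c, c+N, c+2N, ... and repeats each packet until
// it gets through (success prob. 1 - errorRate[c]). Packets received out of
// order wait in a resequencing buffer until all earlier packets arrive.
public class MswArqSim {
    public static void main(String[] args) {
        double[] errorRate = {0.1, 0.3, 0.5};     // per-channel loss probability
        int N = errorRate.length, totalPackets = 30000;
        Random rnd = new Random(11);
        int[] current = new int[N];               // packet each channel is sending
        for (int c = 0; c < N; c++) current[c] = c;
        Set<Integer> received = new HashSet<>();
        int nextExpected = 0; long occupancySum = 0; int slots = 0;
        while (nextExpected < totalPackets) {
            for (int c = 0; c < N; c++) {
                if (current[c] < totalPackets && rnd.nextDouble() > errorRate[c]) {
                    received.add(current[c]);     // ACK: channel moves to next packet
                    current[c] += N;
                }
            }
            while (received.remove(nextExpected)) nextExpected++;  // in-order delivery
            occupancySum += received.size();      // packets stuck in the buffer
            slots++;
        }
        System.out.printf("mean resequencing buffer occupancy = %.2f packets%n",
                (double) occupancySum / slots);
    }
}
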




6. RESOURCE ALLOCATION IN OFDMA WIRELESS COMMUNICATIONS SYSTEMS SUPPORTING MULTIMEDIA SERVICES

We design a resource allocation algorithm for the downlink of orthogonal frequency division multiple access (OFDMA)
systems supporting real-time (RT) and best-effort (BE) services simultaneously over a time-varying wireless channel. The
proposed algorithm aims at maximizing system throughput while satisfying quality of service (QoS) requirements of the
RT and BE services. We take two kinds of QoS requirements into account. One is the required average transmission rate
for both RT and BE services. The other is the tolerable average absolute deviation of transmission rate (AADTR) just for
the RT services, which is used to control the fluctuation in transmission rates and to limit the RT packet delay to a
moderate level. We formulate the optimization problem representing the resource allocation under consideration and
solve it by using the dual optimization technique and the projection stochastic subgradient method. Simulation results
show that the proposed algorithm well meets the QoS requirements with the high throughput and outperforms the
modified largest weighted delay first (M-LWDF) algorithm that supports similar QoS requirements.




7. ENHANCING PRIVACY AND AUTHORIZATION CONTROL SCALABILITY IN THE GRID THROUGH ONTOLOGIES

The use of data Grids for sharing relevant data has proven to be successful in many research disciplines. However, the use of these environments when personal data are involved (such as in health) is reduced due to a lack of trust. There
are many approaches that provide encrypted storages and key shares to prevent the access from unauthorized users.
However, these approaches are additional layers that should be managed along with the authorization policies. We
present in this paper a privacy-enhancing technique that uses encryption and relates to the structure of the data and
their organizations, providing a natural way to propagate authorization and also a framework that fits with many use
cases. The paper describes the architecture and processes, and also shows results obtained in a medical imaging
platform.




8. COMBINATORIAL APPROACH FOR PREVENTING SQL INJECTION ATTACKS

A combinatorial approach for protecting Web applications against SQL injection is discussed in this paper; it is a novel idea incorporating the strengths of signature-based and auditing-based methods. The major issue of web application security is SQL injection, which can give attackers unrestricted access to the databases that underlie Web applications and has become increasingly frequent and serious. From the signature-based standpoint, the approach presents a detection mode for SQL injection using pairwise sequence alignment of an amino acid code formulated from web application form parameters sent via the web server. From the auditing-based standpoint, it analyzes transactions to find malicious access. The signature-based method uses the Hirschberg algorithm, a divide-and-conquer approach that reduces time and space complexity. This system was able to stop all of the successful attacks and did not generate any false positives.
9. DYNAMIC SEARCH ALGORITHM IN UNSTRUCTURED PEER-TO-PEER NETWORKS

Designing efficient search algorithms is a key challenge in unstructured peer-to-peer networks. Flooding and random
walk (RW) are two typical search algorithms. Flooding searches aggressively and covers the most nodes. However, it
generates a large amount of query messages and, thus, does not scale. On the contrary, RW searches conservatively. It
only generates a fixed amount of query messages at each hop but would take longer search time. We propose the
dynamic search (DS) algorithm, which is a generalization of flooding and RW. DS takes advantage of various contexts
under which each previous search algorithm performs well. It resembles flooding for short-term search and RW for long-
term search. Moreover, DS could be further combined with knowledge-based search mechanisms to improve the search
performance. We analyze the performance of DS based on some performance metrics including the success rate, search
time, query hits, query messages, query efficiency, and search efficiency. Numerical results show that DS provides a
good tradeoff between search performance and cost. On average, DS performs about 25 times better than flooding and
58 times better than RW in power-law graphs, and about 186 times better than flooding and 120 times better than RW in
bimodal topologies.
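
A small simulation of the flood-then-walk idea behind DS; the graph model, flooding depth, and walk TTL below are illustrative assumptions, not the paper's parameters:

import java.util.*;

// Phase 1 floods for a few hops (aggressive, short-term); phase 2 lets each
// frontier node continue as an independent random walk (conservative,
// long-term), counting total query messages either way.
public class DynamicSearch {
    static List<List<Integer>> g = new ArrayList<>();
    public static void main(String[] args) {
        Random rnd = new Random(5);
        int n = 2000;
        for (int i = 0; i < n; i++) g.add(new ArrayList<>());
        for (int i = 1; i < n; i++) {             // random connected topology
            int j = rnd.nextInt(i);
            g.get(i).add(j); g.get(j).add(i);
        }
        int target = rnd.nextInt(n), floodHops = 2, walkTtl = 200, messages = 0;
        Set<Integer> seen = new HashSet<>(List.of(0));
        List<Integer> frontier = List.of(0);
        for (int h = 0; h < floodHops; h++) {     // flooding phase
            List<Integer> next = new ArrayList<>();
            for (int u : frontier) for (int v : g.get(u)) {
                messages++;
                if (seen.add(v)) next.add(v);
            }
            frontier = next;
        }
        boolean found = seen.contains(target);
        for (int u : frontier) {                  // random-walk phase
            int cur = u;
            for (int t = 0; t < walkTtl && !found; t++) {
                cur = g.get(cur).get(rnd.nextInt(g.get(cur).size()));
                messages++;
                if (cur == target) found = true;
            }
            if (found) break;
        }
        System.out.println("found=" + found + " messages=" + messages);
    }
}
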




10. ANALYSIS OF SHORTEST PATH ROUTING FOR LARGE MULTI-HOP WIRELESS NETWORKS

In this paper, we analyze the impact of straight line routing in large homogeneous multi-hop wireless networks. We
estimate the nodal load, which is defined as the number of packets served at a node, induced by straight line routing.
For a given total offered load on the network, our analysis shows that the nodal load at each node is a function of the
node’s Voronoi cell, the node’s location in the network, and the traffic pattern specified by the source and destination
randomness and straight line routing. In the asymptotic regime, we show that the probability that a node serves a packet arriving to the network approaches the product of half the length of the node's Voronoi cell perimeter and the load density function evaluated at the node's location. The density function depends on the traffic pattern
generated by straight line routing, and determines where the hot spot is created in the network. Hence, contrary to
conventional wisdom, straight line routing can balance the load over the network, depending on the traffic patterns.




11. SECURE AND POLICY-COMPLIANT SOURCE ROUTING

In today’s Internet, inter-domain route control remains elusive; nevertheless, such control could improve the
performance, reliability, and utility of the network for end users and ISPs alike. While researchers have proposed a
number of source routing techniques to combat this limitation, there has thus far been no way for independent ASes to
ensure that such traffic does not circumvent local traffic policies, nor to accurately determine the correct party to charge
for forwarding the traffic. We present Platypus, an authenticated source routing system built around the concept of
network capabilities, which allow for accountable, fine-grained path selection by cryptographically attesting to policy
compliance at each hop along a source route. Capabilities can be composed to construct routes through multiple ASes
and can be delegated to third parties. Platypus caters to the needs of both end users and ISPs: users gain the ability to
pool their resources and select routes other than the default, while ISPs maintain control over where, when, and whose
packets traverse their networks. We describe the design and implementation of an extensive Platypus policy framework
that can be used to address several issues in wide-area routing at both the edge and the core, and evaluate its
performance and security. Our results show that incremental deployment of Platypus can achieve immediate gains.


12. FLEXIBLE DETERMINISTIC PACKET MARKING: AN IP TRACEBACK SYSTEM TO FIND THE REAL SOURCE OF ATTACKS

Internet Protocol (IP) traceback is the enabling technology to control Internet crime. In this paper, we present a novel and
practical IP traceback system called Flexible Deterministic Packet Marking (FDPM) which provides a defense system with
the ability to find out the real sources of attacking packets that traverse through the network. While a number of other
traceback schemes exist, FDPM provides innovative features to trace the source of IP packets and can obtain better
tracing capability than others. In particular, FDPM adopts a flexible mark length strategy to make it compatible to
different network environments; it also adaptively changes its marking rate according to the load of the participating
router by a flexible flow-based marking scheme. Evaluations on both simulation and real system implementation
demonstrate that FDPM requires a moderately small number of packets to complete the traceback process, adds little additional load to routers, and can trace a large number of sources in one traceback process with low false positive rates.
The built-in overload prevention mechanism makes this system capable of achieving a satisfactory traceback result even
when the router is heavily loaded. The motivation for this traceback system comes from DDoS defense. It has been used not only to trace DDoS attack packets but also to enhance the filtering of attack traffic. It has a wide array of applications for
other security systems.
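
A toy sketch of the marking-and-reassembly idea behind packet-marking traceback; the fragment sizes and mark field here are hypothetical and much simpler than FDPM's variable-length, flow-adaptive encoding:

import java.util.*;

// The ingress router splits its 32-bit ID into k fragments and writes one
// fragment plus its offset into each outgoing packet's mark field; the
// victim reassembles the ID once it has seen every offset.
public class PacketMarking {
    public static void main(String[] args) {
        int routerId = 0xC0A80A01;        // the ingress router's 32-bit ID
        int k = 4;                        // split into 4 fragments of 8 bits
        Random rnd = new Random(2);
        Map<Integer, Integer> collected = new HashMap<>();
        int packets = 0;
        while (collected.size() < k) {    // victim keeps reading marks
            packets++;
            int offset = rnd.nextInt(k);  // router marks each packet with one fragment
            int fragment = (routerId >>> (8 * offset)) & 0xFF;
            collected.put(offset, fragment);
        }
        int rebuilt = 0;
        for (Map.Entry<Integer, Integer> e : collected.entrySet())
            rebuilt |= e.getValue() << (8 * e.getKey());
        System.out.printf("rebuilt source %08X after %d marked packets%n", rebuilt, packets);
    }
}
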
13. NODE ISOLATION MODEL AND AGE-BASED NEIGHBOR SELECTION IN UNSTRUCTURED P2P NETWORKS

Previous analytical studies of unstructured P2P resilience have assumed exponential user lifetimes and only considered age-independent neighbor replacement. In this paper, we overcome these limitations by introducing a general node-isolation model for heavy-tailed user lifetimes and arbitrary neighbor-selection algorithms. Using this model, we analyze
two age-biased neighbor-selection strategies and show that they significantly improve the residual lifetimes of chosen
users, which dramatically reduces the probability of user isolation and graph partitioning compared with uniform
selection of neighbors. In fact, the second strategy based on random walks on age-proportional graphs demonstrates
that, for lifetimes with infinite variance, the system monotonically increases its resilience as its age and size grow.
Specifically, we show that the probability of isolation converges to zero as these two metrics tend to infinity. We finish the
paper with simulations in finite-size graphs that demonstrate the effect of this result in practice.



14. DISTRIBUTED ALGORITHMS FOR CONSTRUCTING APPROXIMATE MINIMUM SPANNING TREES IN WIRELESS SENSOR NETWORKS

While there are distributed algorithms for the minimum spanning tree (MST) problem, these algorithms require a relatively large number of messages and time, and are fairly involved, making them impractical for resource-constrained networks
such as wireless sensor networks. In such networks, a sensor has very limited power, and any algorithm needs to be
simple, local, and energy efficient. Motivated by these considerations, we design and analyze a class of simple and local
distributed algorithms called Nearest Neighbor Tree (NNT) algorithms for energy-efficient construction of an approximate
MST in wireless networks. Assuming that the nodes are uniformly distributed, we show provable bounds on both the
quality of the spanning tree produced and the energy needed to construct them. We show that while NNT produces a
close approximation to the MST, it consumes asymptotically less energy than the classical message-optimal distributed
MST algorithm due to Gallager, Humblet, and Spira. Further, the NNTs can be maintained dynamically with
polylogarithmic rearrangements under node insertions/deletions. We also perform extensive simulations, which show
that the bounds are much better in practice. Our results, to the best of our knowledge, demonstrate the first tradeoff
between the quality of approximation and the energy required for building spanning trees on wireless networks, and
motivate similar considerations for other important problems.
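
A compact sketch of the Nearest Neighbor Tree construction described above, assuming random ranks and Euclidean distances; the paper's NNT variants differ in how ranks and candidate neighbors are chosen:

import java.util.*;

// Every node draws a random rank and connects to its nearest neighbor of
// higher rank. This yields a spanning tree (the top-ranked node is the root)
// using only local decisions, with no global MST coordination.
public class NearestNeighborTree {
    public static void main(String[] args) {
        Random rnd = new Random(9);
        int n = 200;
        double[][] p = new double[n][2];
        double[] rank = new double[n];
        for (int i = 0; i < n; i++) {
            p[i][0] = rnd.nextDouble(); p[i][1] = rnd.nextDouble();
            rank[i] = rnd.nextDouble();
        }
        double total = 0; int edges = 0;
        for (int i = 0; i < n; i++) {
            int best = -1; double bestD = Double.MAX_VALUE;
            for (int j = 0; j < n; j++) {
                if (rank[j] <= rank[i]) continue;   // only higher-ranked nodes
                double d = Math.hypot(p[i][0] - p[j][0], p[i][1] - p[j][1]);
                if (d < bestD) { bestD = d; best = j; }
            }
            if (best >= 0) { total += bestD; edges++; }  // top rank becomes the root
        }
        System.out.printf("NNT: %d edges, total length %.3f%n", edges, total);
    }
}
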


15. MOBILITY MANAGEMENT APPROACHES FOR MOBILE IP NETWORKS: PERFORMANCE COMPARISON AND USE RECOMMENDATIONS

In wireless networks, efficient management of mobility is a crucial issue to support mobile users. The Mobile Internet Protocol (MIP) has been proposed to support global mobility in IP networks. Several mobility management strategies have been proposed which aim at reducing the signaling traffic related to the Mobile Terminals (MTs) registration with the Home
Agents (HAs) whenever their Care-of-Addresses (CoAs) change. They use different Foreign Agents (FAs) and Gateway FAs
(GFAs) hierarchies to concentrate the registration processes. For high-mobility MTs, the Hierarchical MIP (HMIP) and
Dynamic HMIP (DHMIP) strategies localize the registration in FAs and GFAs, yielding high mobility signaling. The Multicast HMIP strategy limits the registration processes in the GFAs. For high-mobility MTs, it provides the lowest mobility signaling delay compared to the HMIP and DHMIP approaches. However, it is a resource-consuming strategy except under frequent MT mobility. Hence, we propose an analytic model to evaluate the mean signaling delay and the mean
bandwidth per call according to the type of MT mobility. In our analysis, the MHMIP outperforms the DHMIP and MIP
strategies in almost all the studied cases. The main contribution of this paper is the analytic model that allows performance evaluation of the mobility management approaches.



16. SINGLE-LINK FAILURE DETECTION IN ALL-OPTICAL NETWORKS USING MONITORING CYCLES AND PATHS

In this paper, we consider the problem of fault localization in all-optical networks. We introduce the concept of
monitoring cycles (MCs) and monitoring paths (MPs) for unique identification of single-link failures. MCs and MPs are
required to pass through one or more monitoring locations. They are constructed such that any single-link failure results
in the failure of a unique combination of MCs and MPs that pass through the monitoring location(s). For a network with
only one monitoring location, we prove that three-edge connectivity is a necessary and sufficient condition for
constructing MCs that uniquely identify any single-link failure in the network. For this case, we formulate the problem of
constructing MCs as an integer linear program (ILP). We also develop heuristic approaches for constructing MCs in the
presence of one or more monitoring locations. For an arbitrary network (not necessarily three-edge connected), we
describe a fault localization technique that uses both MPs and MCs and that employs multiple monitoring locations. We
also provide a linear-time algorithm to compute the minimum number of required monitoring locations. Through
extensive simulations, we demonstrate the effectiveness of the proposed monitoring technique.
17. A FAITHFUL DISTRIBUTED MECHANISM FOR SHARING THE COST OF MULTICAST TRANSMISSIONS

The problem of sharing the cost of multicast transmissions was studied in the past, and two mechanisms, Marginal Cost
(MC) and Shapley Value (SH), were proposed to solve it. Although both of them are strategy proof mechanisms, the
distributed protocols implementing them are susceptible to manipulation by autonomous nodes. We propose a
distributed Shapley Value mechanism in which the participating nodes do not have incentives to deviate from the
mechanism specifications. We show that the proposed mechanism is a faithful implementation of the Shapley Value
mechanism. We experimentally investigate the performance of the existing and the proposed cost-sharing mechanisms
by implementing and deploying them on PlanetLab. We compare the execution time of MC and SH mechanisms for the
Tamper-Proof and Autonomous Node models. We also study the convergence and scalability of the mechanisms by
varying the number of nodes and the number of users per node. We show that the MC mechanisms generate a smaller
revenue compared to the SH mechanisms, and thus, they are not attractive to the content provider. We also show that
increasing the number of users per node is beneficial for the systems implementing the SH mechanisms from both
computational and economic perspectives.
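
For intuition about what a Shapley-value cost share is, the sketch below estimates shares by sampling random arrival orders, with a made-up cost function (one fixed link cost plus a per-user cost); the paper's contribution is a distributed, faithful protocol for computing such shares, not this centralized estimator:

import java.util.*;

// Shapley share of user u = u's marginal cost averaged over random orders
// in which users join the multicast session.
public class ShapleyShares {
    static double cost(Set<Integer> coalition) {
        if (coalition.isEmpty()) return 0;
        return 10.0 + 1.0 * coalition.size();    // shared link + per-user cost
    }
    public static void main(String[] args) {
        int n = 4, samples = 100000;
        double[] share = new double[n];
        Random rnd = new Random(13);
        Integer[] order = {0, 1, 2, 3};
        for (int s = 0; s < samples; s++) {
            Collections.shuffle(Arrays.asList(order), rnd);
            Set<Integer> before = new HashSet<>();
            for (int u : order) {
                double marginal = -cost(before);
                before.add(u);
                marginal += cost(before);        // cost added by u's arrival
                share[u] += marginal;
            }
        }
        for (int u = 0; u < n; u++)
            System.out.printf("user %d pays %.3f%n", u, share[u] / samples);
        // By symmetry each user should pay (10 + 4) / 4 = 3.5 in this example.
    }
}
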


18. ATOMICITY ANALYSIS OF SERVICE COMPOSITION ACROSS ORGANIZATIONS

Atomicity is a highly desirable property for achieving application consistency in service compositions. To achieve
atomicity, a service composition should satisfy the atomicity sphere, a structural criterion for the backend processes of
involved services. Existing analysis techniques for the atomicity sphere generally assume complete knowledge of all
involved backend processes. Such an assumption is invalid when some service providers do not release all details of
their backend processes to service consumers outside the organizations. To address this problem, we propose a process
algebraic framework to publish atomicity-equivalent public views from the backend processes. These public views
extract relevant task properties and reveal only partial process details that service providers need to expose. Our
framework enables the analysis of the atomicity sphere for service compositions using these public views instead of their
backend processes. This allows service consumers to choose suitable services such that their composition satisfies the
atomicity sphere without disclosing the details of their backend processes. Based on the theoretical result, we present
algorithms to construct atomicity-equivalent public views and to analyze the atomicity sphere for a service composition.
Two case studies from the supply chain and insurance domains are given to evaluate our proposal and demonstrate the
applicability of our approach.


19. DYNAMIC ROUTING WITH SECURITY CONSIDERATIONS

Security has become one of the major issues for data communication over wired and wireless networks. Different from
the past work on the designs of cryptography algorithms and system infrastructures, we will propose a dynamic routing
algorithm that could randomize delivery paths for data transmission. The algorithm is easy to implement and compatible
with popular routing protocols, such as the Routing Information Protocol in wired networks and Destination-Sequenced
Distance Vector protocol in wireless networks, without introducing extra control messages. An analytic study on the
proposed algorithm is presented, and a series of simulation experiments are conducted to verify the analytic results and
to show the capability of the proposed algorithm.
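
A minimal sketch of randomized delivery paths in the spirit of the algorithm above: each node keeps several loop-free next hops and draws one per packet, so consecutive packets tend to follow different paths. The topology and next-hop sets are hard-coded assumptions:

import java.util.*;

// Per-packet random choice among precomputed loop-free next hops toward D.
public class RandomizedRouting {
    static Map<String, String[]> nextHops = Map.of(
            "A", new String[]{"B", "C"},
            "B", new String[]{"D"},
            "C", new String[]{"D"},
            "D", new String[]{});                // destination
    public static void main(String[] args) {
        Random rnd = new Random(4);
        for (int pkt = 0; pkt < 5; pkt++) {
            StringBuilder path = new StringBuilder("A");
            String cur = "A";
            while (!cur.equals("D")) {
                String[] choices = nextHops.get(cur);
                cur = choices[rnd.nextInt(choices.length)];
                path.append(" -> ").append(cur);
            }
            System.out.println("packet " + pkt + ": " + path);
        }
    }
}
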



20. CAR: CONTEXT-AWARE ADAPTIVE ROUTING FOR DELAY-TOLERANT MOBILE NETWORKS

Most of the existing research work in mobile ad hoc networking is based on the assumption that a path exists between the sender and the receiver. On the other hand, applications of decentralised mobile systems are often characterised by network partitions. As a consequence, delay-tolerant networking research has received considerable attention in recent years as a means to bridge the gap between ad hoc network research and real applications. In this paper we
present the design, implementation and evaluation of the context-aware adaptive routing (CAR) protocol for delay
tolerant unicast communication in intermittently connected mobile ad hoc networks. The protocol is based on the idea of
exploiting nodes as carriers of messages among network partitions to achieve delivery. The choice of the best carrier is
made using Kalman filter based prediction techniques and utility theory. We discuss the implementation of CAR over an
opportunistic networking framework, outlining possible applications of the general principles at the basis of the
proposed approach. The large-scale performance of the CAR protocol is evaluated using simulations based on a social
network founded mobility model, a purely random one and real traces from Dartmouth College.
21. COLLUSIVE PIRACY PREVENTION IN P2P CONTENT DELIVERY NETWORKS

Collusive piracy is the main source of intellectual property violations within the boundary of a P2P network. Paid clients
(colluders) may illegally share copyrighted content files with unpaid clients (pirates). Such online piracy has hindered the
use of open P2P networks for commercial content delivery. We propose a proactive content poisoning scheme to stop
colluders and pirates from alleged copyright infringements in P2P file sharing. The basic idea is to detect pirates timely
with identity-based signatures and time stamped tokens. The scheme stops collusive piracy without hurting legitimate
P2P clients by targeting poisoning on detected violators, exclusively. We developed a new peer authorization protocol
(PAP) to distinguish pirates from legitimate clients. Detected pirates will receive poisoned chunks in their repeated
attempts. Pirates are thus severely penalized with no chance to download successfully in tolerable time. Based on
simulation results, we find a 99.9 percent prevention rate in Gnutella, KaZaA, and Freenet. We achieved an 85-98 percent prevention rate on eMule, eDonkey, Morpheus, etc. The scheme is shown to be less effective in protecting some poison-
resilient networks like BitTorrent and Azureus. Our work opens up the low-cost P2P technology for copyrighted content
delivery. The advantage lies mainly in minimum delivery cost, higher content availability, and copyright compliance in
exploring P2P network resources.


22. SPREAD SPECTRUM WATERMARKING SECURITY

This paper presents both theoretical and practical analyses of the security offered by watermarking and data hiding
methods based on spread spectrum. In this context, security is understood as the difficulty of estimating the secret
parameters of the embedding function based on the observation of watermarked signals. On the theoretical side, the
security is quantified from an information-theoretic point of view by means of the equivocation about the secret
parameters. The main results reveal fundamental limits and bounds on security and provide insight into other
properties, such as the impact of the embedding parameters, and the tradeoff between robustness and security. On the
practical side, workable estimators of the secret parameters are proposed and theoretically analyzed for a variety of
scenarios, providing a comparison with previous approaches, and showing that the security of many schemes used in
practice can be fairly low.



23. LOCAL CONSTRUCTION OF NEAR-OPTIMAL POWER SPANNERS FOR WIRELESS AD-HOC NETWORKS

We present a local distributed algorithm that, given a wireless ad hoc network modeled as a unit disk graph U in the plane, constructs a planar power spanner of U whose degree is bounded by k and whose stretch factor is bounded by 1 + (2 sin(π/k))^p, where k ≥ 10 is an integer parameter and p ∈ [2, 5] is the power exponent constant. For the same degree bound k, the stretch factor of our algorithm significantly improves the previous best bounds by Song et al. We show that this bound is near-optimal by proving that the slightly smaller stretch factor of 1 + (2 sin(π/(k+1)))^p is
unattainable for the same degree bound k. In contrast to previous algorithms for the problem, the presented algorithm is
local. As a consequence, the algorithm is highly scalable and robust. Finally, while the algorithm is efficient and easy to
implement in practice, it relies on deep insights on the geometry of unit disk graphs and novel techniques that are of
independent interest.



24. MULTIPLE ROUTING CONFIGURATIONS FOR FAST IP NETWORK RECOVERY

As the Internet takes an increasingly central role in our communications infrastructure, the slow convergence of routing
protocols after a network failure becomes a growing problem. To assure fast recovery from link and node failures in IP
networks, we present a new recovery scheme called Multiple Routing Configurations (MRC). Our proposed scheme
guarantees recovery in all single failure scenarios, using a single mechanism to handle both link and node failures, and
without knowing the root cause of the failure. MRC is strictly connectionless, and assumes only destination based hop-
by-hop forwarding. MRC is based on keeping additional routing information in the routers, and allows packet forwarding
to continue on an alternative output link immediately after the detection of a failure. It can be implemented with only
minor changes to existing solutions. In this paper we present MRC, and analyze its performance with respect to
scalability, backup path lengths, and load distribution after a failure. We also show how an estimate of the traffic
demands in the network can be used to improve the distribution of the recovered traffic, and thus reduce the chances of
congestion when MRC is used.
25. COMPACTION OF SCHEDULES AND A TWO-STAGE APPROACH FOR DUPLICATION-BASED DAG SCHEDULING

Many DAG scheduling algorithms generate schedules that require a prohibitively large number of processors. To address
this problem, we propose a generic algorithm, SC, to minimize the processor requirement of any given valid schedule. SC
preserves the schedule length of the original schedule and reduces processor count by merging processor schedules and
removing redundant duplicate tasks. To the best of our knowledge, this is the first algorithm to address this highly
unexplored aspect of DAG scheduling. On average, SC reduced the processor requirement by 91%, 82% and 72% for schedules generated by the PLW, TCSD and CPFD algorithms, respectively. The SC algorithm has a low complexity (O(|N|^3)) compared to most duplication-based algorithms. Moreover, it decouples processor economization from the schedule length minimization problem. To take advantage of these features of SC, we also propose a scheduling algorithm SDS, having the same time complexity as SC. Our experiments demonstrate that schedules generated by SDS are only 3% longer than CPFD (O(|N|^4)), one of the best algorithms in that respect. SDS and SC together form a two-stage scheduling
algorithm that produces schedules with high quality and low processor requirement, and has lower complexity than the
comparable algorithms that produce similar high quality results.




26. THE EFFECTIVENESS OF CHECKSUMS FOR EMBEDDED NETWORKS

Embedded control networks commonly use checksums to detect data transmission errors. However, design decisions
about which checksum to use are difficult because of a lack of information about the relative effectiveness of available
options. We study the error detection effectiveness of the following commonly used checksum computations for
embedded networks: exclusive or (XOR), two’s complement addition, one’s complement addition, Fletcher checksum,
Adler checksum, and cyclic redundancy codes (CRC). A study of error detection capabilities for random independent bit
errors and burst errors reveals that XOR, two’s complement addition, and Adler checksums are suboptimal for typical
application use. Instead, one’s complement addition should be used for applications willing to sacrifice error detection
effectiveness to reduce compute cost, Fletcher checksum for applications looking for a balance of error detection and
compute cost, and CRCs for applications willing to pay a higher compute cost for further improved error detection.
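
Textbook forms of three of the checksums compared in this study; block sizes and initialization vary across protocols, so treat these as illustrative definitions rather than any particular standard's exact encoding:

public class Checksums {
    static int xorChecksum(byte[] d) {
        int x = 0;
        for (byte b : d) x ^= (b & 0xFF);
        return x;
    }
    static int onesComplementSum(byte[] d) {     // 8-bit one's complement add
        int s = 0;
        for (byte b : d) {
            s += (b & 0xFF);
            s = (s & 0xFF) + (s >> 8);           // wrap the carry around
        }
        return (~s) & 0xFF;
    }
    static int fletcher16(byte[] d) {
        int a = 0, b = 0;
        for (byte x : d) {
            a = (a + (x & 0xFF)) % 255;          // running sum
            b = (b + a) % 255;                   // sum of sums: position-aware
        }
        return (b << 8) | a;
    }
    public static void main(String[] args) {
        byte[] msg = "embedded network frame".getBytes();
        System.out.printf("XOR      = %02X%n", xorChecksum(msg));
        System.out.printf("1s-compl = %02X%n", onesComplementSum(msg));
        System.out.printf("Fletcher = %04X%n", fletcher16(msg));
        // Swap two bytes: XOR and plain sums are order-insensitive and miss
        // the reordering, while Fletcher's position-aware term catches it.
        byte[] swapped = msg.clone();
        byte t = swapped[0]; swapped[0] = swapped[1]; swapped[1] = t;
        System.out.printf("Fletcher(swapped) = %04X%n", fletcher16(swapped));
    }
}
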




27. DETECTING MALICIOUS PACKET LOSSES

In this paper, we consider the problem of detecting whether a compromised router is maliciously manipulating its stream
of packets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets
destined for some victim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action
because normal network congestion can produce the same effect. Modern networks routinely drop packets when the
load temporarily exceeds their buffering capacities. Previous detection protocols have tried to address this problem with
a user-defined threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally
unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or mask highly
focused attacks. We have designed, developed, and implemented a compromised router detection protocol that
dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will
occur. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions.
We have tested our protocol in Emulab and have studied its effectiveness in differentiating attacks from legitimate
network behavior.



28. VIRUS SPREAD IN NETWORKS

We study how the spread of computer viruses, worms, and other self-replicating malware is affected by the logical topology of the network over which they propagate. We consider a model in which each host can be in one of three possible states: susceptible, infected, or removed (cured and no longer susceptible to infection). We characterize how the size of the population that eventually becomes infected depends on the network topology. Specifically, we show that if the ratio of
cure to infection rates is larger than the spectral radius of the graph, and the initial infected population is small, then
the final infected population is also small in a sense that can be made precise. Conversely, if this ratio is smaller than
the spectral radius, then we show in some graph models of practical interest (including power law random graphs) that
the final infected population is large. These results yield insights into what the critical parameters are in determining
virus spread in networks.
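
The threshold condition above can be checked numerically: the sketch below estimates the spectral radius of a small hard-coded adjacency matrix by power iteration and compares it with the cure-to-infection ratio. The graph and rates are made-up inputs for illustration:

import java.util.*;

// Power iteration: repeatedly multiply a normalized vector by the adjacency
// matrix; ||Av|| converges to the spectral radius for this symmetric matrix.
public class EpidemicThreshold {
    public static void main(String[] args) {
        double[][] a = {                         // adjacency of a 4-cycle plus a chord
                {0, 1, 0, 1},
                {1, 0, 1, 1},
                {0, 1, 0, 1},
                {1, 1, 1, 0}};
        int n = a.length;
        double[] v = new double[n];
        Arrays.fill(v, 1.0);
        double lambda = 0;
        for (int it = 0; it < 200; it++) {
            double[] w = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) w[i] += a[i][j] * v[j];
            double norm = 0;
            for (double x : w) norm += x * x;
            norm = Math.sqrt(norm);
            lambda = norm;                       // ||Av|| with ||v|| = 1
            for (int i = 0; i < n; i++) v[i] = w[i] / norm;
        }
        double cureRate = 1.0, infectRate = 0.3;
        System.out.printf("spectral radius ~ %.3f%n", lambda);
        System.out.println(cureRate / infectRate > lambda
                ? "ratio above radius: outbreak should die out"
                : "ratio below radius: outbreak can spread");
    }
}
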
29. BIASED RANDOM WALKS IN UNIFORM WIRELESS NETWORKS

A recurrent problem when designing distributed applications is to search for a node with a known property. File searching in peer-to-peer (P2P) applications, resource discovery in service-oriented architectures (SOAs), and path discovery in routing can all be cast as a search problem. Random walk-based search algorithms are often suggested for tackling the search problem, especially in very dynamic systems like mobile wireless networks. The cost and the effectiveness of a random walk-based search algorithm are measured by the expected number of transmissions required before hitting the target. Hence, a low hitting time is a critical goal. This paper studies the effect of biasing the random walk toward the
target on the hitting time. For a walk running over a network with uniform node distribution, a simple upper bound that
connects the hitting time to the bias level is obtained. The key result is that even a modest bias level is able to reduce
the hitting time significantly. This paper also proposes a search protocol for mobile wireless networks, whose results are
interpreted in the light of the theoretical study. The proposed solution is for unstructured wireless mobile networks.
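
A small simulation in the spirit of the result above, assuming a grid topology and a simple "step toward the target with probability b" bias rule; both are assumptions of this sketch, not the paper's model:

import java.util.*;

// Even a modest bias level cuts the mean hitting time sharply compared with
// the unbiased (bias = 0) walk on the same grid.
public class BiasedWalk {
    public static void main(String[] args) {
        for (double bias : new double[]{0.0, 0.1, 0.3}) {
            long total = 0; int trials = 2000, m = 20;
            Random rnd = new Random(42);
            for (int t = 0; t < trials; t++) {
                int x = 0, y = 0, tx = m - 1, ty = m - 1, steps = 0;
                while (x != tx || y != ty) {
                    steps++;
                    if (rnd.nextDouble() < bias) {   // biased step toward target
                        if (x != tx) x += Integer.signum(tx - x);
                        else y += Integer.signum(ty - y);
                    } else {                          // uniform neighbor step
                        int dir = rnd.nextInt(4);
                        if (dir == 0) x = Math.min(m - 1, x + 1);
                        else if (dir == 1) x = Math.max(0, x - 1);
                        else if (dir == 2) y = Math.min(m - 1, y + 1);
                        else y = Math.max(0, y - 1);
                    }
                }
                total += steps;
            }
            System.out.printf("bias %.1f: mean hitting time %.0f steps%n",
                    bias, (double) total / trials);
        }
    }
}
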




30. INFORMATION CONTENT-BASED SENSOR SELECTION AND TRANSMISSION POWER ADJUSTMENT FOR COLLABORATIVE TARGET TRACKING

For target tracking applications, wireless sensor nodes provide accurate information since they can be deployed and
operated near the phenomenon. These sensing devices have the opportunity of collaboration among themselves to
improve the target localization and tracking accuracies. An energy-efficient collaborative target tracking paradigm is
developed for wireless sensor networks (WSNs). A mutual-information-based sensor selection (MISS) algorithm is
adopted for participation in the fusion process. MISS allows the sensor nodes with the highest mutual information about
the target state to transmit data so that the energy consumption is reduced while the desired target position estimation
accuracy is met. In addition, a novel approach to energy savings in WSNs is devised in the information-controlled
transmission power (ICTP) adjustment, where nodes with more information use higher transmission powers than those
that are less informative to share their target state information with the neighboring nodes. Simulations demonstrate the
performance gains offered by MISS and ICTP in terms of power consumption and target localization accuracy.




31. PRESTO: FEEDBACK-DRIVEN DATA MANAGEMENT IN SENSOR NETWORKS

This paper presents PRESTO, a novel two-tier sensor data management architecture comprising proxies and sensors
that cooperate with one another for acquiring data and processing queries. PRESTO proxies construct time-series models
of observed trends in the sensor data and transmit the parameters of the model to sensors. Sensors check sensed data
with model-predicted values and transmit only deviations from the predictions back to the proxy. Such a model-driven
push approach is energy-efficient, while ensuring that anomalous data trends are never missed. In addition to supporting
queries on current data, PRESTO also supports queries on historical data using interpolation and local archival at
sensors. PRESTO can adapt model and system parameters to data and query dynamics to further extract energy savings.
We have implemented PRESTO on a sensor testbed comprising Intel Stargates and Telos Motes. Our experiments show
that in a temperature monitoring application, PRESTO yields one to two orders of magnitude reduction in energy
requirements over on-demand, proactive or model-driven pull approaches. PRESTO also results in an order of magnitude
reduction in query latency in a 1% duty-cycled, five-hop sensor network over a system that forwards all queries to remote
sensor nodes.
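
The sensor-side check of such a model-driven push can be sketched as follows (our own illustration; PRESTO's actual
time-series models and parameter names differ, and the linear trend here is an assumption): the proxy ships model
parameters, and the sensor transmits only readings that deviate from the prediction by more than a threshold delta.

    // Illustrative sketch of a model-driven push check at the sensor.
    public class ModelDrivenPush {
        // model: predicted(t) = a + b * t, parameters pushed by the proxy
        static double a = 20.0, b = 0.01, delta = 0.5;

        static boolean shouldTransmit(long t, double reading) {
            double predicted = a + b * t;
            return Math.abs(reading - predicted) > delta;  // push only deviations
        }

        public static void main(String[] args) {
            double[] readings = {20.0, 20.1, 20.3, 23.9, 20.5};  // 23.9 is anomalous
            for (int t = 0; t < readings.length; t++) {
                if (shouldTransmit(t, readings[t]))
                    System.out.println("t=" + t + ": push deviation " + readings[t]);
                // otherwise stay silent; the proxy answers queries from the model
            }
        }
    }
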

Non-geostationary (NGEO) satellite communication systems offer an array of advantages over their terrestrial and
geostationary counterparts. They are seen as an integral part of next generation ubiquitous communication systems.
Given the non-uniform distribution of users in satellite footprints, due to several geographical and/or climatic
constraints, some Inter-Satellite Links (ISLs) are expected to be heavily loaded with data packets while others remain
underutilized. Such a scenario obviously leads to congestion of the heavily loaded links. It ultimately results in buffer
overflows, higher queuing delays, and significant packet drops. To guarantee a better distribution of traffic among
satellites, this paper proposes an explicit exchange of information on congestion status among neighboring satellites.
Indeed, a satellite notifies its congestion status to its neighboring satellites. When it is about to get congested, it requests
its neighboring satellites to decrease their data forwarding rates by sending them a self status notification signaling
message. In response, the neighboring satellites search for less congested paths that do not include the satellite in
question and communicate a portion of data, primarily destined to the satellite, via the retrieved paths. This operation
avoids both congestion and packet drops at the satellite. It also ensures a better distribution of traffic over the entire
satellite constellation. The proposed scheme is dubbed the "Explicit Load Balancing" (ELB) scheme.


We consider the delay properties of one-hop networks with general interference constraints and multiple traffic streams
with time-correlated arrivals. We first treat the case when arrivals are modulated by independent finite state Markov
chains. We show that the well-known maximal scheduling algorithm achieves average delay that grows at most
logarithmically in the largest number of interferers at any link. Further, in the important special case when each Markov
process has at most two states (such as bursty ON/OFF sources), we prove that average delay is independent of the
number of nodes and links in the network, and hence is order-optimal. We provide tight delay bounds in terms of the
individual auto-correlation parameters of the traffic sources. These are perhaps the first order-optimal delay results for
controlled queueing networks that explicitly account for such statistical information. Our analysis treats cases both with
and without flow control.



Release planning for incremental software development assigns features to releases such that technical, resource, risk,
and budget constraints are met. A feature can be offered as part of a release only if all of its necessary tasks are done
before the given release date. We assume a given pool of human resources with different degrees of productivity to
perform different types of tasks. In the context of release planning, the question studied in this paper is how to allocate
these resources to the tasks of implementing the features such that the value gained from the released features is
maximized. Planning of software releases and allocation of resources cannot be handled in isolation. To address the
inherent difficulty of this process, we propose a two-phase optimization approach called OPTIMIZERASORP that
combines the strength of two existing solution methods. Phase 1 applies integer linear programming to a relaxed version
of the full problem. Phase 2 uses genetic programming in a reduced search space to generate operational resource
allocation plans. The method is evaluated for a series of 600 randomly generated problems with varying problem
parameters. The results are compared with a heuristic that locally allocates resources based on a greedy search.


We present a new technique for helping developers understand heap referencing properties of object-oriented programs
and how the actions of the program affect these properties. Our dynamic analysis uses the aliasing properties of objects
to synthesize a set of roles; each role represents an abstract object state intended to be of interest to the developer. We
allow the developer to customize the analysis to explore the object states and behavior of the program at multiple
different and potentially complementary levels of abstraction. The analysis uses roles as the basis for three abstractions:
role transition diagrams, which present the observed transitions between roles and the methods responsible for the
transitions; role relationship diagrams, which present the observed referencing relationships between objects playing
different roles; and enhanced method interfaces, which present the observed roles of method parameters. Together,
these abstractions provide useful information about important object and data structure properties and how the actions
of the program affect these properties. We have implemented the role analysis and have used this implementation to
explore the behavior of several Java programs. Our experience indicates that, when combined with a powerful graphical
user interface, roles are a useful abstraction for helping developers explore and understand the behavior of object-
oriented programs.


We show that even though mobile networks are highly unpredictable when viewed at the individual node scale, the end-
to-end quality-of-service (QoS) metrics can be stationary when the mobile network is viewed in the aggregate. We define
the coherence time as the maximum duration for which the end-to-end QoS metric remains roughly constant, and the
spreading period as the minimum duration required to spread QoS information to all the nodes. We show that if the
coherence time is greater than the spreading period, the end-to-end QoS metric can be tracked. We focus on the energy
consumption as the end-to-end QoS metric, and describe a novel method by which an energy map can be constructed
and refined in the joint memory of the mobile nodes. Finally, we show how energy maps can be utilized by an application
that aims to minimize a node’s total energy consumption over its near-future trajectory.
In mobile ad hoc networks (MANETs), every node overhears every data transmission occurring in its vicinity and thus,
consumes energy unnecessarily. In IEEE 802.11 Power Saving Mechanism (PSM), a packet must be advertised before it
is actually transmitted. When a node receives an advertised packet that is not destined to itself, it switches to a low-power
sleep state during the data transmission period, and thus, avoids overhearing and conserves energy. However, since
some MANET routing protocols such as Dynamic Source Routing (DSR) collect route information via overhearing, they
would suffer if they are used in combination with 802.11 PSM. Allowing no overhearing may critically deteriorate the
performance of the underlying routing protocol, while unconditional overhearing may offset the advantage of using PSM.
This paper proposes a new communication mechanism, called RandomCast, via which a sender can specify the desired
level of overhearing, making a prudent balance between energy and routing performance. In addition, it reduces
redundant rebroadcasts for a broadcast packet and thus saves more energy. Extensive simulation using ns-2 shows that
RandomCast is highly energy-efficient compared to conventional 802.11 as well as 802.11 PSM-based schemes, in terms
of total energy consumption, energy goodput and energy balance.
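
The core decision can be sketched as follows (a simplified illustration, not the paper's ns-2 implementation; the names
and the single overhearing-level parameter are ours): the sender advertises a packet together with a desired overhearing
level, and a node that is not the destination overhears with that probability and otherwise sleeps for the data period.

    import java.util.Random;

    // Illustrative sketch of the RandomCast decision at a node that hears
    // an advertisement for a packet.
    public class RandomCastSketch {
        static final Random RND = new Random();

        enum Action { RECEIVE, OVERHEAR, SLEEP }

        static Action onAdvertisement(String me, String dest, double overhearLevel) {
            if (me.equals(dest)) return Action.RECEIVE;    // packet is for us
            return RND.nextDouble() < overhearLevel
                    ? Action.OVERHEAR                      // keep radio on: feeds DSR's route cache
                    : Action.SLEEP;                        // conserve energy, as in 802.11 PSM
        }

        public static void main(String[] args) {
            for (int i = 0; i < 5; i++)
                System.out.println(onAdvertisement("nodeB", "nodeC", 0.3));
        }
    }
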



In this paper, we propose a bandwidth-efficient multicast mechanism for heterogeneous wireless networks. We reduce
the bandwidth cost of an Internet Protocol (IP) multicast tree by adaptively selecting the cell and the wireless technology
for each mobile host to join the multicast group. Our mechanism enables more mobile hosts to cluster together and
leads to the use of fewer cells to save the scarce wireless bandwidth. Besides, the paths in the multicast tree connecting
to the selected cells share more common links to save the wireline bandwidth. Our mechanism supports the dynamic
group membership and offers mobility of group members. Moreover, our mechanism requires no modification to the
current IP multicast routing protocols. We formulate the selection of the cell and the wireless technology for each mobile
host in the heterogeneous wireless networks as an optimization problem. We use Integer Linear Programming to model
the problem and show that the problem is NP-hard. To solve the problem, we propose a distributed algorithm based on
Lagrangean relaxation and a network protocol based on the algorithm. The simulation results show that our mechanism
can effectively save the wireless and wireline bandwidth as compared to the traditional IP multicast.




We present Quiver, a system that coordinates service proxies placed at the "edge" of the Internet to serve distributed
clients accessing a service involving mutable objects. Quiver enables these proxies to perform consistent accesses to
shared objects by migrating the objects to proxies performing operations on those objects. These migrations dramatically
improve performance when operations involving an object exhibit geographic locality, since migrating this object into the
vicinity of proxies hosting these operations will benefit all such operations. This system reduces the workload at the
server by performing all operations at the proxies themselves, processing operations in first-in-first-out order, and it
supports two levels of consistency, serializability and strict serializability, for consistent object sharing. Other
workloads benefit from Quiver by dispersing the computation load across the proxies and saving the costs
of sending operation parameters over the wide area when these are large. Quiver also supports optimizations for single-
object reads that do not involve migrating the object. We detail the protocols for implementing object operations and for
accommodating the addition, involuntary disconnection, and voluntary departure of proxies.


Wireless links are often asymmetric due to heterogeneity in the transmission power of devices, non-uniform
environmental noise, and other signal propagation phenomena. Unfortunately, routing protocols for mobile ad hoc
networks typically work well only in bidirectional networks. This paper first presents a simulation study quantifying the
impact of asymmetric links on network connectivity and routing performance. It then presents a framework called BRA
that provides a bidirectional abstraction of the asymmetric network to routing protocols. BRA works by maintaining
multi-hop reverse routes for unidirectional links and provides three new abilities: improved connectivity by taking
advantage of the unidirectional links, reverse route forwarding of control packets to enable off-the-shelf routing
protocols, and detection of packet loss on unidirectional links. Extensive simulations of AODV layered on BRA show that
packet delivery increases substantially (two-fold in some instances) in asymmetric networks compared to regular AODV,
which only routes on bidirectional links.



There is significant interest in the data mining and network management communities in improving existing techniques
for clustering multivariate network traffic flow records so that we can quickly infer underlying traffic
patterns. In this paper, we investigate the use of clustering techniques to identify interesting traffic patterns from
network traffic data in an efficient manner. We develop a framework to deal with mixed type attributes including
numerical, categorical, and hierarchical attributes for a one-pass hierarchical clustering algorithm. We demonstrate the
improved accuracy and efficiency of our approach in comparison to previous work on clustering network traffic.




In this paper, we consider an overarching problem that encompasses both performance metrics. In particular, we study
the network capacity problem under a given network lifetime requirement. Specifically, for a wireless sensor network
where each node is provisioned with an initial energy, if all nodes are required to live up to a certain lifetime criterion,
we ask how much data the network can collect. Since the objective of maximizing the sum of rates of all the nodes in the
network can lead to a severe bias in rate
allocation among the nodes, we advocate the use of lexicographical max-min (LMM) rate allocation. To calculate the LMM
rate allocation vector, we develop a polynomial-time algorithm by exploiting the parametric analysis (PA) technique from
linear programming (LP), which we call serial LP with Parametric Analysis (SLP-PA). We show that SLP-PA can also be
employed to address the LMM node lifetime problem much more efficiently than a state-of-the-art algorithm proposed in
the literature. More importantly, we show that there exists an elegant duality relationship between the LMM rate allocation
problem and the LMM node lifetime problem. Therefore, it is sufficient to solve only one of the two problems. Important
insights can be obtained by inferring duality results for the other problem.


Measurement and estimation of packet loss characteristics are challenging due to the relatively rare occurrence and
typically short duration of packet loss episodes. While active probe tools are commonly used to measure packet loss on
end-to-end paths, there has been little analysis of the accuracy of these tools or their impact on the network. The
objective of our study is to understand how to measure packet loss episodes accurately with end-to-end probes. We
begin by testing the capability of standard Poisson-modulated end-to-end measurements of loss in a controlled
laboratory environment using IP routers and commodity end hosts. Our tests show that loss characteristics reported
from such Poisson-modulated probe tools can be quite inaccurate over a range of traffic conditions. Motivated by these
observations, we introduce a new algorithm for packet loss measurement that is designed to overcome the deficiencies in
standard Poisson-based tools. Specifically, our method entails probe experiments that follow a geometric distribution to
1) enable an explicit trade-off between accuracy and impact on the network, and 2) enable more accurate measurements
than standard Poisson probing at the same rate. We evaluate the capabilities of our methodology experimentally by
developing and implementing a prototype tool, called BADABING. The experiments demonstrate the trade-offs between
impact on the network and measurement accuracy. We show that BADABING reports loss characteristics far more
accurately than traditional loss measurement tools.
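
The geometric probe scheduling can be sketched as follows (our own illustration; BADABING's actual probe-pair
structure and parameter values are richer): each time slot carries a probe with probability p, so the gaps between
probes are geometrically distributed, and p trades measurement accuracy against impact on the network.

    import java.util.Random;

    // Illustrative sketch: geometric inter-probe gaps sampled by inverse-CDF.
    public class GeometricProbes {
        public static void main(String[] args) {
            Random rnd = new Random(1);
            double p = 0.2;        // per-slot probe probability (tunable)
            int slot = 0;
            for (int probes = 0; probes < 10; probes++) {
                // number of slots until the next success of a Bernoulli(p) trial
                int gap = Math.max(1,
                        (int) Math.ceil(Math.log(1 - rnd.nextDouble()) / Math.log(1 - p)));
                slot += gap;
                System.out.println("send probe at slot " + slot);
            }
        }
    }
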
The probabilistic packet marking (PPM) algorithm is a promising way to discover the Internet map or an attack graph
that the attack packets traversed during a distributed denial-of-service attack. However, the PPM algorithm is not
perfect, as its termination condition is not well defined in the literature. More importantly, without a proper termination
condition, the attack graph constructed by the PPM algorithm would be wrong. In this work, we provide a precise
termination condition for the PPM algorithm and name the new algorithm the Rectified PPM (RPPM) algorithm. The most
significant merit of the RPPM algorithm is that when the algorithm terminates, the algorithm guarantees that the
constructed attack graph is correct, with a specified level of confidence. We carry out simulations on the RPPM algorithm
and show that the RPPM algorithm can guarantee the correctness of the constructed attack graph under 1) different
probabilities that a router marks the attack packets and 2) different structures of the network graph. The RPPM
algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising
means of enhancing the reliability of the PPM algorithm.
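
For concreteness, the basic router-side marking step that PPM-style schemes share can be sketched as follows (a
simplified node-sampling illustration of our own; the field layout and the marking probability value are assumptions,
and RPPM's contribution, the termination condition, sits on the collector side, not shown here):

    import java.util.Random;

    // Illustrative sketch: each router overwrites the packet's marking field
    // with its own identity with probability p; otherwise it ages any existing
    // mark by one hop. The victim reconstructs the graph from collected marks.
    public class PpmRouter {
        static final double MARK_PROB = 0.04;   // assumed marking probability
        static final Random RND = new Random();

        static class Packet { String mark; int distance; }

        static void forward(Packet pkt, String routerId) {
            if (RND.nextDouble() < MARK_PROB) {
                pkt.mark = routerId;    // start a new mark at this router
                pkt.distance = 0;
            } else if (pkt.mark != null) {
                pkt.distance++;         // one more hop since the mark was written
            }
        }

        public static void main(String[] args) {
            Packet p = new Packet();
            for (String r : new String[] {"R1", "R2", "R3", "R4"}) forward(p, r);
            System.out.println("mark=" + p.mark + " distance=" + p.distance);
        }
    }
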
Intrusion detection in Wireless Sensor Network (WSN) is of practical interest in many applications such as detecting an
intruder in a battlefield. The intrusion detection is defined as a mechanism for a WSN to detect the existence of
inappropriate, incorrect, or anomalous moving attackers. In this paper, we consider this issue according to
heterogeneous WSN models. Furthermore, we consider two sensing detection models: single-sensing detection and
multiple-sensing detection. Our simulation results show the advantage of multiple-sensor heterogeneous WSNs.




In recent years, the exponential growth of Internet users with increased bandwidth requirements has led to the
emergence of the next generation of IP routers. Distributed architecture is one of the promising trends providing petabit
routers with a large switching capacity and high-speed interfaces. Distributed routers are designed with an optical
switch fabric interconnecting line and control cards. Computing and memory resources are available on both control and
line cards to perform routing and forwarding tasks. This new hardware architecture is not efficiently utilized by the
traditional software models where a single control card is responsible for all routing and management operations. The
routing table manager plays an extremely critical role by managing routing information and in particular, a forwarding
information table. This article presents a distributed architecture set up around a distributed and scalable routing table
manager. This architecture also provides improvements in robustness and resiliency.




This work was motivated by the need to achieve low latency in an input-queued, centrally scheduled cell switch for high-
performance computing applications; specifically, the aim is to reduce the latency incurred between issuance of a
request and arrival of the corresponding grant. We introduce a speculative transmission scheme to significantly reduce
the average latency by allowing cells to proceed without waiting for a grant. It operates in conjunction with any
centralized matching algorithm to achieve a high maximum utilization. An analytical model is presented to investigate
the efficiency of the speculative transmission scheme employed in a non-blocking N×NR input-queued crossbar switch
with R receivers per output. The results demonstrate that the request-grant latency can be almost entirely eliminated
for loads up to 50%. Our
simulations confirm the analytical results.

An efficient algorithm is presented for the computation of grayscale morphological operations with arbitrary 2-D flat
structuring elements (S.E.). The required computing time is independent of the image content and of the number of gray
levels used. It always outperforms the only existing comparable method, which was proposed in the work by Van
Droogenbroeck and Talbot, by a factor between 3.5 and 35.1, depending on the image type and the shape of the S.E. So
far, filtering using multiple S.E.s has always been done by performing the operator for each size and shape of the S.E.
separately. With our method, filtering with multiple S.E.s can be performed by a single operator for a slightly reduced
computational cost per size or shape, which makes this method more suitable for use in granulometries, dilation-erosion
scale spaces, and template matching using the hit-or-miss transform. The discussion focuses on erosions and dilations,
from which other transformations can be derived.



Structured light vision systems have been successfully used for the measurement of 3-D surfaces. A limitation of such
schemes is that tens of pictures must be captured to recover a 3-D scene. This paper presents an idea for real-time
acquisition of 3-D surface data by a specially coded vision system. To achieve 3-D measurement for a
dynamic scene, the data acquisition must be performed with only a single image. A principle of uniquely color-encoded
pattern projection is proposed to design a color matrix for improving the reconstruction efficiency. The matrix is
produced by a special code sequence and a number of state transitions. A color projector is controlled by a computer to
generate the desired color patterns in the scene. The unique indexing of the light codes is crucial here for color
projection since it is essential that each light grid be uniquely identified by incorporating local neighborhoods so that 3-D
reconstruction can be performed with only local analysis of a single image. A scheme is presented to describe such a
vision processing method for fast 3-D data acquisition. Practical experimental performance is provided to analyze the
efficiency of the proposed methods.



High cohesion is a desirable property in software systems for achieving reusability and maintainability. Existing
measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture
different aspects of it. In existing approaches, cohesion is calculated from structural information, for example, method
attributes and references. In this project, we compute the conceptual cohesion of classes from the unstructured
information embedded in the source code, such as comments and identifiers. Latent Semantic Indexing is used to
retrieve this unstructured information from the source code. A large case study on three open source software systems
is presented which compares the new measure with an extensive set of existing metrics and uses them to construct
models that predict software faults. In this way we achieve high cohesion and predict faults in Object-Oriented systems.




Location-based spatial queries (LBSQs) refer to spatial queries whose answers rely on the location of the inquirer.
Efficient processing of LBSQs is of critical importance with the ever-increasing deployment and use of mobile
technologies. We show that LBSQs have certain unique characteristics that traditional spatial query processing in
centralized databases does not address. For example, a significant challenge is presented by wireless broadcasting
environments, which have excellent scalability but often exhibit high-latency database access. In this paper, we present
a novel query processing technique that, while maintaining high scalability and accuracy, manages to reduce the
latency considerably in answering LBSQs. Our approach is based on peer-to-peer sharing, which enables us to process
queries without delay at a mobile host by using query results cached in its neighboring mobile peers. We demonstrate
the feasibility of our approach through a probabilistic analysis, and we illustrate the appeal of our technique through
extensive simulation results.
Since 2005, IEEE 802.11-based networks have been able to provide a certain level of quality of service (QoS) by
means of service differentiation, due to the IEEE 802.11e amendment. However, no mechanism or method has been
standardized to accurately evaluate the amount of resources remaining on a given channel. Such an evaluation would,
however, be a good asset for bandwidth-constrained applications. In multihop ad hoc networks, such evaluation
becomes even more difficult. Consequently, despite the various contributions around this research topic, the estimation
of the available bandwidth still represents one of the main issues in this field. In this paper, we propose an improved
mechanism to estimate the available bandwidth in IEEE 802.11-based ad hoc networks. Through simulations, we
compare the accuracy of the estimation we propose to the estimation performed by other state-of-the-art QoS protocols,
BRuIT, AAC, and QoS-AODV.




Self-propagating codes, called worms, such as Code Red, Nimda, and Slammer, have drawn significant attention due to
their enormously adverse impact on the Internet. Thus, there is great interest in the research community in modeling the
spread of worms and in providing adequate defense mechanisms against them. In this paper, we present a (stochastic)
branching process model for characterizing the propagation of Internet worms. The model is developed for uniform
scanning worms and then extended to preference scanning worms. This model leads to the development of an automatic
worm containment strategy that prevents the spread of a worm beyond its early stage. Specifically, for uniform scanning
worms, we are able to determine whether the worm spread will eventually stop. We then extend our results to contain
uniform scanning worms. Our automatic worm containment schemes effectively contain both uniform scanning worms
and local preference scanning worms, and they are validated through simulations and real trace data to be nonintrusive.
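
A branching-process view of a uniform scanning worm can be sketched as follows (our own illustration with invented
numbers; the paper's model is analytical rather than a simulation): each infected host is allowed a bounded number of
random scans, and whether the process dies out depends on the expected number of new infections per host staying
below one, which is what a containment cap on scans enforces.

    import java.util.Random;

    // Illustrative sketch: generations of a scanning worm under a scan cap.
    public class WormBranching {
        public static void main(String[] args) {
            Random rnd = new Random(7);
            long addressSpace = 1L << 32;       // IPv4-sized scan space
            long vulnerable = 500_000;          // assumed vulnerable population
            int scansPerHost = 4000;            // containment cap on scans per host
            long infected = 10, newlyInfected = 10;
            for (int gen = 0; gen < 10 && newlyInfected > 0; gen++) {
                double pHit = (double) (vulnerable - infected) / addressSpace;
                long next = 0;
                for (long h = 0; h < newlyInfected; h++)
                    for (int s = 0; s < scansPerHost; s++)
                        if (rnd.nextDouble() < pHit) next++;
                infected += next;
                newlyInfected = next;
                System.out.println("generation " + gen + ": infected=" + infected);
            }
        }
    }

With these numbers the expected offspring per host is about 0.47, so the outbreak dies out in its early stage; raising
the scan cap above the threshold lets it spread, which is the dichotomy the branching-process model captures.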




In this project we present a simple way to resolve a complicated network security problem. This is done in two ways.
The first is the decrypt-only-when-necessary (DOWN) policy, which can substantially improve the ability of low-cost
devices to protect their secrets. The DOWN policy relies on the ability to operate with fractional parts of secrets. We
discuss the feasibility of extending the DOWN policy to various asymmetric and symmetric cryptographic primitives. The
second is cryptographic authentication strategies which employ only symmetric cryptographic primitives, based on novel
ID-based key pre-distribution schemes that demand very low complexity of operations to be performed by the secure
coprocessors (ScP) and can take good advantage of the DOWN policy.



Data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing the
access latency and bandwidth usage. However, designing efficient distributed caching algorithms is non-trivial when
network nodes have limited memory. In this article, we consider the cache placement problem of minimizing total data
access cost in ad hoc networks with multiple data items and nodes with limited memory capacity. The above
optimization problem is known to be NP-hard. Defining benefit as the reduction in total access cost, we present a
polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least one-
fourth (one-half for uniform-size data items) of the optimal benefit. The approximation algorithm is amenable to localized
distributed implementation, which is shown via simulations to perform close to the approximation algorithm. Our
distributed algorithm naturally extends to networks with mobile nodes. We simulate our distributed algorithm using a
network simulator (ns2), and demonstrate that it significantly outperforms another existing caching technique (by Yin
and Cao [30]) in all important performance metrics. The performance differential is particularly large in more challenging
scenarios, such as higher access frequency and smaller memory.
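
A simplified greedy variant of benefit-driven placement can be sketched as follows (our own illustration; the paper's
approximation algorithm and its one-fourth guarantee are more involved, and the benefit values here are invented):
repeatedly cache the (node, item) pair with the largest remaining benefit until node memories are full.

    // Illustrative sketch: greedy benefit-driven cache placement.
    public class GreedyCachePlacement {
        // benefit[n][i]: reduction in total access cost if node n caches item i
        static double[][] benefit = {
            {5.0, 2.0, 1.0},
            {1.0, 6.0, 2.0},
            {2.0, 1.0, 4.0},
        };

        public static void main(String[] args) {
            int[] freeSlots = {1, 1, 1};                 // memory capacity per node
            boolean[][] placed = new boolean[3][3];
            while (true) {
                int bn = -1, bi = -1; double best = 0;
                for (int n = 0; n < 3; n++)
                    for (int i = 0; i < 3; i++)
                        if (freeSlots[n] > 0 && !placed[n][i] && benefit[n][i] > best) {
                            best = benefit[n][i]; bn = n; bi = i;
                        }
                if (bn < 0) break;                       // no beneficial placement left
                placed[bn][bi] = true; freeSlots[bn]--;
                System.out.println("cache item " + bi + " at node " + bn
                        + " (benefit " + best + ")");
            }
        }
    }
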
The frequent attacks on network infrastructure, using various forms of denial of service (DoS) attacks and worms, have
led to an increased need for developing techniques for analyzing and monitoring network traffic. If efficient analysis tools
were available, it could become possible to detect the attacks, anomalies and take action to suppress them before they
have had much time to propagate across the network. In this paper, we study the possibilities of traffic-analysis based
mechanisms for attack and anomaly detection. The motivation for this work came from a need to reduce the likelihood
that an attacker may hijack the campus machines to stage an attack on a third party. A campus may want to prevent or
limit misuse of its machines in staging attacks, and possibly limit the liability from such attacks. In particular, we study
the utility of observing packet header data of outgoing traffic, such as destination addresses, port numbers and the
number of flows, in order to detect attacks/anomalies originating from the campus at the edge of a campus. Detecting
anomalies/attacks close to the source allows us to limit the potential damage close to the attacking machines. Traffic
monitoring close to the source may enable the network operator quicker identification of potential anomalies and allow
better control of administrative domain’s resources. Attack propagation could be slowed through early detection. Our
approach passively monitors network traffic at regular intervals and analyzes it to find any abnormalities in the
aggregated traffic. By observing the traffic and correlating it to previous states of traffic, it may be possible to see whether
the current traffic is behaving in a similar (i.e., correlated) manner. The network traffic could look different because of
flash crowds, changing access patterns, infrastructure problems such as router failures, and DoS attacks.


An efficient and distributed scheme for file mapping or file lookup is critical in decentralizing metadata management
within a group of metadata servers. Here, a technique called Hierarchical Bloom Filter Arrays (HBA) is used to map
filenames to the metadata servers holding their metadata. Bloom filter arrays with different levels of accuracy are used
on each metadata server. The first, with low accuracy, captures the destination metadata server information of
frequently accessed files; the other array maintains the destination metadata information of all
files. Simulation results show our HBA design to be highly effective and efficient in improving the performance and
scalability of file systems in clusters with 1,000 to 10,000 nodes (or super clusters) and with the amount of data in the
petabyte scale or higher. HBA reduces metadata operations by using a single metadata architecture instead of 16
metadata servers.
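
The two-level lookup can be sketched as follows (our own illustration; HBA's array sizing, hash functions, and
replication details differ): a small, low-accuracy filter for frequently accessed files is consulted before the large
filter that covers all files on a metadata server.

    import java.util.BitSet;

    // Illustrative sketch: two Bloom filters per metadata server, queried in order.
    public class HbaLookup {
        static class BloomFilter {
            final BitSet bits; final int size, hashes;
            BloomFilter(int size, int hashes) {
                this.bits = new BitSet(size); this.size = size; this.hashes = hashes;
            }
            void add(String key) { for (int i = 0; i < hashes; i++) bits.set(slot(key, i)); }
            boolean mightContain(String key) {          // false positives possible, no false negatives
                for (int i = 0; i < hashes; i++) if (!bits.get(slot(key, i))) return false;
                return true;
            }
            int slot(String key, int i) {
                return Math.floorMod(key.hashCode() * 31 + i * 0x9E3779B9, size);
            }
        }

        public static void main(String[] args) {
            BloomFilter hotFiles = new BloomFilter(1 << 10, 3);  // low accuracy, hot files only
            BloomFilter allFiles = new BloomFilter(1 << 20, 5);  // covers every file here
            hotFiles.add("/home/a/paper.tex");
            allFiles.add("/home/a/paper.tex");
            allFiles.add("/var/log/syslog");
            String name = "/var/log/syslog";
            if (hotFiles.mightContain(name))      System.out.println("hit in hot-file array");
            else if (allFiles.mightContain(name)) System.out.println("hit in full array");
            else                                  System.out.println("not on this metadata server");
        }
    }
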



Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better
coordination of application subsystems compared to federated systems. An integrated architecture shares the system’s
communication resources by using a single physical network for exchanging messages of multiple application
subsystems. Similarly, the computational resources (for example, memory and CPU time) of each node computer are
available to multiple software components. In order to support a seamless system integration without unintended side
effects in such an integrated architecture, it is important to ensure that the software components do not interfere
through the use of these shared resources. For this reason, the DECOS integrated architecture encapsulates application
subsystems and their constituting software components. At the level of the communication system, virtual networks on
top of an underlying time-triggered physical network exhibit predefined temporal properties (that is, bandwidth, latency,
and latency jitter). Due to encapsulation, the temporal properties of messages sent by a software component are
independent from the behavior of other software components, in particular from those within other application
subsystems.


In this project, the efficiency of pairs in program design tasks is examined using the pair programming concept. Pair
programming involves two developers simultaneously collaborating with each other on the same programming task to
design and code a solution. Algorithm design and its implementation are normally merged and it provides feedback to
enhance the design. Previous controlled pair programming experiments did not explore the efficacy of pairs against
individuals in program design-related tasks. Variations in programmer skills in a particular language or an integrated
development environment and the understanding of programming instructions can mask the skill of subjects in program
design-related tasks. Programming aptitude tests (PATs) have been shown to correlate with programming performance.
PATs do not require understanding of programming instructions and do not require a skill in any specific computer
language. We conducted two controlled experiments with full-time professional programmers as the subjects, who
worked on increasingly complex programming aptitude tasks related to problem solving and algorithmic design. In both
experiments, pairs significantly outperformed individuals, providing evidence of the value of pairs in program design-
related tasks.
The Distributed Denial-of-Service (DDoS) attack is a serious threat to the legitimate use of the Internet. Prevention
mechanisms are thwarted by the ability of attackers to forge or spoof the source addresses in IP packets. By employing
IP spoofing, attackers can evade detection and put a substantial burden on the destination network for policing attack
packets. In this paper, we propose an inter-domain packet filter (IDPF) architecture that can mitigate the level of IP
spoofing on the Internet. A key feature of our scheme is that it does not require global routing information. IDPFs are
constructed from the information implicit in Border Gateway Protocol (BGP) route updates and are deployed in network
border routers. We establish the conditions under which the IDPF framework correctly works in that it does not discard
packets with valid source addresses. Based on extensive simulation studies, we show that, even with partial deployment
on the Internet, IDPFs can proactively limit the spoofing capability of attackers. In addition, they can help localize the
origin of an attack packet to a small number of candidate networks.
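
The filtering decision can be sketched as follows (our own illustration; how the feasible sets are derived from BGP
route updates is the substance of the paper and is elided here, and the neighbor and prefix names are invented):

    import java.util.*;

    // Illustrative sketch: per source prefix, the set of neighbors through
    // which packets with that source could legitimately arrive, learned from
    // BGP updates; packets from an infeasible neighbor are discarded.
    public class InterDomainPacketFilter {
        static Map<String, Set<String>> feasibleNeighbors = new HashMap<>();

        static boolean accept(String srcPrefix, String arrivalNeighbor) {
            Set<String> ok = feasibleNeighbors.get(srcPrefix);
            // unknown prefixes are not filtered, so valid packets are never dropped
            return ok == null || ok.contains(arrivalNeighbor);
        }

        public static void main(String[] args) {
            feasibleNeighbors.put("10.1.0.0/16",
                    new HashSet<>(Arrays.asList("AS7018", "AS3356")));
            System.out.println(accept("10.1.0.0/16", "AS3356"));  // true: feasible path
            System.out.println(accept("10.1.0.0/16", "AS9121"));  // false: likely spoofed
        }
    }

The null check mirrors the paper's correctness condition: the filter only ever drops packets whose source could not
have arrived through the observed neighbor, so packets with valid source addresses pass.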



This paper proposes a data-hiding technique for binary images in morphological transform domain for authentication
purpose. To achieve blind watermark extraction, it is difficult to use the detail coefficients directly as a location map to
determine the data-hiding locations. Hence, we view flipping an edge pixel in binary images as shifting the edge location
one pixel horizontally and vertically. Based on this observation, we propose an interlaced morphological binary wavelet
transform to track the shifted edges, which thus facilitates blind watermark extraction and incorporation of
cryptographic signature. Unlike existing block-based approaches, in which the block size is constrained to 3×3 pixels
or larger, we process an image in 2×2 pixel blocks. This allows flexibility in tracking the edges and also achieves low
computational complexity. Two processing cases, in which flipping the candidates of one does not affect the flippability
conditions of the other, are employed for orthogonal embedding, so that more suitable candidates can be identified
such that a larger capacity can be achieved. A novel effective Backward-Forward Minimization method is proposed,
which considers both backwardly those neighboring processed embeddable candidates and forwardly those unprocessed
flippable candidates that may be affected by flipping the current pixel. In this way, the total visual distortion can be
minimized. Experimental results demonstrate the validity of our arguments.

Malicious users can exploit the correlation among data to infer sensitive information from a series of seemingly
innocuous data accesses. Thus, we develop an inference violation detection system to protect sensitive data content.
Based on data dependency, database schema, and semantic knowledge, we construct a semantic inference model (SIM)
that represents the possible inference channels from any attribute to the pre-assigned sensitive attributes. The SIM is
then instantiated to a semantic inference graph (SIG) for query-time inference violation detection. For a single user case,
when a user poses a query, the detection system will examine his/her past query log and calculate the probability of
inferring sensitive information. The query request will be denied if the inference probability exceeds the prespecified
threshold. For multi-user cases, the users may share their query answers to increase the inference probability.
Therefore, we develop a model to evaluate collaborative inference based on the query sequences of collaborators and
their task-sensitive collaboration levels. Experimental studies reveal that information authoritativeness, communication
fidelity and honesty in collaboration are three key factors that affect the level of achievable collaboration. An example is
given to illustrate the use of the proposed technique to prevent multiple collaborative users from deriving sensitive
information via inference.



An accurate prediction of the number of defects in a software product during system testing contributes not only to the
management of the system testing process but also to the estimation of the product’s required maintenance. Here, a new
approach, called Estimation of Defects based on Defect Decay Model (ED3M), is presented that computes an estimate of the
defects in an ongoing testing process. ED3M is based on estimation theory. Unlike many existing approaches, the
technique presented here does not depend on historical data from previous projects or any assumptions about the
requirements and/or testers’ productivity. It is a completely automated approach that relies only on the data collected
during an ongoing testing process. This is a key advantage of the ED3M approach as it makes it widely applicable in
different testing environments. Here, the ED3M approach has been evaluated using five data sets from large industrial
projects and two data sets from the literature. In addition, a performance analysis has been conducted using simulated
data sets to explore its behavior using different models for the input data. The results are very promising; they indicate
the ED3M approach provides accurate estimates with as fast or better convergence time in comparison to well-known
alternative techniques, while only using defect data as the input.
Active learning methods have been considered with increased interest in the statistical learning community. Initially
developed within a classification framework, many extensions are now being proposed to handle multimedia
applications. This paper provides algorithms within a statistical framework to extend active learning for online content-
based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful
classification techniques in this information retrieval context. Focusing on interactive methods, active learning strategy is
then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection
process RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy
carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error
to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a
batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets
of online images (query concept). Experiments on large databases show that the RETIN method performs well in
comparison to several other active strategies.



We propose several localized sensor area coverage protocols for heterogeneous sensors, each with arbitrary sensing and
transmission radii. Each sensor has a timeout period and listens to messages sent by neighboring nodes before the time-
out expires. Sensor nodes whose sensing area is not fully covered (or fully covered but with a disconnected set of active
sensors) when the deadline expires decide to remain active for the considered round and transmit an activity message
announcing it. In our approach, a sensor decides to sleep only if its sensing area is covered by active neighboring
sensors. Covered nodes
decide to sleep, with or without transmitting a withdrawal message to inform neighbors about the status. After hearing
from more neighbors, inactive sensors may observe that they became covered and may decide to alter their original
decision and transmit a retreat message.



Data mining techniques have been widely used in various applications. One of the most important data mining
applications is association rule mining. In Apriori-based association rule mining in hardware, one has to load candidate
item sets and a database into the hardware. Since the capacity of the hardware architecture is fixed, if the number of
candidate item sets or the number of items in the database is larger than the hardware capacity, the items are loaded
into the hardware separately. The time complexity of those steps that need to load candidate item sets or database items
into the hardware is in proportion to the number of candidate item sets multiplied by the number of items in the
database. Too many candidate item sets and a large database would create a performance bottleneck. In this paper, we
propose a HAsh-based and PiPelIned (abbreviated as HAPPI) architecture for hardware enhanced association rule
mining, which effectively reduces the frequency of loading the database into the hardware. HAPPI thus solves the
bottleneck problem in Apriori-based hardware schemes.
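
A software analogue of the hash-based filtering idea can be sketched as follows (our own illustration; HAPPI itself is
a hardware pipeline, and the bucket count and transactions here are invented): candidate pairs are hashed into buckets
while streaming the database, and a candidate can be frequent only if its bucket count reaches the support threshold,
which prunes candidates without reloading them.

    import java.util.*;

    // Illustrative sketch: hash-bucket pruning of candidate 2-itemsets
    // (items within a transaction are assumed consistently ordered).
    public class HashFilteredApriori {
        static final int BUCKETS = 8;

        static int bucket(String a, String b) {
            return Math.floorMod((a + "," + b).hashCode(), BUCKETS);
        }

        static boolean canBeFrequent(String a, String b, int[] counts, int minSupport) {
            // necessary (not sufficient) condition for the pair to be frequent
            return counts[bucket(a, b)] >= minSupport;
        }

        public static void main(String[] args) {
            List<List<String>> db = Arrays.asList(
                Arrays.asList("milk", "bread"),
                Arrays.asList("milk", "bread", "eggs"),
                Arrays.asList("bread", "eggs"),
                Arrays.asList("milk", "eggs"));
            int[] bucketCount = new int[BUCKETS];
            for (List<String> t : db)                    // one streaming pass
                for (int i = 0; i < t.size(); i++)
                    for (int j = i + 1; j < t.size(); j++)
                        bucketCount[bucket(t.get(i), t.get(j))]++;
            System.out.println(canBeFrequent("milk", "bread", bucketCount, 2));
        }
    }
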




Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete
path from the source to the destination. There are many real networks that follow this model, for example, wildlife
tracking sensor networks, military networks, vehicular ad hoc networks, etc. In this context, conventional routing
schemes fail, because they try to establish complete end-to-end paths, before any data is sent. To deal with such
networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high
probability of delivery, they waste a lot of energy and suffer from severe contention which can significantly degrade their
performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued
by large delays. With this in mind, we introduce a new family of routing schemes that "spray" a few message copies into
the network, and then route each copy independently towards the destination. We show that, if carefully designed, spray
routing not only performs significantly fewer transmissions per message, but also has lower average delivery delays
than existing schemes.



Sports video annotation is important for sports video semantic analysis such as event detection and personalization. We
propose a novel approach for sports video semantic annotation and personalized retrieval. Different from the state-of-
the-art sports video analysis methods, which heavily rely on audio/visual features, the proposed approach incorporates web-
casting text into sports video analysis. Compared with previous approaches, the contributions of our approach include
the following. 1) The event detection accuracy is significantly improved due to the incorporation of web-casting text
analysis. 2) The proposed approach is able to detect exact event boundary and extract event semantics that are very
difficult or impossible to be handled by previous approaches. 3) The proposed method is able to create personalized
summary from both general and specific point of view related to particular game, event, player or team according to
user’s preference. We present the framework of our approach and details of text analysis, video analysis, text/video
alignment, and personalized retrieval. The experimental results on event boundary detection in sports video are
encouraging and comparable to the manually selected events. The evaluation on personalized retrieval is effective in
helping meet users’ expectations.



Computing constrained shortest paths is fundamental to some important network functions such as QoS routing, MPLS
path selection, ATM circuit routing, and traffic engineering. The problem is to find the cheapest path that satisfies
certain constraints. In particular, finding the cheapest delay-constrained path is critical for real-time data flows such as
voice/video calls. Because it is NP-complete, much research has focused on designing heuristic algorithms that solve the
ε-approximation of the problem with an adjustable accuracy. A common approach is to discretize (i.e., scale and round)
the link delay or link cost, which transforms the original problem to a simpler one solvable in polynomial time. The
efficiency of the algorithms directly relates to the magnitude of the errors introduced during discretization. In this paper,
we propose two techniques that reduce the discretization errors, which allow faster algorithms to be designed. Reducing
the overhead of computing constrained shortest paths is practically important for the successful design of a high-
throughput QoS router, which is limited at both processing power and memory space. Our simulations show that the
new algorithms reduce the execution time by an order of magnitude on power-law topologies with 1000 nodes.
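
The scale-and-round step can be sketched as follows (our own illustration with invented numbers): each link delay is
mapped to an integer number of units against the delay bound, so a pseudo-polynomial dynamic program over (node,
residual budget) becomes possible; rounding up keeps accepted paths feasible at the cost of rejecting some feasible
ones, which is exactly the discretization error the paper works to reduce.

    // Illustrative sketch: discretizing link delays against a delay bound.
    public class DelayDiscretization {
        public static void main(String[] args) {
            double delayBound = 100.0;   // ms, end-to-end constraint
            int units = 16;              // discretization granularity (speed/error knob)
            double[] linkDelays = {12.3, 40.7, 33.1};
            int budget = units;
            for (double d : linkDelays) {
                // round up: a path accepted after rounding respects the real bound
                int scaled = (int) Math.ceil(d * units / delayBound);
                budget -= scaled;
                System.out.println("link delay " + d + " -> " + scaled
                        + " units, budget left " + budget);
            }
            System.out.println(budget >= 0 ? "path feasible after rounding"
                                           : "path rejected");
        }
    }
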




Proving ownership rights on outsourced relational databases is a crucial issue in today's Internet-based application
environments and in many content distribution applications. In this paper, we present a mechanism for proof of
ownership based on the secure embedding of a robust imperceptible watermark in relational data. We formulate the
watermarking of relational databases as a constrained optimization problem and discuss efficient techniques to solve
the optimization problem and to handle the constraints. Our watermarking technique is resilient to watermark
synchronization errors because it uses a partitioning approach that does not require marker tuples. Our approach
overcomes a major weakness in previously proposed watermarking techniques. Watermark decoding is based on a
threshold-based technique characterized by an optimal threshold that minimizes the probability of decoding errors. We
implemented a proof of concept of our watermarking technique and showed by experimental results that our technique
is resilient to tuple deletion, alteration, and insertion attacks.



We present an approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we
call randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the
checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum
cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker
to easily create messages that collide with legitimate messages.
Networks employ link protection to achieve fast recovery from link failures. While the first link failure can be protected
using link protection, there are several alternatives for protecting against the second failure. This paper formally
classifies the approaches to dual-link failure resiliency. One of the strategies to recover from dual-link failures is to
employ link protection for the two failed links independently, which requires that two links may not use each other in
their backup paths if they may fail simultaneously. Such a requirement is referred to as backup link mutual exclusion
(BLME) constraint and the problem of identifying a backup path for every link that satisfies the above requirement is
referred to as the BLME problem. This paper develops the necessary theory to establish the sufficient conditions for
existence of a solution to the BLME problem. Solution methodologies for the BLME problem are developed using two
approaches by: 1) formulating the backup path selection as an integer linear program; 2) developing a polynomial time
heuristic based on minimum cost path routing. The ILP formulation and heuristic are applied to six networks and their
performance is compared with approaches that assume precise knowledge of dual-link failure. It is observed that a
solution exists for all of the six networks considered. The heuristic approach is shown to obtain feasible solutions that
are resilient to most dual-link failures, although the backup path lengths may be significantly higher than optimal. In
addition, the paper illustrates the significance of the knowledge of the failure location.


The World Wide Web has become the most important information source for most of us. Unfortunately, there is no
guarantee for the correctness of information on the Web. Moreover, different websites often provide conflicting
information on a subject, such as different specifications for the same product. In this paper, we propose a new problem,
called Veracity, i.e., conformity to truth, which studies how to find true facts from a large amount of conflicting
information on many subjects that is provided by various websites. We design a general framework for the Veracity
problem and invent an algorithm, called TRUTHFINDER, which utilizes the relationships between websites and their
information, i.e., a website is trustworthy if it provides many pieces of true information, and a piece of information is
likely to be true if it is provided by many trustworthy websites. An iterative method is used to infer the trustworthiness of
websites and the correctness of information from each other. Our experiments show that TRUTHFINDER successfully
finds true facts among conflicting information and identifies trustworthy websites better than the popular search
engines.
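
The mutual reinforcement between website trustworthiness and fact confidence can be sketched as follows (a
simplified update rule of our own; TRUTHFINDER's actual formulas also model implications between facts): a fact's
confidence aggregates the trust of the sites providing it, and a site's trust is the average confidence of its facts.

    import java.util.*;

    // Illustrative sketch: iterative trust/confidence propagation.
    public class TruthFinderSketch {
        public static void main(String[] args) {
            // sites A, B claim fact 0; site C claims the conflicting fact 1
            int[][] claims = { {0}, {0}, {1} };
            double[] trust = {0.5, 0.5, 0.5};
            double[] conf = new double[2];
            for (int iter = 0; iter < 10; iter++) {
                Arrays.fill(conf, 0);
                for (int s = 0; s < claims.length; s++)
                    for (int f : claims[s])
                        conf[f] += trust[s];                 // facts backed by trusted sites
                for (int f = 0; f < conf.length; f++)
                    conf[f] = 1 - Math.exp(-conf[f]);        // dampen into (0, 1)
                for (int s = 0; s < claims.length; s++) {
                    double sum = 0;
                    for (int f : claims[s]) sum += conf[f];
                    trust[s] = sum / claims[s].length;       // trust = avg fact confidence
                }
            }
            System.out.printf("conf(fact0)=%.3f conf(fact1)=%.3f%n", conf[0], conf[1]);
        }
    }

After a few iterations the fact backed by two mutually reinforcing sites dominates its conflicting alternative, which
is the qualitative behavior the paper's iterative method exhibits.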



A regeneration-theory approach is undertaken to analytically characterize the average overall completion time in a
distributed system. The approach considers the heterogeneity in the processing rates of the nodes as well as the
randomness in the delays imposed by the communication medium. The optimal one-shot load balancing policy is
developed and subsequently extended to develop an autonomous and distributed load-balancing policy that can
dynamically reallocate incoming external loads at each node. This adaptive and dynamic load balancing policy is
implemented and evaluated in a two-node distributed system. The performance of the proposed dynamic load-balancing
policy is compared to that of static policies as well as existing dynamic load-balancing policies by considering the average
completion time per task and the system processing rate in the presence of random arrivals of the external loads.

A new semi-fragile method for embedding watermark data into gray-scale images has been proposed. In the proposed
scheme, the cover image is partitioned into non-overlapping blocks of size 8 x 8 pixels. The discrete cosine transform of
each block is computed. Each block is scaled and a quantization function is used to construct the watermark bit from
each block. The image is reconstructed by computing the inverse cosine transform. The gray threshold value of the
image in the spatial domain is computed. The watermark is embedded in all pixels whose intensity is less than the gray
threshold. The proposed scheme has been designed as public watermarking; it does not require the original image to
verify its integrity. Experimental results prove the efficiency of the scheme. The proposed scheme can be used to
authenticate digital documents of high significance.



We propose a new fair scheduling technique, called OCGRR (Output Controlled Grant-based Round Robin), for
the support of DiffServ traffic in a core router. We define a stream to be the same-class packets from a given immediate
upstream router destined to an output port of the core router. At each output port, streams may be isolated in separate
buffers before being scheduled in a frame. The sequence of traffic transmission in a frame starts from higher-priority
traffic and goes down to lower-priority traffic. A frame may have a number of small rounds for each class. Each stream
within a class can transmit a number of packets in the frame based on its available grant, but only one packet per small
round, thus reducing the intertransmission time from the same stream and achieving a smaller jitter and startup
latency. The grant can be adjusted in a way to prevent the starvation of lower priority classes. We also verify and
demonstrate the good performance of our scheduler by simulation and comparison with other algorithms in terms of
queuing delay, jitter, and start-up latency.
The capability of dynamically adapting to distinct runtime conditions is an important issue when designing distributed
systems where negotiated quality of service (QOS) cannot always be delivered between processes. Providing fault
tolerance for such dynamic environments is a challenging task. Considering such a context, this paper proposes an
adaptive programming model for fault-tolerant distributed computing, which provides upper-layer applications with
process state information according to the current system synchrony (or QOS). The underlying system model is hybrid,
composed by a synchronous part (where there are time bounds on processing speed and message delay) and an
asynchronous part (where there is no time bound). However, such a composition can vary over time.

We study routing misbehavior in MANETs (Mobile Ad Hoc Networks) in this paper. In general, routing protocols for
MANETs are designed based on the assumption that all participating nodes are fully cooperative. However, due to the
open structure and scarcely available battery-based energy, node misbehavior may exist. One such routing misbehavior
is that some selfish nodes will participate in the route discovery and maintenance processes but refuse to forward data
packets. In this paper, we propose the 2ACK scheme, which serves as an add-on technique for routing schemes to detect
routing misbehavior and to mitigate its adverse effects. The main idea of the 2ACK scheme is to send two-hop
acknowledgment packets in the opposite direction of the routing path. In order to reduce additional routing overhead,
only a fraction of the received data packets are acknowledged in the 2ACK scheme. Analytical and simulation results are
presented to evaluate the performance of the proposed scheme.
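
The bookkeeping at an observing node might look like the following sketch; the acknowledgment fraction RACK and the misbehavior threshold are hypothetical parameters chosen for illustration, not the paper's values.

```java
import java.util.Random;

// 2ACK bookkeeping sketch: only a fraction RACK of forwarded data packets
// request a two-hop acknowledgment over the reverse path; too few returned
// 2ACKs flags the downstream link pair as misbehaving.
public class TwoAckSketch {
    static final double RACK = 0.2;            // fraction of packets 2ACK-ed
    static final double MISBEHAVE_RATIO = 0.5; // hypothetical threshold
    final Random rng = new Random();
    int expected2Acks = 0, received2Acks = 0;

    // Called when this node forwards a data packet to the next hop;
    // returns whether this packet was marked as requiring a 2ACK.
    boolean forwardData(String pkt) {
        boolean wantAck = rng.nextDouble() < RACK;
        if (wantAck) expected2Acks++;  // the next-next hop should 2ACK this one
        return wantAck;
    }

    // Called when a 2ACK arrives back over the reverse two-hop path.
    void on2Ack() { received2Acks++; }

    // Declare misbehavior if too small a share of expected 2ACKs came back.
    boolean linkMisbehaving() {
        return expected2Acks > 0
            && (double) received2Acks / expected2Acks < MISBEHAVE_RATIO;
    }
}
```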


This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system
(HIDS). This hybrid system combines the advantages of low false-positive rate of signature-based intrusion detection
system (IDS) and the ability of anomaly detection system (ADS) to detect novel unknown attacks. By mining anomalous
traffic episodes from Internet connections, we build an ADS that detects anomalies beyond the capabilities of signature-
based SNORT or Bro systems. A weighted signature generation scheme is developed to integrate ADS with SNORT by
extracting signatures from anomalies detected. HIDS extracts signatures from the output of ADS and adds them into the
SNORT signature database for fast and accurate intrusion detection. By testing our HIDS scheme over real-life Internet
trace data mixed with 10 days of Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) attack data set,
our experimental results show a 60 percent detection rate of the HIDS, compared with 30 percent and 22 percent in
using the SNORT and Bro systems, respectively. This sharp increase in detection rate is obtained with less than 3
percent false alarms. The signatures generated by ADS upgrade the SNORT performance by 33 percent. The HIDS
approach proves the vitality of detecting intrusions and anomalies, simultaneously, by automated data mining and
signature generation over Internet connection episodes.



The emerging Peer-to-Peer (P2P) model has become a very powerful and attractive paradigm for developing Internet-scale
systems for sharing resources, including files and documents. The distributed nature of these systems, where nodes are
typically located across different networks and domains, inherently hinders the efficient retrieval of information. In this
paper, we consider the effects of topologically aware overlay construction techniques on efficient P2P keyword search
algorithms. We present the Peer Fusion (pFusion) architecture that aims to efficiently integrate heterogeneous
information that is geographically scattered on peers of different networks. Our approach builds on work in unstructured
P2P systems and uses only local knowledge. Our empirical results, using the pFusion middleware architecture and data
sets from Akamai’s Internet mapping infrastructure (AKAMAI), the Active Measurement Project (NLANR), and the Text
Retrieval Conference (TREC) show that the architecture we propose is both efficient and practical.




This paper investigates whether and when route reservation-based (RB) communication can yield better delay
performance than non-reservation-based (NRB) communication in ad hoc wireless networks. In addition to posing this
fundamental question, the requirements (in terms of route discovery, medium access control (MAC) protocol,
pipelining, etc.) for making RB switching superior to NRB switching are also identified. A novel analytical framework is
developed and the network performance under both RB and NRB schemes is quantified. It is shown that if the
aforementioned requirements are met, then RB schemes can indeed yield better delay performance than NRB schemes.
This advantage, however, comes at the expense of lower throughput and goodput compared to NRB schemes.

On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become
stale. To address the cache staleness issue, prior work in DSR used heuristics with ad hoc parameters to predict the
lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are
unpredictable. In this paper, we propose proactively disseminating the broken link information to the nodes that have
that link in their caches. We define a new cache structure called a cache table and present a distributed cache update
algorithm. Each node maintains in its cache table the information necessary for cache updates. When a link failure is
detected, the algorithm notifies all reachable nodes that have cached the link in a distributed manner. The algorithm
does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes. We show that the
algorithm outperforms DSR with path caches and with Link-MaxLife, an adaptive timeout mechanism for link caches.
We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility.
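
A sketch of the cache-table bookkeeping under these assumptions: for every cached link we remember which downstream neighbors also cached it through us, so a broken-link notification can be propagated without any ad hoc timeout parameters. Node ids and the Link type are illustrative, not the paper's exact structures.

```java
import java.util.*;

// Cache-table sketch: link -> neighbors to notify when that link breaks.
public class CacheTableSketch {
    record Link(String from, String to) {}

    final Map<Link, Set<String>> cacheTable = new HashMap<>();

    // Record that `neighbor` learned a route containing `link` through us.
    void rememberConsumer(Link link, String neighbor) {
        cacheTable.computeIfAbsent(link, k -> new HashSet<>()).add(neighbor);
    }

    // On detecting a failure, notify every reachable node that cached the link.
    void onLinkFailure(Link broken) {
        for (String neighbor : cacheTable.getOrDefault(broken, Set.of()))
            sendNotification(neighbor, broken);
        cacheTable.remove(broken);  // purge the stale entry locally too
    }

    void sendNotification(String node, Link l) {
        System.out.println("notify " + node + ": " + l.from() + "-" + l.to() + " broken");
    }
}
```
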
An integrated methodology for the detection and removal of cracks on digitized paintings is presented in this project. The
cracks are detected by thresholding the output of the morphological top-hat transform. Afterward, the thin dark brush
strokes which have been misidentified as cracks are removed using either a median radial basis function neural network
on hue and saturation data or a semi-automatic procedure based on region growing. Finally, crack filling using order
statistics filters or controlled anisotropic diffusion is performed. The methodology has been shown to perform very well
on digitized paintings suffering from cracks.
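
The detection step can be sketched as a black top-hat (morphological closing minus the original image), which highlights thin dark structures such as cracks, followed by thresholding; the 3x3 structuring element and threshold T are illustrative choices.

```java
// Black top-hat crack detection sketch on a grayscale image (0..255).
public class CrackDetectSketch {
    static int[][] dilate(int[][] img) { return morph(img, true); }
    static int[][] erode(int[][] img)  { return morph(img, false); }

    // 3x3 max (dilation) or min (erosion) with edge clamping.
    static int[][] morph(int[][] img, boolean max) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int best = max ? 0 : 255;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = Math.min(h - 1, Math.max(0, y + dy));
                        int xx = Math.min(w - 1, Math.max(0, x + dx));
                        best = max ? Math.max(best, img[yy][xx])
                                   : Math.min(best, img[yy][xx]);
                    }
                out[y][x] = best;
            }
        return out;
    }

    // closing = erode(dilate(img)); black top-hat = closing - img.
    static boolean[][] detectCracks(int[][] img, int T) {
        int[][] closed = erode(dilate(img));
        boolean[][] mask = new boolean[img.length][img[0].length];
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[0].length; x++)
                mask[y][x] = closed[y][x] - img[y][x] > T;
        return mask;
    }
}
```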



A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two
stages. The first stage computes a fuzzy derivative for eight different directions. The second stage uses these fuzzy
derivatives to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Both stages are based
on fuzzy rules which make use of membership functions. The filter can be applied iteratively to effectively reduce heavy
noise. In particular, the shape of the membership functions is adapted according to the remaining noise level after each
iteration, making use of the distribution of the homogeneity in the image. A statistical model for the noise distribution
can be incorporated to relate the homogeneity to the adaptation scheme of the membership functions. Experimental
results are obtained to show the feasibility of the proposed approach. These results are also compared to other filters by
numerical measures and visual inspection.
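
A sketch of the two stages per pixel, with a triangular membership function and parameter K standing in for the paper's fuzzy rule base; the per-iteration adaptation of K from the homogeneity distribution is omitted.

```java
// Two-stage fuzzy filter sketch: directional derivatives (stage 1) weight
// neighbor contributions to the smoothed value (stage 2).
public class FuzzyFilterSketch {
    static final int[][] DIRS =
        {{-1,-1},{-1,0},{-1,1},{0,-1},{0,1},{1,-1},{1,0},{1,1}};

    // Membership "small": 1 at |d| = 0, linearly down to 0 at |d| = K.
    static double small(double d, double K) {
        return Math.max(0.0, 1.0 - Math.abs(d) / K);
    }

    static int filterPixel(int[][] img, int y, int x, double K) {
        double num = 0, den = 0;
        for (int[] dir : DIRS) {
            int yy = y + dir[0], xx = x + dir[1];
            if (yy < 0 || xx < 0 || yy >= img.length || xx >= img[0].length) continue;
            double deriv = img[yy][xx] - img[y][x]; // stage 1: fuzzy derivative input
            double w = small(deriv, K);             // degree of "no edge here"
            num += w * img[yy][xx];                 // stage 2: weighted smoothing
            den += w;
        }
        return den == 0 ? img[y][x] : (int) Math.round(num / den);
    }
}
```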



An ad hoc network is a self-organized collection of mobile nodes without any centralized access point; controlling the
topology is therefore difficult, leading to high power consumption and a lack of security when routing packets between
mobile hosts. Authentication is one of the important security requirements of a communication network, yet common
authentication schemes are not applicable in ad hoc networks. In this paper, we propose a secure communication
protocol (SCP) for communication between two nodes in ad hoc networks, achieved by using clustering techniques. We
present a novel secure communication framework for ad hoc networks that provides authentication and confidentiality
when packets are exchanged between hosts within a cluster and between clusters. Cluster head nodes (CHs) execute the
administrative functions and hold the network key used for certification. The CHs perform the major operations of the
SCP framework with the help of the Kerberos authentication service and symmetric-key cryptography, making the
scheme secure, reliable, transparent, and scalable, with low overhead.


Abstract: Face recognition is a fairly controversial subject right now. A system such as this can recognize and track
dangerous criminals and terrorists in a crowd, but some contend that it is an extreme invasion of privacy. The
proponents of large-scale face recognition feel that it is a necessary evil to make our country safer. It could benefit the
visually impaired and allow them to interact more easily with the environment. Also, a computer vision-based
authentication system could be put in place to allow computer access or access to a specific room using face recognition.
Another possible application would be to integrate this technology into an artificial intelligence system for more realistic
interaction with humans. We propose an appearance-based face recognition method called the Laplacianface approach.
By using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Different
from Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) which effectively see only the
Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace
that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the
eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting
from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA,
LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with the
Eigenface and Fisherface methods; experimental results suggest that the proposed approach provides a better
representation and achieves lower error rates in face recognition.
Job scheduling is the key feature of any computing environment and the efficiency of computing depends largely on the
scheduling technique used. Intelligence is the key factor which is lacking in the job scheduling techniques of today.
Genetic algorithms are powerful search techniques based on the mechanisms of natural selection and natural genetics.
Multiple jobs are handled by the scheduler, and the resources a job needs are in remote locations. Here we assume that
the resources a job needs reside at a single location rather than being split over nodes, and that each node holding a
resource runs a fixed number of jobs. The existing algorithms are non-predictive, employing greedy strategies or
variants of them. The efficiency of the job scheduling process would increase if previous experience and genetic
algorithms were used.
In this paper, we propose a model of the scheduling algorithm where the scheduler can learn from previous experiences
and an effective job scheduling is achieved as time progresses.
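
A sketch of the genetic machinery for this setting: a chromosome assigns each job to a node, fitness is the makespan, and tournament selection, one-point crossover, and point mutation evolve the population. Population size, rates, and the cost model are illustrative, not the paper's.

```java
import java.util.Arrays;
import java.util.Random;

// Genetic job-scheduling sketch: minimize the makespan of an assignment.
public class GaSchedulerSketch {
    static final Random RNG = new Random();

    static int makespan(int[] assign, int[] jobCost, int nodes) {
        int[] load = new int[nodes];
        for (int j = 0; j < assign.length; j++) load[assign[j]] += jobCost[j];
        return Arrays.stream(load).max().orElse(0);
    }

    static int[] evolve(int[] jobCost, int nodes, int pop, int gens) {
        int jobs = jobCost.length;
        int[][] p = new int[pop][jobs];
        for (int[] c : p) for (int j = 0; j < jobs; j++) c[j] = RNG.nextInt(nodes);
        for (int g = 0; g < gens; g++) {
            int[][] next = new int[pop][];
            for (int i = 0; i < pop; i++) {
                int[] a = tournament(p, jobCost, nodes);
                int[] b = tournament(p, jobCost, nodes);
                int cut = RNG.nextInt(jobs);               // one-point crossover
                int[] child = new int[jobs];
                for (int j = 0; j < jobs; j++) child[j] = j < cut ? a[j] : b[j];
                if (RNG.nextDouble() < 0.1)                // point mutation
                    child[RNG.nextInt(jobs)] = RNG.nextInt(nodes);
                next[i] = child;
            }
            p = next;
        }
        return Arrays.stream(p)
                .min((x, y) -> makespan(x, jobCost, nodes) - makespan(y, jobCost, nodes))
                .orElseThrow();
    }

    static int[] tournament(int[][] p, int[] cost, int nodes) {
        int[] a = p[RNG.nextInt(p.length)], b = p[RNG.nextInt(p.length)];
        return makespan(a, cost, nodes) <= makespan(b, cost, nodes) ? a : b;
    }
}
```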



The widely used web search engines give different users the same answer set, although the users may have different
preferences. Personalized web search carries out the search for each user according to his or her preferences. In order
to reduce the time wasted on browsing unnecessary documents, this paper presents an intelligent Personal Agent for
Web Search (PAWS). PAWS intelligently utilizes a Self-Organizing Map (SOM) as the user's profile and is therefore
capable of providing a high-quality answer set to the user.

The next-generation mobile network will support terminal mobility, personal mobility, and service provider portability,
making global roaming seamless. A location-independent personal telecommunication number (PTN) scheme is
conducive to implementing such a global mobile system. However, the non-geographic PTNs coupled with the anticipated
large number of mobile users in future mobile networks may introduce very large centralized databases. This
necessitates research into the design and performance of high-throughput database technologies used in mobile systems
to ensure that future systems will be able to carry efficiently the anticipated loads. This paper proposes a scalable,
robust, efficient location database architecture based on the location-independent PTNs. The proposed multitree
database architecture consists of a number of database subsystems, each of which is a three-level tree structure and is
connected to the others only through its root. By exploiting the localized nature of calling and mobility patterns, the
proposed architecture effectively reduces the database loads as well as the signaling traffic incurred by the location
registration and call delivery procedures. In addition, two memory-resident database indices, memory-resident direct file
and T-tree, are proposed for the location databases to further improve their throughput. An analytical model and
numerical results are presented to evaluate the efficiency of the proposed database architecture. Results reveal that the
proposed database architecture for location management can effectively support the anticipated high user density in
next-generation mobile networks.

An approach for filling-in blocks of missing data in wireless image transmission is presented in this paper. When
compression algorithms such as JPEG are used as part of the wireless transmission process, images are first tiled into
blocks of 8 x 8 pixels. When such images are transmitted over fading channels, the effects of noise can destroy entire
blocks of the image. Instead of using common retransmission query protocols, we aim to reconstruct the lost data using
correlation between the lost block and its neighbors. If the lost block contained structure, it is reconstructed using an
image inpainting algorithm, while texture synthesis is used for the textured blocks. The switch between the two
schemes is done in a fully automatic fashion based on the surrounding available blocks. The performance of this method
is tested for various images and combinations of lost blocks. The viability of this method for image compression, in
association with lossy JPEG, is also discussed.



The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion
control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and
unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose
and investigate a novel congestion-avoidance mechanism called network border patrol (NBP). NBP entails the exchange
of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before
they enter the network, thereby preventing congestion within the network. Moreover, NBP is complemented with the
proposed enhanced core-stateless fair queueing (ECSFQ) mechanism, which provides fair bandwidth allocations to
competing flows. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing complexity toward the
edges of the network whenever possible. Simulation results show that NBP effectively eliminates congestion collapse and
that, when combined with ECSFQ, approximately max-min fair bandwidth allocations can be achieved for competing
flows.
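
One way to picture the border feedback loop is the sketch below: the ingress border throttles a flow whose ingress rate persistently exceeds the egress rate reported back for it. The field names and the throttle rule are assumptions for illustration, not NBP's actual protocol messages.

```java
// Per-flow NBP-style feedback sketch at an ingress border router.
public class NbpSketch {
    double ingressRate;   // bytes/s entering the network for this flow
    double egressRate;    // bytes/s reported leaving, via backward feedback
    double allowedRate = Double.MAX_VALUE;

    // Called when the egress border reports how much of the flow got through.
    void onEgressFeedback(double reportedEgressRate) {
        egressRate = reportedEgressRate;
        if (ingressRate > 1.1 * egressRate)   // flow is being dropped inside
            allowedRate = egressRate;         // restrict it to what gets through
        else
            allowedRate = Math.min(allowedRate * 2, Double.MAX_VALUE); // recover
    }

    boolean admitPacket(double currentRate) { return currentRate <= allowedRate; }
}
```
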
This paper presents a steganography method using lossy compressed video which provides a natural way to send a large
amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity
segmentation (BPCS) steganography. In wavelet-based video compression methods such as 3-D set partitioning in
hierarchical trees (SPIHT) algorithm and motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video
are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D
SPIHT-BPCS steganography and motion-JPEG2000-BPCS steganography, which integrate BPCS steganography with 3-D
SPIHT video coding and with motion-JPEG2000 respectively, are presented and tested.
Experimental results show that 3-D SPIHT-BPCS is superior to motion-JPEG2000-BPCS with regard to embedding
performance.
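
The BPCS embedding test itself is simple to sketch: a bit-plane block is replaceable with secret data when its black-and-white border complexity exceeds a threshold (conventionally around 0.3). Applying it in the wavelet domain, as above, only changes where the bit-planes come from.

```java
// BPCS complexity test sketch for an 8x8 binary bit-plane block.
public class BpcsSketch {
    static final int N = 8;

    // complexity = (adjacent 0/1 transitions) / (maximum possible transitions)
    static double complexity(int[][] plane) {
        int borders = 0;
        for (int y = 0; y < N; y++)
            for (int x = 0; x < N; x++) {
                if (x + 1 < N && plane[y][x] != plane[y][x + 1]) borders++;
                if (y + 1 < N && plane[y][x] != plane[y + 1][x]) borders++;
            }
        return borders / (2.0 * N * (N - 1)); // 112 possible borders for 8x8
    }

    // Noise-like regions (high complexity) can be replaced with secret data.
    static boolean embeddable(int[][] plane, double threshold) {
        return complexity(plane) > threshold; // e.g. threshold = 0.3
    }
}
```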



Edge detection is a fundamental tool used in most image processing applications to obtain information from the frames
as a precursor step to feature extraction and object segmentation. This process detects outlines of an object and
boundaries between objects and the background in the image. An edge-detection filter can also be used to improve the
appearance of blurred or anti-aliased image streams. The basic edge-detection operator is a matrix area gradient
operation that determines the level of variance between different pixels. The edge-detection operator is calculated by
forming a matrix centered on a pixel chosen as the center of the matrix area. If the value of this matrix area is above a
given threshold, then the middle pixel is classified as an edge. Examples of gradient-based edge detectors are Roberts,
Prewitt, and Sobel operators. All the gradient-based algorithms have kernel operators that calculate the strength of the
slope in directions that are orthogonal to each other, commonly vertical and horizontal. Later, the contributions of the
different components of the slopes are combined to give the total value of the edge strength. The Prewitt operator
measures two components.
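
As a concrete instance, a Sobel-based detector that combines the two orthogonal kernel responses into a thresholded edge strength might be sketched as follows.

```java
// Gradient-based edge detection sketch using the Sobel kernel pair.
public class SobelSketch {
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    static boolean[][] edges(int[][] img, double threshold) {
        int h = img.length, w = img[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int i = -1; i <= 1; i++)
                    for (int j = -1; j <= 1; j++) {
                        gx += GX[i + 1][j + 1] * img[y + i][x + j];
                        gy += GY[i + 1][j + 1] * img[y + i][x + j];
                    }
                // Combine the orthogonal components into a total edge strength
                // and classify the centre pixel against the threshold.
                out[y][x] = Math.hypot(gx, gy) > threshold;
            }
        return out;
    }
}
```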


Mobile ad hoc networks (MANETs) suffer from high transmission error rates because of the nature of radio
communications. The broadcast operation, as a fundamental service in MANETs, is prone to the broadcast storm
problem if forward nodes are not carefully designated. The objective of reducing the broadcast redundancy while still
providing high delivery ratio for each broadcast packet is a major challenge in a dynamic environment. In this paper, we
propose a simple, reliable broadcast algorithm, called double-covered broadcast (DCB), which takes advantage of
broadcast redundancy to improve the delivery ratio in environments with rather high transmission error rates.
Among 1-hop neighbors of the sender, only selected forward nodes retransmit the broadcast message. Forward nodes are
selected in such a way that (1) the sender’s 2-hop neighbors are covered and (2) the sender’s 1-hop neighbors are either
a forward node, or a non-forward node but covered by at least two forwarding neighbors. The retransmissions of the
forward nodes are received by the sender as confirmation of their receiving the packet. The non-forward 1-hop neighbors
of the sender do not acknowledge the reception of the broadcast. If the sender does not detect all of its forward nodes'
retransmissions, it resends the packet until the maximum number of retries is reached. Simulation results show that the
algorithm provides good performance for broadcast operations in environments with high transmission error rates.
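
A naive greedy sketch of the forward-node selection under the two conditions above; the coverage map is assumed to hold each neighbor's own neighbor set as learned from 2-hop hello messages, and the candidate ordering is arbitrary rather than the paper's heuristic.

```java
import java.util.*;

// DCB forward-node selection sketch: add 1-hop neighbors as forwarders until
// (1) all 2-hop neighbors are covered and (2) every non-forward 1-hop
// neighbor is covered by at least two forward nodes.
public class DcbSketch {
    static Set<String> selectForwarders(Set<String> oneHop, Set<String> twoHop,
                                        Map<String, Set<String>> coverage) {
        Set<String> forwards = new HashSet<>();
        Set<String> uncovered2 = new HashSet<>(twoHop);
        Map<String, Integer> cover1 = new HashMap<>();
        for (String n : oneHop) cover1.put(n, 0);

        for (String cand : oneHop) {
            boolean need1 = cover1.entrySet().stream().anyMatch(
                    e -> !forwards.contains(e.getKey()) && e.getValue() < 2);
            if (uncovered2.isEmpty() && !need1) break; // both conditions met
            forwards.add(cand);
            uncovered2.removeAll(coverage.getOrDefault(cand, Set.of()));
            for (String n : coverage.getOrDefault(cand, Set.of()))
                if (cover1.containsKey(n)) cover1.merge(n, 1, Integer::sum);
        }
        return forwards;
    }
}
```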

				