ABSTRACT: Dotnet PROJECTS

1.       Detecting Application Denial-of-Service Attacks: A Group-Testing-Based Approach (2010)
Application DoS attack, which aims at disrupting application service rather than depleting the network resource, has
emerged as a larger threat to network services, compared to the classic DoS attack. Owing to its high similarity to legitimate
traffic and much lower launching overhead than classic DDoS attack, this new assault type cannot be efficiently detected or
prevented by existing detection solutions. To identify application DoS attack, we propose a novel group testing (GT)-based
approach deployed on back-end servers, which not only offers a theoretical method to obtain short detection delay and low
false positive/negative rate, but also provides an underlying framework against general network attacks. More specifically,
we first extend the classic GT model with size constraints for practical purposes, then redistribute the client service requests to
multiple virtual servers embedded within each back-end server machine, according to specific testing matrices. Based on
this framework, we propose a two-mode detection mechanism using some dynamic thresholds to efficiently identify the
attackers. The focus of this work lies in the detection algorithms proposed and the corresponding theoretical complexity
analysis. We also provide preliminary simulation results regarding the efficiency and practicability of this new scheme.
Further discussion of implementation issues and performance enhancements is also appended to show the scheme's
potential.
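
To make the group-testing idea concrete, below is a minimal C# sketch of non-adaptive decoding, assuming a known 0/1 testing matrix that maps clients to virtual servers (all names are hypothetical, and the paper's two-mode mechanism and dynamic thresholds are not reproduced): a virtual server whose load trips its threshold is a positive test, and any client that never appears in a negative test remains a suspect.

using System.Collections.Generic;

static class GroupTestingDecoder
{
    // matrix[t][c] is true when client c's requests are routed to virtual server t;
    // positive[t] is true when virtual server t shows abnormal load in this round.
    public static List<int> SuspectClients(bool[][] matrix, bool[] positive)
    {
        int clients = matrix[0].Length;
        var suspects = new List<int>();
        for (int c = 0; c < clients; c++)
        {
            bool cleared = false;
            for (int t = 0; t < matrix.Length; t++)
            {
                // A client appearing in any negative (normal) test cannot be an attacker.
                if (matrix[t][c] && !positive[t]) { cleared = true; break; }
            }
            if (!cleared) suspects.Add(c);
        }
        return suspects;
    }
}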

2.       Fast Detection of Replica Node Attacks in Mobile Sensor Networks Using Sequential Analysis (2011)
Due to the unattended nature of wireless sensor networks, an adversary can capture and compromise sensor nodes, generate
replicas of those nodes, and mount a variety of attacks with the replicas he injects into the network. These attacks are
dangerous because they allow the attacker to leverage the compromise of a few nodes to exert control over much of the
network. Several replica node detection schemes in the literature have been proposed to defend against these attacks in static
sensor networks. These approaches rely on fixed sensor locations and hence do not work in mobile sensor networks, where
sensors are expected to move. In this work, we propose a fast and effective mobile replica node detection scheme using the
Sequential Probability Ratio Test. To the best of our knowledge, this is the first work to tackle the problem of replica node
attacks in mobile sensor networks. We show analytically and through simulation experiments that our schemes achieve
effective and robust replica detection capability with reasonable overheads.
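
As an illustration of the sequential test the scheme builds on, here is a compact C# sketch of a Sequential Probability Ratio Test over Bernoulli observations; the observation used here (a speed sample exceeding the system's maximum speed) and the parameter names are assumptions for illustration, not the paper's exact formulation.

using System;

sealed class Sprt
{
    readonly double _logA, _logB, _hit, _miss;
    double _llr;   // accumulated log-likelihood ratio

    // p0: hit probability under H0 (benign); p1: under H1 (replica);
    // alpha/beta: tolerated false positive/negative rates.
    public Sprt(double p0, double p1, double alpha, double beta)
    {
        _logA = Math.Log((1 - beta) / alpha);   // accept H1 at or above this
        _logB = Math.Log(beta / (1 - alpha));   // accept H0 at or below this
        _hit = Math.Log(p1 / p0);
        _miss = Math.Log((1 - p1) / (1 - p0));
    }

    // Returns +1 (decide replica), -1 (decide benign), or 0 (keep sampling).
    public int Observe(bool speedExceeded)
    {
        _llr += speedExceeded ? _hit : _miss;
        if (_llr >= _logA) return +1;
        if (_llr <= _logB) return -1;
        return 0;
    }
}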

3.       Data Integrity Proofs in Cloud Storage (2011)
Cloud computing has been envisioned as the de facto solution to the rising storage costs of IT enterprises. With the high
cost of data storage devices and the rapid rate at which data is being generated, it proves costly for enterprises or
individual users to frequently update their hardware. Apart from reducing storage costs, outsourcing data to the cloud also
helps in reducing maintenance. Cloud storage moves the user's data to large, remotely located data centers over which the
user does not have any control. However, this unique feature of the cloud poses many new security challenges which
need to be clearly understood and resolved. We provide a scheme which gives a proof of data integrity in the cloud which
the customer can employ to check the correctness of his data in the cloud. This proof can be agreed upon by both the cloud
and the customer and can be incorporated in the service-level agreement (SLA).
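
One simple shape such a proof can take, sketched in C# under the assumption of a spot-check design (the paper's actual protocol may differ): before upload the customer keeps keyed tags of a few randomly chosen blocks, and later challenges the cloud to return a block whose tag it can recheck.

using System.Security.Cryptography;

static class IntegritySpotCheck
{
    // Keyed tag the customer computes and stores locally before upload.
    public static byte[] Tag(byte[] key, byte[] block)
    {
        using var hmac = new HMACSHA256(key);
        return hmac.ComputeHash(block);
    }

    // Verify a challenged block returned by the cloud against the stored tag.
    public static bool Verify(byte[] key, byte[] returnedBlock, byte[] storedTag) =>
        CryptographicOperations.FixedTimeEquals(Tag(key, returnedBlock), storedTag);
}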

4.       Bridging Socially-Enhanced Virtual Communities (2011)
Interactions spanning multiple organizations have become an important aspect in today's collaboration landscape.
Organizations create alliances to fulfill strategic objectives. The dynamic nature of collaborations increasingly demands
automated techniques and algorithms to support the creation of such alliances. Our approach is based on recommending
potential alliances by discovering currently relevant competence sources and supporting their semi-automatic formation.
The environment is service-oriented comprising humans and software services with distinct capabilities. To mediate
between previously separated groups and organizations, we introduce the broker concept that bridges disconnected
networks. We present a dynamic broker discovery approach based on interaction mining techniques and trust metrics.

5.       Adaptive Provisioning of Human Expertise in Service-oriented Systems (2011)
Web-based collaborations have become essential in today’s business environments. Due to the availability of various SOA
frameworks, Web services emerged as the de facto technology to realize flexible compositions of services. While most
existing work focuses on the discovery and composition of software based services, we highlight concepts for a people-
centric Web. Knowledge-intensive environments clearly demand the provisioning of human expertise along with the sharing of
computing resources or business data through software-based services. To address these challenges, we introduce an
adaptive approach allowing humans to provide their expertise through services using SOA standards, such as WSDL and
SOAP. The seamless integration of humans in the SOA loop triggers numerous social implications, such as evolving
expertise and drifting interests of human service providers. Here we propose a framework that is based on interaction
monitoring techniques enabling adaptations in SOA-based socio-technical systems.


6.       Optimal service pricing for a cloud cache
Cloud applications that offer data management services are emerging. Such clouds support caching of data in order to
provide quality query services. The users can query the cloud data, paying the price for the infrastructure they use. Cloud
management necessitates an economy that manages the service of multiple users in an efficient but also resource-economic
way that allows for cloud profit. Naturally, the maximization of cloud profit given some guarantees for user satisfaction
presumes an appropriate price-demand model that enables optimal pricing of query services. The model should be plausible
in that it reflects the correlation of cache structures involved in the queries. Optimal pricing is achieved based on a dynamic
pricing scheme that adapts to time changes. This paper proposes a novel price-demand model designed for a cloud cache
and a dynamic pricing scheme for queries executed in the cloud cache. The pricing solution employs a novel method that
estimates the correlations of the cache services in a time-efficient manner. The experimental study shows the efficiency of
the solution.

7.       Automated Certification for Compliant Cloud-based Business Processes
A key problem in the deployment of large-scale, reliable cloud computing concerns the difficulty to certify the compliance
of business processes operating in the cloud. Standard audit procedures such as SAS-70 and SAS-117 are hard to conduct
for cloud based processes. The paper proposes a novel approach to certify the compliance of business processes with
regulatory requirements. The approach translates process models into their corresponding Petri net representations and
checks them against requirements also expressed in this formalism. Being based on Petri nets, the approach provides
well-founded evidence of adherence and, in case of noncompliance, indicates the possible vulnerabilities.
Keywords: Business process models, Cloud computing, Compliance certification, Audit, Petri nets.
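
For readers unfamiliar with the formalism, here is a minimal C# Petri net kernel (a hypothetical representation; the paper's model translation and requirement checking are not shown): a transition may fire only when every input place holds a token, and compliance questions then become questions about reachable markings.

using System;
using System.Collections.Generic;
using System.Linq;

sealed class PetriNet
{
    public Dictionary<string, int> Marking = new();              // tokens per place
    public List<(string[] In, string[] Out)> Transitions = new();

    public bool CanFire(int t) =>
        Transitions[t].In.All(p => Marking.GetValueOrDefault(p) > 0);

    public void Fire(int t)
    {
        if (!CanFire(t)) throw new InvalidOperationException("transition not enabled");
        foreach (var p in Transitions[t].In) Marking[p]--;
        foreach (var p in Transitions[t].Out) Marking[p] = Marking.GetValueOrDefault(p) + 1;
    }
}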

8.       Privacy-Preserving Multi-keyword Ranked Search over Encrypted Cloud Data
With the advent of cloud computing, data owners are motivated to outsource their complex data management systems
from local sites to the commercial public cloud for great flexibility and economic savings. But to protect data privacy,
sensitive data has to be encrypted before outsourcing, which makes traditional data utilization based on plaintext keyword
search obsolete. Thus,
enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and
documents in cloud, it is crucial for the search service to allow multi-keyword query and provide result similarity ranking to
meet the effective data retrieval need. Related works on searchable encryption focus on single keyword search or Boolean
keyword search, and rarely differentiate the search results. In this paper, for the first time, we define and solve the
challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and establish a
set of strict privacy requirements for such a secure cloud data utilization system to become a reality. Among various multi-
keyword semantics, we choose the efficient principle of “coordinate matching”, i.e., as many matches as possible, to capture
the similarity between search query and data documents, and further use “inner product similarity” to quantitatively
formalize such principle for similarity measurement. We first propose a basic MRSE scheme using secure inner product
computation, and then significantly improve it to meet different privacy requirements in two levels of threat models.
Thorough analysis investigating privacy and efficiency guarantees of the proposed schemes is given, and experiments on a
real-world dataset further show that the proposed schemes indeed introduce low overhead on computation and communication.
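
The plaintext intuition behind coordinate matching can be sketched in a few lines of C#: documents and queries become 0/1 keyword vectors and the rank score is their inner product. The secure inner-product computation that hides these vectors from the cloud is the paper's actual contribution and is omitted here.

using System.Linq;

static class CoordinateMatching
{
    // Vectors hold 1 where a dictionary keyword is present; the score is the
    // number of matched keywords (inner product of the two vectors).
    public static int Score(int[] docVector, int[] queryVector) =>
        docVector.Zip(queryVector, (d, q) => d * q).Sum();
}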

9.       Continuous Neighbor Discovery in Asynchronous Sensor Networks
In most sensor networks the nodes are static. Nevertheless, node connectivity is subject to changes because of disruptions in
wireless communication, transmission power changes, or loss of synchronization between neighboring nodes. Hence, even
after a sensor is aware of its immediate neighbors, it must continuously maintain its view, a process we call continuous
neighbor discovery. In this work we distinguish between neighbor discovery during sensor network initialization and
continuous neighbor discovery. We focus on the latter and view it as a joint task of all the nodes in every connected
segment. Each sensor employs a simple protocol in a coordinated effort to reduce power consumption without increasing the
time required to detect hidden sensors.
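
A toy C# simulation of the random-wakeup intuition (the parameter and the independence assumption are illustrative, not the paper's protocol): two neighboring nodes that wake up independently in each slot eventually share an active slot, at which point the hidden link is discovered.

using System;

static class NeighborDiscoverySim
{
    // Returns the number of slots until both nodes are awake simultaneously.
    public static int SlotsUntilDiscovery(double wakeProb, int seed = 1)
    {
        var rng = new Random(seed);
        for (int slot = 1; ; slot++)
        {
            bool aAwake = rng.NextDouble() < wakeProb;
            bool bAwake = rng.NextDouble() < wakeProb;
            if (aAwake && bAwake) return slot;   // both listening: link detected
        }
    }
}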

10.      Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing
Cloud computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the
cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing
resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. Thus, enabling
public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party
to check the integrity of outsourced data when needed. To securely introduce an effective third-party auditor (TPA), the
following two fundamental requirements have to be met: 1) the TPA should be able to efficiently audit the cloud data
storage without demanding a local copy of the data; and 2) the auditing should introduce no additional online burden to
the cloud user. Specifically, our
contribution in this work can be summarized as the following three aspects:
1) We motivate the public auditing system of data storage security in Cloud Computing and provide a privacy-preserving
auditing protocol, i.e., our scheme enables an external auditor to audit a user's outsourced data in the cloud without
learning the data content.

2) To the best of our knowledge, our scheme is the first to support scalable and efficient public auditing in Cloud
Computing. In particular, our scheme achieves batch auditing where multiple delegated auditing tasks from different users
can be performed simultaneously by the TPA.

3) We prove the security and justify the performance of our proposed schemes through concrete experiments and
comparisons with the state-of-the-art.

11.      Multicast multi-path power-efficient routing in mobile ad hoc networks
This paper presents a measurement-based routing algorithm to load-balance intra-domain traffic along
multiple paths for multiple multicast sources. Multiple paths are established using application-layer overlaying. The
proposed algorithm is able to converge under different network models, where each model reflects a different set of
assumptions about the multicasting capabilities of the network. The algorithm is derived from simultaneous perturbation
stochastic approximation and relies only on noisy estimates from measurements. Simulation results are presented to
demonstrate the additional benefits obtained by incrementally increasing the multicasting capabilities. The main application
of mobile ad hoc network is in emergency rescue operations and battlefields. This paper addresses the problem of power
awareness routing to increase lifetime of overall network. Since nodes in mobile ad hoc network can move randomly, the
topology may change arbitrarily and frequently at unpredictable times. Transmission and reception parameters may also
impact the topology. Therefore, it is very difficult to find and maintain an optimal power-aware route. In this work, a
scheme has been proposed to maximize the network lifetime and minimize the power consumption during
source-to-destination route establishment. The proposed work aims to provide efficient power-aware routing for both
real-time and non-real-time data transfer.
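
One common power-aware selection rule, sketched in C# as an assumption (the abstract does not fix the metric): among candidate routes, prefer the route whose weakest node has the most residual energy, which tends to extend overall network lifetime.

using System.Collections.Generic;
using System.Linq;

static class PowerAwareRouting
{
    // Each candidate route is given as the residual energies of its nodes;
    // pick the route maximizing the minimum residual energy (max-min rule).
    public static IList<double> PickRoute(List<IList<double>> candidates) =>
        candidates.OrderByDescending(r => r.Min()).First();
}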

12.      Fuzzy Keyword Search over Encrypted Data in Cloud Computing
As Cloud Computing becomes prevalent, more and more sensitive information is being centralized into the cloud.
Although traditional searchable encryption schemes allow a user to securely search over encrypted data through keywords
and selectively retrieve files of interest, these techniques support only exact keyword search. In this paper, for the first time
we formalize and solve the problem of effective fuzzy keyword search over encrypted cloud data while maintaining
keyword privacy. Fuzzy keyword search greatly enhances system usability by returning the matching files when users’
searching inputs exactly match the predefined keywords or the closest possible matching files based on keyword similarity
semantics, when exact match fails. In our solution, we exploit edit distance to quantify keyword similarity and develop two
advanced techniques on constructing fuzzy keyword sets, which achieve optimized storage and representation overheads.
We further propose a brand-new symbol-based trie-traverse searching scheme, where a multi-way tree structure is built up
using symbols transformed from the resulting fuzzy keyword sets. Through rigorous security analysis, we show that our
proposed solution is secure and privacy-preserving, while correctly realizing the goal of fuzzy keyword search. Extensive
experimental results demonstrate the efficiency of the proposed solution.
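
In the spirit of the wildcard-based construction described above, here is a C# sketch that builds the fuzzy keyword set for edit distance 1 (details assumed): each variant replaces one character position, or one insertion gap, with '*', so a keyword and a query within edit distance 1 share at least one variant, and fuzzy matching reduces to set intersection.

using System.Collections.Generic;

static class FuzzyKeywordSet
{
    public static HashSet<string> Build(string word)
    {
        var set = new HashSet<string> { word };
        for (int i = 0; i <= word.Length; i++)
        {
            set.Add(word.Insert(i, "*"));                  // wildcard in an insertion gap
            if (i < word.Length)
                set.Add(word.Remove(i, 1).Insert(i, "*")); // wildcard replacing a character
        }
        return set;
    }
}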

13.      A heuristic-based approach for detecting SQL-injection vulnerabilities in Web applications.

14.      Next Generation Cloud Computing Architecture (2010)
Cloud computing is fundamentally altering expectations for how and when computing, storage and networking resources
should be allocated, managed and consumed. End-users are increasingly sensitive to the latency of services they consume.
Service Developers want the Service Providers to ensure or provide the capability to dynamically allocate and manage
resources in response to changing demand patterns in real-time. Ultimately, Service Providers are under pressure to architect
their infrastructure to enable real-time end-to-end visibility and dynamic resource management with fine grained control to
reduce total cost of ownership while also improving agility. The current approaches to enabling real-time, dynamic
infrastructure are inadequate, expensive and not scalable to support consumer mass-market requirements. Over time, the
server-centric infrastructure management systems have evolved to become a complex tangle of layered systems designed to
automate systems administration functions that are knowledge and labor intensive. This expensive and non-real time
paradigm is ill suited for a world where customers are demanding communication, collaboration and commerce at the speed
of light. Thanks to hardware assisted virtualization, and the resulting decoupling of infrastructure and application
management, it is now possible to provide dynamic visibility and control of service management to meet the rapidly
growing demand for cloud-based services.

15.      Design of Home Gateway based on Intelligent Network (2008)
Home networking is also called the Digital Home Network. It means that PCs, home entertainment equipment, home
appliances, home wiring, security, and illumination systems communicate with each other over some composite network
technology, constitute an internal home network, and connect to the WAN through a home gateway. It is a new network
and application technology that can provide many kinds of services inside the home or between homes. Currently, home
networking equipment can be divided into three kinds: information equipment, home appliances, and communication
equipment. Equipment inside the home network can exchange information with outer networks through the home
gateway. This communication is bidirectional: by connecting to the public network through the home gateway, the user
can obtain information and services provided by the public network, and can likewise obtain the information and
resources needed to control the home network's internal equipment. Based on the general
network model of home networking, there are four functional entities inside home networking: (1) HA (Home Access),
the home networking access function entity; (2) HB (Home Bridge), the home networking bridge function entity; (3) HC
(Home Client), the home networking client function entity; and (4) HD (Home Device), the decoder function entity. There
are many physical ways to implement these four function entities. Based on them, there are reference models for the
physical layer, the link layer, and the IP layer, and an application reference model for the higher layers. In the future, the
home network should have broadband network functions, public network functions, composite multi-service and
multi-application functions, etc.

16.      RandomCast: An Energy-Efficient Communication Scheme for Mobile Ad Hoc Networks (2009)
In mobile ad hoc networks (MANETs), every node overhears every data transmission occurring in its vicinity and thus
consumes energy unnecessarily. However, since some MANET routing protocols such as Dynamic Source Routing (DSR)
collect route information via overhearing, they would suffer if used in combination with the IEEE 802.11 power-saving
mechanism (PSM). Allowing no overhearing may critically deteriorate the performance of the underlying routing
protocol, while unconditional overhearing may offset the advantage of using PSM.
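
A toy C# sketch of the trade-off being navigated (the probability knob is an illustrative assumption, not the paper's exact mechanism): rather than always or never overhearing, a node overhears each nearby transmission with a tunable probability, balancing routing knowledge against energy.

using System;

sealed class RandomizedOverhearing
{
    readonly Random _rng = new Random();

    // Fraction of overhearing opportunities the node actually takes.
    public double OverhearProbability { get; set; } = 0.3;

    // Called once per overheard-frame opportunity while the node is awake.
    public bool ShouldOverhear() => _rng.NextDouble() < OverhearProbability;
}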

17.      Credit Card Fraud Detection Using Hidden Markov Model (2008)
Nowadays the usage of credit cards has dramatically increased. As the credit card becomes the most popular mode of
payment for both online and regular purchases, cases of fraud associated with it are also rising. In this paper, we model the
sequence of operations in credit card transaction processing using a Hidden Markov Model (HMM) and show how it can be
used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit
card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At
the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show
the effectiveness of our approach and compare it with other techniques available in the literature.
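
A compact C# sketch of the acceptance test described above, assuming an already trained model (initial distribution pi, transition matrix a, emission matrix b over quantized transaction symbols; the structure is illustrative): the forward algorithm gives the sequence probability, and a sharp relative drop when the new transaction's symbol is appended marks it as suspicious.

using System;

static class HmmFraudCheck
{
    // pi[i]: initial state probabilities; a[i][j]: state transitions;
    // b[i][k]: probability state i emits observation symbol k.
    public static double SequenceProbability(double[] pi, double[][] a, double[][] b, int[] obs)
    {
        int n = pi.Length;
        var alpha = new double[n];
        for (int i = 0; i < n; i++) alpha[i] = pi[i] * b[i][obs[0]];
        for (int t = 1; t < obs.Length; t++)
        {
            var next = new double[n];
            for (int j = 0; j < n; j++)
            {
                double s = 0;
                for (int i = 0; i < n; i++) s += alpha[i] * a[i][j];
                next[j] = s * b[j][obs[t]];
            }
            alpha = next;
        }
        double p = 0;
        foreach (double v in alpha) p += v;
        return p;
    }

    // Flag when adding the new symbol cuts the probability by more than the threshold fraction.
    public static bool IsSuspicious(double pWithout, double pWith, double threshold = 0.5) =>
        pWithout > 0 && (pWithout - pWith) / pWithout > threshold;
}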

18.      FARMERS BUDDY (2010)
The objective of this project is to help farmers by providing information on market prices, weather forecasts, tips, and
news through SMS, which is cost-effective. The server maintains a database related to agriculture: market prices, weather
reports (by state), agriculture-related news, various reports on government policies, and tips and suggestions. Information
stored in the database is then sent as SMS to registered farmers to assist them in deciding their next step
or precautions based on the message. The server also responds to farmer requests received as SMS to the maximum
extent possible, but the SMS sent by the farmer must follow a specific format. The database can be updated only by an
authentic user authorized by the administrator.
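
As an illustration of the "specific format" requirement, a small C# sketch of keyword-based parsing the server could apply to incoming SMS; the "PRICE <crop> <market>" style topics here are hypothetical, since the abstract does not specify the format.

using System;

static class SmsRequestParser
{
    // Returns (topic, arguments) for a well-formed request, or null to reject it.
    public static (string Topic, string[] Args)? Parse(string sms)
    {
        var parts = sms.Trim().Split(' ', StringSplitOptions.RemoveEmptyEntries);
        if (parts.Length == 0) return null;
        string topic = parts[0].ToUpperInvariant();
        if (topic is "PRICE" or "WEATHER" or "NEWS" or "TIPS")   // only known topics are served
            return (topic, parts[1..]);
        return null;
    }
}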

19.      A Large-Scale Hidden Semi-Markov Model for Anomaly Detection on User Browsing Behaviors (2009)
Many existing solutions against distributed denial-of-service (DDoS) attacks focus on the Transmission Control Protocol
and Internet Protocol layers rather than the higher layers. A DDoS attack attempts to make a computer resource
unavailable to its intended users, either by forcing the targeted computer(s) to reset or by consuming its resources so that
it can no longer provide its intended service; such lower-layer defenses are not suitable for handling the new type of
attack, which is based on the application layer. In this project, we establish
a new system to achieve early attack discovery and filtering for the application-layer-based DDoS attack. An extended
hidden semi-Markov model is proposed to describe the browsing habits of web searchers. A forward algorithm is derived
for the online implementation of the model based on the M-algorithm in order to reduce the computational amount
introduced by the model's large state space. The entropy of the user's HTTP request sequence, as fitted by the model, is
used as a criterion to measure the user's normality. Finally, experiments are conducted to validate our model and algorithm.
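
A simplified stand-in for the normality measure, sketched in C# (the paper fits a hidden semi-Markov model; this substitute scores a session by its average negative log-likelihood under a first-order page-transition model learned from normal sessions, so high scores mean abnormal browsing).

using System;
using System.Collections.Generic;

static class BrowsingNormality
{
    // trans[(a, b)]: learned probability of requesting page b right after page a.
    public static double AvgNegLogLikelihood(
        Dictionary<(string, string), double> trans, IList<string> pages)
    {
        const double unseen = 1e-6;   // smoothing for transitions never seen in training
        double sum = 0;
        for (int t = 1; t < pages.Count; t++)
        {
            trans.TryGetValue((pages[t - 1], pages[t]), out double p);
            sum += -Math.Log(p > 0 ? p : unseen);
        }
        return sum / Math.Max(1, pages.Count - 1);
    }
}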

20.      BENEFIT-BASED DATA CACHING IN AD HOC NETWORKS (2008)
In the proposed work, caching is used to significantly improve the efficiency of information access in a wireless ad hoc
network by reducing the access latency and bandwidth usage. However, designing efficient distributed caching algorithms
is nontrivial when network nodes have limited memory, and the underlying optimization problem is known to be
NP-hard. Defining benefit as the reduction in total access cost, we present
a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least 1/4 (1/2
for uniform-size data items) of the optimal benefit. We evaluate our distributed algorithm using a network simulator (ns2) and
demonstrate that it significantly outperforms another existing caching technique in all important performance metrics. The
performance differential is particularly large in more challenging scenarios such as higher access frequency and smaller
memory.
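
To convey the flavor of such approximation algorithms, a C# sketch of a greedy placement (an illustration, not the paper's exact algorithm): repeatedly cache the (node, item) pair with the largest benefit per unit size that still fits in the node's remaining memory.

using System.Collections.Generic;
using System.Linq;

static class GreedyCachePlacement
{
    public record Option(int Node, int Item, double Benefit, int Size);

    public static List<Option> Place(List<Option> options, Dictionary<int, int> memoryLeft)
    {
        var chosen = new List<Option>();
        // Greedy by benefit density; the real benefit is marginal and would be
        // re-evaluated as selections accumulate.
        foreach (var o in options.OrderByDescending(o => o.Benefit / o.Size))
            if (memoryLeft[o.Node] >= o.Size)
            {
                memoryLeft[o.Node] -= o.Size;
                chosen.Add(o);
            }
        return chosen;
    }
}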

21.      Information Content-Based Sensor Selection and Transmission Power Adjustment for Collaborative Target
Tracking (2009)
For target tracking applications, wireless sensor nodes provide accurate information since they can be deployed and
operated near the phenomenon. These sensing devices have the opportunity of collaboration among themselves to improve
the target localization and tracking accuracies. An energy-efficient collaborative target tracking paradigm is developed for
wireless sensor networks (WSNs). In addition, a novel approach to energy savings in WSNs is devised in the information-
controlled transmission power (ICTP) adjustment, where nodes with more information use higher transmission powers than
those that are less informative to share their target state information with the neighboring nodes.
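
A C# sketch of the ICTP idea as stated, with a linear mapping assumed for illustration: a node's share of the total information content sets its transmission power, so more-informative nodes reach more neighbors.

static class IctpAdjustment
{
    // Map a node's share of total information content onto [pMin, pMax].
    public static double PowerFor(double nodeInfo, double totalInfo, double pMin, double pMax)
    {
        double share = totalInfo > 0 ? nodeInfo / totalInfo : 0;
        return pMin + share * (pMax - pMin);   // share lies in [0, 1]
    }
}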

22.      Jamming-Aware Traffic Allocation for Multiple-Path Routing Using Portfolio Selection (2011)
Multiple-path source routing protocols allow a data source node to distribute the total traffic among available paths. In this
project, we consider the problem of jamming-aware source routing in which the source node performs traffic allocation
based on empirical jamming statistics at individual network nodes. We formulate this traffic allocation as a lossy network
flow optimization problem using portfolio selection theory from financial statistics. We show that in multi-source networks,
this centralized optimization problem can be solved using a distributed algorithm based on decomposition in network utility
maximization (NUM). We demonstrate the network's ability to estimate the impact of jamming and incorporate these
estimates into the traffic allocation problem. Finally, we simulate the achievable throughput using our proposed traffic
allocation method in several scenarios.
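
A toy mean-variance allocation in the portfolio-selection spirit the paper invokes (independent path statistics and a single risk-aversion knob are simplifying assumptions): paths with higher expected delivery and lower jamming variance receive proportionally more traffic.

using System;
using System.Linq;

static class PortfolioTrafficAllocation
{
    // mean[i]: expected delivery rate of path i; variance[i]: its variance
    // under the observed jamming statistics. Returns traffic fractions.
    public static double[] Allocate(double[] mean, double[] variance, double riskAversion)
    {
        double[] score = mean
            .Select((m, i) => Math.Max(0, m - riskAversion * variance[i]))
            .ToArray();
        double total = score.Sum();
        return score.Select(s => total > 0 ? s / total : 1.0 / score.Length).ToArray();
    }
}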

23.      Data Leakage Detection (2010)
A data distributor has given sensitive data to a set of supposedly trusted agents (third parties). Some of the data are leaked
and found in an unauthorized place (e.g., on the web or somebody’s laptop). The distributor must assess the likelihood that
the leaked data came from one or more agents, as opposed to having been independently gathered by other means. We
propose data allocation strategies (across the agents) that improve the probability of identifying leakages. These methods do
not rely on alterations of the released data (e.g., watermarks). In some cases, we can also inject “realistic but fake” data
records to further improve our chances of detecting leakage and identifying the guilty party.
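
A deliberately simplified guilt score in the spirit of the assessment described (the paper's probabilistic model also accounts for data obtainable by other means; this C# sketch only measures, per agent, the fraction of leaked records that agent had received).

using System.Collections.Generic;
using System.Linq;

static class LeakageGuilt
{
    // agentData: record IDs given to each agent; leaked: record IDs found leaked.
    public static Dictionary<string, double> Scores(
        Dictionary<string, HashSet<int>> agentData, HashSet<int> leaked)
    {
        return agentData.ToDictionary(
            kv => kv.Key,
            kv => leaked.Count == 0
                ? 0.0
                : (double)kv.Value.Intersect(leaked).Count() / leaked.Count);
    }
}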

24.      Enhanced Security for Online Exams Using Group Cryptography (2009)
Online examination is a very popular field that has made many security assurances, yet it still fails to control cheating.
Online exams have not been widely adopted, even though online education is used all over the world without such
security issues. An online exam is defined in this project as one that takes place over the insecure Internet and where no
proctor is in the same location as the examinees. This project proposes an enhanced, secure online exam management
environment mediated by group cryptography techniques, using remote monitoring and control of ports and input. The
target domain of this project is online exams for contests in any subject and at any level of study, as well as exams in
online university courses with students in various remote locations. The solution uses an enhanced Security Control
system in the Online Exam (SeCOnE), which is based on group cryptography.

25.      Host-Based Intrusion Detection and Prevention System
An approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we call
randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the
checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is
that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily
create messages that collide with legitimate messages.
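
A C# sketch of the probabilistic packet marking side of such schemes (the fragment layout and checksum width are illustrative assumptions): each router overwrites a packet's mark with some probability, writing one random fragment of its address plus a checksum cord that lets the victim re-link fragments coming from the same router.

using System;

sealed class PpmRouter
{
    readonly Random _rng = new Random();
    readonly byte[] _addrFragments;   // router address pre-split into fragments
    readonly ushort _checksumCord;    // associative address linking the fragments

    public PpmRouter(byte[] addrFragments, ushort checksumCord)
    {
        _addrFragments = addrFragments;
        _checksumCord = checksumCord;
    }

    // Each packet carries at most one mark: (fragment index, fragment, cord).
    public (int Index, byte Fragment, ushort Cord)? MaybeMark(double markProb)
    {
        if (_rng.NextDouble() >= markProb) return null;
        int i = _rng.Next(_addrFragments.Length);
        return (i, _addrFragments[i], _checksumCord);
    }
}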

26.      DDOS Tracking and Monitoring System Using HMM
The abstract supplied for this project is identical to that of project 19 above.




#56, II Floor, Pushpagiri Complex, 17th Cross, 8th Main, Opp. Water Tank, Vijaynagar, Bangalore-40.
Ph: 080-23208045 / 23207367, 9886173099. Mail: hr@citlindia.com, vinatha@citlindia.com