					  International Conference on Computing and Control Engineering (ICCCE 2012), 12 & 13 April, 2012

Secure Practical Outsourcing and Dependable Storage in Cloud Computing Using Honeypot

R. Narendran#1, S. Varadharajan*2, N. Delhiganesh#3

# Computer Science and Engineering, Surya College of Engineering and Technology, Vikkravandi
* Asst. Prof., Dept. of Computer Science and Engineering, Surya College of Engineering and Technology, Vikkravandi

1 narendran007@in.com
2 varathan2k20@gmail.com
3 inform2delhi@gmail.com
Abstract— Cloud Computing has been envisioned as the next-generation architecture of the IT enterprise. In contrast to traditional solutions, where IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. Third-party outflow of data is the major problem in cloud data storage. In this paper we recommend a more valuable secure storage of data in the cloud. The proposed design allows users to store data in a secure mode with lightweight communication and computation cost. Bearing in mind that cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion and append.

Keywords— dependable distributed storage, third-party outflow, security key, honeypot, intrusion detection

I. INTRODUCTION

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). A cloud service has three discrete characteristics that differentiate it from traditional hosting: it is sold on demand, typically by the minute or the hour; it is elastic, so a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the cloud service provider (the consumer needs nothing but a personal computer and Internet access). The main advantage of the cloud is cost savings; the prime disadvantage is security. Cloud computing is used by many software industries nowadays. Since security is not provided by the cloud itself, many companies adopt their own security structures; Amazon, for example, has its own security structure. Introducing a new and uniform security structure for all types of cloud is the problem we tackle in this paper. Since the data placed in the cloud is available to everyone, security is not assured. Storing the most secret data in the cloud carries even more risk because of third-party leakage of the data stored there. To avoid this problem we introduce a watermarking scheme based on steganography.

Steganography can be defined as security through obscurity: the art and science of communicating in such a way that the presence of a message cannot be detected. Steganography can be used to hide a message intended for later retrieval by a specific individual or group; the aim is to prevent any other party from detecting it. The inserted message can also be used to assert copyright over a document. In watermarking of digital images, one can embed secret text in an encrypted manner, and a secret image more than once in the host image, starting from different pixel positions based on the key. All this can be done without violating any of the desirable properties of standard watermarking algorithms. The key plays an essential role in this approach, determining how the secret information is placed in the host image. Steganography involves hiding a message secretly in an image, audio, video or any other form of digital media, and is a very popular scheme for addressing copyright issues.
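As a concrete illustration of the key-driven embedding described above, the following sketch hides a short message in the least-significant bits of an image, with the secret key deciding which pixel positions carry the payload. It is only a minimal stand-in written for this discussion: the flat pixel list, function names and key handling are illustrative assumptions, not the watermarking algorithm used by the scheme.

import hashlib
import random

def embed_message(pixels, message, key):
    # Hide `message` in the least-significant bits of `pixels`.
    # The secret key seeds a PRNG that selects which pixel positions
    # carry the payload, so the same key is needed for extraction.
    bits = []
    for byte in message.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
    positions = random.Random(seed).sample(range(len(pixels)), len(bits))
    stego = list(pixels)
    for pos, bit in zip(positions, bits):
        stego[pos] = (stego[pos] & ~1) | bit      # overwrite only the LSB
    return stego

def extract_message(stego, length, key):
    # Recover `length` bytes by re-deriving the same key-based positions.
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
    positions = random.Random(seed).sample(range(len(stego)), length * 8)
    bits = [stego[pos] & 1 for pos in positions]
    data = bytearray()
    for i in range(0, len(bits), 8):
        value = 0
        for b in bits[i:i + 8]:
            value = (value << 1) | b
        data.append(value)
    return data.decode("utf-8")

# Toy 16x16 grayscale "image" of mid-grey pixels
image = [128] * 256
stego = embed_message(image, "cloud", "permutation-key")
assert extract_message(stego, len("cloud"), "permutation-key") == "cloud"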
II. PROBLEM STATEMENT

A. System Model

A representative network architecture for cloud data storage is illustrated in Figure 1. Three different network entities can be identified as follows:

User: users, who have data to be stored in the cloud and rely on the cloud for data computation, consist of both individual consumers and organizations.




Cloud Service Provider (CSP): a CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates live Cloud Computing systems.

Third Party Auditor (TPA): an optional TPA, who has expertise and capabilities that users may not have, is trusted to assess and expose risks of cloud storage services on behalf of the users upon request.

Fig. 1: Cloud data storage architecture

In cloud data storage, a user stores his data through a CSP into a set of cloud servers, which are running in a simultaneous, cooperative and distributed manner. Data redundancy can be employed with an erasure-correcting code to further tolerate faults or server crashes as the user's data grows in size and importance. Thereafter, for application purposes, the user interacts with the cloud servers via the CSP to access or retrieve his data. In some cases, the user may need to perform block-level operations on his data; the most general forms of these operations we are considering are block update, delete, insert and append. As users no longer possess their data locally, it is of critical importance to assure users that their data are being correctly stored and maintained. That is, users should be equipped with security means so that they can obtain continuous correctness assurance of their stored data even without the existence of local copies. In case users do not necessarily have the time, feasibility or resources to monitor their data, they can delegate the task to an optional trusted TPA of their choice. In our model, we assume that the point-to-point communication channels between each cloud server and the user are authenticated and reliable, which can be achieved in practice with little overhead. Note that we do not address the issue of data privacy in this paper, as in Cloud Computing data privacy is orthogonal to the problem we study here.

B. Adversary Model

Security threats faced by cloud data storage can come from two different sources. On the one hand, a CSP can be self-interested, untrusted and possibly malicious. Not only may it desire to move data that has not been or is rarely accessed to a lower tier of storage than agreed for monetary reasons, but it may also attempt to hide a data loss incident due to management errors, Byzantine failures and so on. On the other hand, there may also exist an economically motivated adversary, who has the capability to compromise a number of cloud data storage servers in different time intervals and subsequently is able to modify or delete users' data while remaining undetected by CSPs for a certain period. Specifically, we consider two types of adversary with different levels of capability in this paper. Weak Adversary: the adversary is interested in corrupting the user's data files stored on individual servers. Once a server is compromised, the adversary can pollute the original data files by modifying them or introducing its own fraudulent data, to prevent the original data from being retrieved by the user. Strong Adversary: this is the worst-case scenario, in which we assume that the adversary can compromise all the storage servers, so that he can intentionally modify the data files as long as they are internally consistent. In fact, this is equivalent to the case where all servers collude to hide a data loss or corruption incident.

C. Design Goals

Fig. 2: Cloud data request and response

To ensure the security and dependability of cloud data storage under the aforementioned adversary model, we aim to design efficient mechanisms for dynamic data verification and operation, and to achieve the following goals: (1) Storage correctness: to ensure users that their data are indeed stored appropriately and kept intact at all times in the cloud. (2) Fast localization of data error: to effectively locate the malfunctioning server when data corruption has been detected. (3) Dynamic data support: to maintain the same level of storage correctness assurance even if users modify, delete or append their data files in the cloud. (4) Dependability: to enhance data availability against Byzantine failures, malicious data modification and server colluding attacks, i.e. minimizing the effect brought by data errors or server failures. (5) Lightweight: to enable users to perform storage correctness checks with minimum overhead.

D. Selection of Provider

A good service provider is the key to good service, so it is very important to select the right service provider. One must make sure that the provider is trustworthy, well reputed for its customer service, and has an established track record in IT-related ventures.




As cloud computing has taken hold, the following major benefits have become clear:

1) Anywhere/anytime access - it promises "universal" access to high-powered computing and storage resources for anyone with a network access device.
2) Collaboration among users - the cloud represents an environment in which users can develop software-based services and from which they can deliver them.
3) Storage as a universal service - the cloud represents a distant but scalable storage resource for users anywhere and everywhere.
4) Cost benefits - the cloud promises to deliver computing power and services at a lower cost.

III. PROPOSED SYSTEM MODEL

We designed the system model to securely store the data and log files in the cloud server without a third party. Our model uses the steganography technique, which limits the access ability of the third party.

User: an entity who has data to be stored and computed; it can be either an enterprise or an individual customer.
Cloud Server (CS): an entity managed by the cloud service provider (CSP) to provide data storage service; it has significant storage space and computation resources.

Unattended and lightly monitored machines are typically the first to be compromised (see Section V). The idea behind the entire package is therefore to put together a set of tools that collectively work as an intrusion detection system and also as an early warning system. The honeypot module can be used to simulate many services. The honeypots have never been compromised, so we are yet to see a complete intrusion, but they have nevertheless recorded enough data to show that computers today are not safe from attackers. These honeypots were behind multiple networks, were not providing any public services, and were not advertised in any way.

IV. ENSURING CLOUD DATA STORAGE

In a cloud data storage system, users store their data in the cloud and no longer possess the data locally. Thus, the correctness and availability of the data files being stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and corruption, possibly due to server compromise and/or random Byzantine failures. Besides, in the distributed case, when such inconsistencies are successfully detected, finding which server the data error lies in is also of great significance, since it can be the first step towards fast recovery from storage errors. To address these problems, our main scheme for ensuring cloud data storage is presented in this section. The first part of the section is devoted to a review of basic tools from coding theory that are needed in our scheme for file distribution across cloud servers.

A. File Distribution Preparation

Algorithm 1: Token Pre-computation [18]

In order to achieve assurance of data storage correctness and data error localization simultaneously, our scheme entirely relies on pre-computed verification tokens. The main idea is as follows: before file distribution, the user pre-computes a certain number of short verification tokens on each individual vector G(j) (j ∈ {1, . . ., n}), each token covering a random subset of data blocks. Later, when the user wants to make sure of the storage correctness of the data in the cloud, he challenges the cloud servers with a set of randomly generated block indices. Upon receiving the challenge, each cloud server computes a short "signature" over the specified blocks and returns it to the user. The values of these signatures should match the corresponding tokens pre-computed by the user. Meanwhile, as all servers operate over the same subset of indices, the requested response values for the integrity check must also be a valid codeword determined by the secret matrix P.
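Algorithm 1 itself is specified in [18]; the sketch below only illustrates the pre-computation idea in Python, using an HMAC as a stand-in for the pseudorandom functions of the original scheme. The block layout, key handling and parameter names are assumptions made for the example.

import hmac
import hashlib
import random

def precompute_tokens(vector_blocks, master_key, num_tokens, r):
    # Pre-compute `num_tokens` short verification tokens for one encoded
    # vector G(j); each token covers r randomly chosen block indices.
    # The user keeps the tokens (or re-derives the index sets from the key)
    # for later correctness verification.
    rng = random.Random(master_key)                    # stand-in for a keyed PRF
    tokens, challenges = [], []
    for _ in range(num_tokens):
        indices = sorted(rng.sample(range(len(vector_blocks)), r))
        mac = hmac.new(master_key.encode(), digestmod=hashlib.sha256)
        for i in indices:
            mac.update(bytes([i]) + vector_blocks[i])
        tokens.append(mac.hexdigest()[:16])            # keep the tokens short
        challenges.append(indices)
    return tokens, challenges

# Toy vector of 8 data blocks destined for one cloud server
blocks = [bytes([b] * 4) for b in range(8)]
tokens, challenges = precompute_tokens(blocks, "secret-key", num_tokens=4, r=3)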
2. Challenge Token Pre-computation [18]

3. Correctness Verification and Error Localization [18]

Error localization is a key prerequisite for eliminating errors in storage systems. It is also of critical importance for identifying potential threats from external attacks. However, many previous schemes do not explicitly consider the problem of data error localization, thus providing only binary results for the storage verification. Our scheme outperforms those by integrating correctness verification and error localization (misbehaving server identification) in our challenge-response protocol: the response values from the servers for each challenge not only determine the correctness of the distributed storage, but also contain information to locate potential data error(s).
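Continuing the sketch above, the snippet below shows how a server's response over the challenged indices is compared against the pre-computed token, and how a mismatch immediately names the misbehaving server. In the real protocol the tokens are computed before dispersal and the responses are blinded codeword values; here plain HMACs and a small dictionary of servers are used purely for illustration.

import hmac
import hashlib

def server_response(stored_blocks, indices, master_key):
    # What each cloud server returns for a challenge: a short "signature"
    # over the specified block indices.
    mac = hmac.new(master_key.encode(), digestmod=hashlib.sha256)
    for i in indices:
        mac.update(bytes([i]) + stored_blocks[i])
    return mac.hexdigest()[:16]

def verify_and_localize(servers, indices, expected_tokens, master_key):
    # Compare every server's response with its pre-computed token and
    # return the identifiers of the servers whose data no longer matches.
    misbehaving = []
    for server_id, blocks in servers.items():
        if server_response(blocks, indices, master_key) != expected_tokens[server_id]:
            misbehaving.append(server_id)
    return misbehaving

# Three toy servers, each holding a different encoded vector
servers = {j: [bytes([10 * j + b] * 4) for b in range(8)] for j in range(3)}
indices = [1, 4, 6]
expected = {j: server_response(blk, indices, "secret-key") for j, blk in servers.items()}

servers[2][4] = b"\x00\x00\x00\x00"     # simulate corruption on server 2
print(verify_and_localize(servers, indices, expected, "secret-key"))   # -> [2]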
4. Error Recovery [18]

Since our layout of the file matrix is systematic, the user can reconstruct the original file by downloading the data vectors from the first m servers, assuming that they return the correct response values. Notice that our verification scheme is based on random spot-checking, so the storage correctness assurance is probabilistic. However, by choosing the system parameters (e.g., r, l, t) appropriately and conducting verification enough times, we can guarantee successful file retrieval with high probability. On the other hand, whenever data corruption is detected, the comparison of pre-computed tokens and received response values can guarantee the identification of
misbehaving server(s) (again with high probability), which will be discussed shortly. Therefore, the user can always ask the servers to send back blocks of the r rows specified in the challenge and regenerate the correct blocks by erasure correction.

B. File Retrieval and Error Recovery

1) Derive a random challenge value αi [18].
2) Compute the set of r randomly-chosen indices [18].
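The scheme in [18] disperses the file with a Reed-Solomon style (m + k, m) code; the toy sketch below keeps only the two properties used above, namely that the systematic part is the file itself and that a corrupted vector can be regenerated from the remaining ones. A single XOR parity vector (k = 1) stands in for the real parity generation matrix P.

from functools import reduce

def disperse(file_blocks, m):
    # Toy systematic dispersal: m data vectors plus one XOR parity vector.
    # The data vectors together are exactly the original file.
    vectors = [file_blocks[i::m] for i in range(m)]
    parity = [reduce(lambda a, b: a ^ b, column) for column in zip(*vectors)]
    return vectors + [parity]

def recover(stored_vectors, lost_index):
    # Regenerate one lost or corrupted vector by XOR-ing all the others.
    others = [v for i, v in enumerate(stored_vectors) if i != lost_index]
    return [reduce(lambda a, b: a ^ b, column) for column in zip(*others)]

file_blocks = list(range(12))           # twelve toy data blocks
stored = disperse(file_blocks, m=3)     # three data servers + one parity server
assert recover(stored, lost_index=1) == stored[1]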
V. HONEYPOT BASED SECURE CLOUD SYSTEM

Important computers such as servers are usually protected, patched, updated and maintained better than computers such as test servers, workstations in school labs, desktops used by organizational staff, etc. These ubiquitous computers are the ones that administrators find difficult to secure. If you think a system is hidden from the world, or is not important and will be left alone, you are wrong. In fact, computers which are not regularly monitored are the first ones to be compromised. There are many reasons why these computers will be attacked. A "script kiddie" picks random computers to try out his exploit tools and code. A more experienced hacker will want to use the computer to cover his tracks before attacking more important computers such as commercial servers. Another use of such unattended computers is in a denial-of-service attack. Organizations also face a considerable degree of security risk from within their own network: recent CERT reports show that about 71% of attacks were instigated by insiders, and the element of the malicious insider poses an even bigger threat. With more and more computers and networks to secure, an NIDS should be easy to use, both in installation and configuration, since many network administrators are concerned with securing and managing a large number of computer systems. Alerting is also an important factor in an intrusion detection system: the system should provide a reasonably good alerting mechanism such as email or some other network-based mechanism. Another thing that would help an administrator is a central management system with which he can not only view logs and alerts but also configure the intrusion detection system. Having these issues in mind, we propose an intrusion detection system which takes them into account. Since there are quite a few freely available intrusion detection systems, the idea behind this project was to incorporate some of these tools and add some new techniques into an intrusion detection system package. The network administrator who has to manage a reasonably large number of computers in the same local area network is the main user that this product intends to target. Honeypots provide the deception systems which generate early warnings. Another use of the honeypots is in conjunction with the tracing module: the honeypots provide information to the tracing module, which can then be used to trace the attacker back to the source. The honeypot services are just simulations and not real servers, and hence do not raise any security concerns. The honeypot module can simulate the following services:

HTTP - fake web server versions, web pages and error messages.
FTP - fake FTP sessions, logins and error messages.
POP3 - simple POP3 commands and messages.
SSH & TELNET - fake SSH and TELNET servers.
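The following sketch shows what one such simulated service could look like: a fake FTP listener that presents a banner, always rejects the login, and logs every connection and command. The port number, banner text and log format are arbitrary choices for the example; the real module supports the services listed above.

import socket
from datetime import datetime, timezone

def fake_ftp_honeypot(host="0.0.0.0", port=2121, logfile="honeypot.log"):
    # Listen like an FTP server, record every connection and login attempt,
    # and never grant access; no real service is exposed.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        with conn, open(logfile, "a") as log:
            log.write(f"{datetime.now(timezone.utc).isoformat()} connect {addr[0]}\n")
            conn.sendall(b"220 FTP server ready.\r\n")          # fake banner
            try:
                for _ in range(4):                              # accept a few commands only
                    line = conn.recv(1024).decode(errors="replace").strip()
                    if not line:
                        break
                    log.write(f"  {addr[0]} sent: {line}\n")
                    if line.upper().startswith("USER"):
                        conn.sendall(b"331 Password required.\r\n")
                    elif line.upper().startswith("PASS"):
                        conn.sendall(b"530 Login incorrect.\r\n")   # always refuse
                    else:
                        conn.sendall(b"502 Command not implemented.\r\n")
            except OSError:
                pass

# fake_ftp_honeypot()   # uncomment to run on an unprivileged port such as 2121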
The first and foremost requirement is the intrusion detection technique itself. The idea behind this technique is simply to scan all network packets, either on a per-host basis or for the entire network, and match these packets against known attack patterns, usually called attack signatures. If a network packet matches a known attack, an alert is triggered or some function is performed to prevent it. This project aims at incorporating some of the tools already in use along with adding some of the newer concepts in intrusion detection. The intrusion detection system should be extendable in terms of attack signatures and detection rules, and should have the ability to add custom rules. Snort, an open source tool, was the intrusion detection tool of choice for this work.
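The matching step itself can be pictured with the toy snippet below. It is not Snort and the two signatures are placeholders; it only illustrates the idea of the paragraph above, that a packet payload is tested against known attack patterns and any hit raises an alert.

import re

# Illustrative signatures only; real rule sets (such as Snort rules) are far richer.
SIGNATURES = {
    "directory traversal": re.compile(rb"\.\./\.\./"),
    "sql injection probe": re.compile(rb"(?i)union\s+select"),
}

def inspect_packet(payload, source_ip):
    # Match one packet payload against every known signature and
    # return an alert line for each pattern that matches.
    return [f"ALERT {name} from {source_ip}"
            for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(inspect_packet(b"GET /../../etc/passwd HTTP/1.0", "10.0.0.7"))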
Logging Mechanism: the clients should be capable of both text-based logging and logging to a database. Text-based logging helps in deploying clients with minimum dependencies and requirements. Database logging provides better storage, adds flexibility in terms of logging, and also allows extensibility in terms of further processing of the logs.

Alerting Mechanism: alerting methods can be email alerts or local system alarms. The frequency of emails and their content can be configured.
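A bare-bones version of the e-mail alert path might look as follows; the SMTP host and addresses are placeholders, and a real deployment would batch alerts according to the configured frequency rather than mailing on every call.

import smtplib
from email.message import EmailMessage

def send_alert(alerts, smtp_host="localhost",
               sender="ids@example.org", recipient="admin@example.org"):
    # Send the collected alert lines as a single e-mail message.
    if not alerts:
        return
    msg = EmailMessage()
    msg["Subject"] = f"IDS alert: {len(alerts)} event(s)"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("\n".join(alerts))
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)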
Fig. 3: Tracing the Attack Source

Tracing the Attack Source: another valuable feature is to detect the source of the attack. There are several passive and active methods that can be used to trace an attacker back to the source. Among the active scanning tools, nmap is probably the most popular and is freely available, and it was the tool of choice.
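One way the tracing module could call nmap against an address reported by the honeypots is sketched below. The -A and -Pn options are standard nmap flags (aggressive OS/service detection, and skipping host discovery); the wrapper function itself is only an assumption for illustration.

import subprocess

def trace_attacker(ip_address):
    # Run an aggressive nmap scan against the suspected attack source and
    # return the raw report for the administrator to review.
    result = subprocess.run(
        ["nmap", "-A", "-Pn", ip_address],
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout

# print(trace_attacker("203.0.113.5"))   # documentation/test address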




Configuration of the Package: the intrusion detection mechanism itself should be configurable on a per-client basis. The configuration can also be loaded from configuration files.

VI. PROVIDING DYNAMIC DATA OPERATION SUPPORT

This model may fit some application scenarios, such as libraries and scientific datasets. However, in cloud data storage there are many potential scenarios where the data stored in the cloud is dynamic, like electronic documents, photos or log files. Therefore, it is crucial to consider the dynamic case, where a user may wish to perform various block-level operations of update, delete and append to modify the data file while maintaining the storage correctness assurance. The straightforward and trivial way to support these operations is for the user to download all the data from the cloud servers and re-compute all the parity blocks as well as the verification tokens; this would clearly be highly inefficient. In this section, we show how our scheme can explicitly and efficiently handle dynamic data operations for cloud data storage.

A. Update Operation

A registered user can upload files to the cloud database. A secret key, or permutation key, is generated while the files are uploaded. This permutation key is automatically sent to the user's email ID.

B. Delete Operation

Sometimes, after being stored in the cloud, certain data blocks may need to be deleted. The delete operation we are considering is a general one, in which the user replaces the data block with zero or some special reserved data symbol. From this point of view, the delete operation is actually a special case of the data update operation. Therefore, we can rely on the update procedure to support the delete operation. Also, all the affected tokens have to be modified and the updated parity information has to be blinded using the same method specified in the update operation.
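Schematically, the delete-as-update idea described above can be written as below. The helpers update_block, recompute_token and reblind_parity are assumed interfaces standing in for the update procedure of the scheme; they are not code from [18].

ZERO_BLOCK = b"\x00" * 64                      # reserved symbol meaning "deleted"

def delete_block(file_id, block_index, update_block, affected_challenges,
                 recompute_token, reblind_parity):
    # Delete as a special case of update:
    # 1) overwrite the block with the reserved zero symbol,
    # 2) refresh every pre-computed token whose challenge covers the index,
    # 3) re-blind the parity information touched by the change.
    update_block(file_id, block_index, ZERO_BLOCK)
    for challenge_id, indices in affected_challenges.items():
        if block_index in indices:
            recompute_token(file_id, challenge_id)
    reblind_parity(file_id, block_index)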
C. Append Operation

In some cases, the user may want to increase the size of his stored data by adding blocks at the end of the data file, which we refer to as data append. We anticipate that the most frequent append operation in cloud data storage is bulk append, in which the user needs to upload a large number of blocks (not a single block) at one time.

D. Insert Operation

An insert operation on the data file refers to an append operation at a desired index position while maintaining the same data block structure for the whole data file, shifting all blocks starting from index j+1 by one slot. Thus, an insert operation may affect many rows in the logical data file matrix F, and a substantial number of computations are required to renumber all the subsequent blocks as well as to recompute the challenge-response tokens. Hence, a direct insert operation is difficult to support.

VII. SECURITY ANALYSIS AND PERFORMANCE EVALUATION

In this section, we analyze our proposed scheme in terms of security and efficiency. Our security analysis focuses on the adversary model defined in Section II. We also evaluate the efficiency of our scheme via implementation of both file distribution preparation and verification token precomputation.

A. Security Strength Against Weak Adversary

1) Detection Probability Against Data Modification: in our scheme, servers are required to operate on the specified rows in each correctness verification for the calculation of the requested response values.

B. Security Strength Against Strong Adversary

In this section, we analyze the security strength of our scheme against the server colluding attack and explain why blinding the parity blocks can help improve the security strength of our proposed scheme. Recall that in the file distribution preparation, the redundancy parity vectors are calculated by multiplying the file matrix F by P, where P is the secret parity generation matrix that we later rely on for storage correctness assurance. If we dispersed all the generated vectors directly after token precomputation, colluding servers could combine the data and parity vectors to learn about the secret matrix P; blinding the parity blocks prevents this.

1) File Distribution Preparation: we implemented the file distribution preparation stage. Implementation is the stage of the project when the theoretical design is turned into a working system; thus it can be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective. The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve changeover, and evaluation of changeover methods.

VIII. CONCLUSION

In this paper, we investigate the problem of data security in cloud data storage, which is essentially a distributed storage system. To achieve the assurances of cloud data integrity and availability, and to enforce the quality of dependable cloud storage service for users, we propose an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete and append. We rely on an erasure-correcting code in the file distribution preparation to provide redundancy parity vectors and guarantee the data dependability. Our honeypot experience clearly shows that no computer on the Internet is safe from probes, scans and attempted exploits. In general, port scans occurred simultaneously on both honeypots, indicating that the scans were generated by scripts or tools. Exploit attempts were directed only at the relevant systems; for example, the mod_ssl exploit was directed only at the Linux honeypot, which was running Apache. Even though the Windows honeypot was online for less time, it received more connection attempts. There could be many reasons for this statistic: Windows is the most used OS among personal desktops, and such machines are often not fully patched and updated and are easy to exploit. One can even consider an out-of-the-box honeypot distribution with a modified kernel to make it easy for system administrators to deploy honeypots.




REFERENCES

[1] Amazon.com, "Amazon Web Services (AWS)," online at http://aws.amazon.com, 2008.
[2] N. Gohring, "Amazon's S3 down for several hours," online at http://www.pcworld.com/businesscenter/article/142549/amazons_s3_down_for_several_hours.html, 2008.
[3] A. Juels and B. S. Kaliski Jr., "PORs: Proofs of Retrievability for Large Files," Proc. of CCS '07, pp. 584-597, 2007.
[4] H. Shacham and B. Waters, "Compact Proofs of Retrievability," Proc. of Asiacrypt '08, Dec. 2008.
[5] K. D. Bowers, A. Juels, and A. Oprea, "Proofs of Retrievability: Theory and Implementation," Cryptology ePrint Archive, Report 2008/175, 2008, http://eprint.iacr.org/.
[6] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," Proc. of CCS '07, pp. 598-609, 2007.
[7] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and Efficient Provable Data Possession," Proc. of SecureComm '08, pp. 1-10, 2008.
[8] T. S. J. Schwarz and E. L. Miller, "Store, Forget, and Check: Using Algebraic Signatures to Check Remotely Administered Storage," Proc. of ICDCS '06, pp. 12-12, 2006.
[9] M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, "A Cooperative Internet Backup Scheme," Proc. of the 2003 USENIX Annual Technical Conference (General Track), pp. 29-41, 2003.
[10] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A High-Availability and Integrity Layer for Cloud Storage," Cryptology ePrint Archive, Report 2008/489, 2008, http://eprint.iacr.org/.
[11] L. Carter and M. Wegman, "Universal Hash Functions," Journal of Computer and System Sciences, vol. 18, no. 2, pp. 143-154, 1979.
[12] J. Hendricks, G. Ganger, and M. Reiter, "Verifying Distributed Erasure-coded Data," Proc. 26th ACM Symposium on Principles of Distributed Computing, pp. 139-146, 2007.
[13] J. S. Plank and Y. Ding, "Note: Correction to the 1997 Tutorial on Reed-Solomon Coding," University of Tennessee, Tech. Rep. CS-03-504, 2003.
[14] Q. Wang, K. Ren, W. Lou, and Y. Zhang, "Dependable and Secure Sensor Data Storage with Dynamic Integrity Assurance," Proc. of IEEE INFOCOM, 2009.
[15] R. Curtmola, O. Khan, R. Burns, and G. Ateniese, "MR-PDP: Multiple-Replica Provable Data Possession," Proc. of ICDCS '08, pp. 411-420, 2008.
[16] D. L. G. Filho and P. S. L. M. Barreto, "Demonstrating Data Possession and Uncheatable Data Transfer," Cryptology ePrint Archive, Report 2006/150, 2006, http://eprint.iacr.org/.
[17] M. A. Shah, M. Baker, J. C. Mogul, and R. Swaminathan, "Auditing to Keep Online Storage Services Honest," Proc. 11th USENIX Workshop on Hot Topics in Operating Systems (HotOS '07), pp. 1-6, 2007.
[18] C. Wang, Q. Wang, K. Ren, N. Cao, and W. Lou, "Towards Secure and Dependable Storage Services in Cloud Computing."
[19] Y. K. Jain et al., International Journal on Computer Science and Engineering (IJCSE), ISSN 0975-3397, vol. 3, no. 2, Feb. 2011.




                                    ISBN 978-1-4675-2248-9 © 2012 Published by Coimbatore Institute of Information Technology

				