                                                             (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                              Vol. 9, No. 3, March 2011

                        Reliability and Security in MDRTS:
                                              A Combine Colossal Expression


       Gyanendra Kumar Gupta                                    A. K. Sharma                                  Vishnu Swaroop
     Computer Sc. & Engg. Deptt.                       Computer Sc. & Engg. Deptt.                     Computer Sc. & Engg. Deptt.
     Kanpur Institute of Technology                    M.M.M. Engineering College                      M.M.M. Engineering College
       Kanpur, UP, India, 208001                       Gorakhpur, UP, India, 273010                    Gorakhpur, UP, India, 273010
        gyanendrag@gmail.com                             akscse@rediffmail.com                           rsvsgkp@rediffmail.com

Abstract—Numerous types of Information Systems are broadly used in various fields. With the fast development of computer networks, Information System users care more about data sharing in networks. Sharing of information, and changes made by different users at different permission levels, is controlled by a super user, but the read/write operations must be performed in a reliable manner. In conventional relational databases, data reliability was controlled by a consistency control mechanism: when a data object is locked in sharing mode, other transactions can only read it, but cannot update it. If the conventional consistency control method is still used, the system's concurrency will be adversely affected, so there are many new requirements for consistency control in the field of Information Systems (MDRTS). In the present era, not only does information grow enormously, it also brings together data of different natures, such as text, images, pictures, graphics and sound. The problem is not limited to the type of data; it also arises in the different database environments in use, such as Mobile Databases, Distributed Databases, Real Time Databases, and Multimedia Databases. There are many aspects of data reliability problems in a mobile distributed real time system (MDRTS), such as inconsistency between attributes and types of data, and inconsistency of topological relations after objects have been modified. In this paper, many cases of data reliability are discussed for Information Systems. As mobile computing becomes popular and databases grow with information sharing, security is a big issue for researchers. Reliability and security of data are a big challenge for researchers, because whenever the data is not reliable and secure, no operation on the data (e.g. a transaction) is useful. This becomes more and more crucial when the data changes from one form to another (i.e. transactions) in non-traditional environments like Mobile, Distributed, Real Time and Multimedia databases. In this paper we raise the different aspects of, and analyze the available solutions for, reliability and security of databases. Conventional database security has focused primarily on creating user accounts and managing user privilege levels for database objects. In this paper we also give an overview of present and past database security challenges.

    Key Words- System Reliability, Sharing, Data Consistency, Data Privileges, Data Loss, Data Recovery, Integrity, Concurrency Control & Recovery, Distributed Databases, Transactions, Security, Authentication, Access Control, Encryption

                          I.    INTRODUCTION
    Data reliability summarizes the validity, accuracy, usability and integrity of related data between applications and across Information Technology. It ensures that each user observes a consistent view of the data, including visible changes made by the user's own transactions (read/write) and by transactions of other users or processes [1, 2]. Data reliability problems may arise at any time, but are frequently introduced during or following recovery situations, when backup copies of the data are used in place of the original data. Reliability is mostly concerned with consistency [3].

    Building distributed database system reliability is very important. The failure of a distributed database system can result in anything from easily repairable errors to disastrous meltdowns. A reliable distributed database system is designed to be as fault tolerant as feasible. Fault tolerance deals with making the system function in the presence of faults, and faults can occur in any of the components of a distributed system. This article gives a brief overview of the different types of faults in a system and some of their solutions.

    Various kinds of data consistency have been identified. These include Application Consistency, Transaction Consistency and Point-in-Time Consistency.

                  II.    VARIOUS TYPES OF CONSISTENCY

A. Point in Time Consistency
    Data is said to be Point in Time consistent if all of the interrelated data components are as they were at any single instant in time. This type of consistency can be visualized by picturing a data center that has experienced a power failure: before the lights come back on and processing resumes, the data is time consistent, because the entire processing environment failed at the same instant.

    Different types of failures may create a situation where Point in Time consistency is not maintained. For example, consider the failure of a single logical volume containing data from several applications. If the only recovery option is to restore that volume from a backup taken some time earlier, the data contained on the restored volume is not consistent



                                                                     144                               http://sites.google.com/site/ijcsis/
                                                                                                       ISSN 1947-5500
with the other volumes, and additional recovery steps must be undertaken [10].

B. Transaction Consistency
    A transaction is a logical unit of work that may include any number of file or database updates. During normal processing, transaction consistency is present only:

    •    before any transactions have run,

    •    following the completion of a successful transaction and before the next transaction begins, and

    •    when the application ends normally or the database is closed.

    After a failure of some kind, the data will not be transaction consistent if transactions were in flight at the time of the failure. In most cases, once the application or database is restarted, the incomplete transactions are identified and the updates relating to these transactions are either "backed out" or processing resumes with the next dependent write [4].

C. Application Consistency
    Application consistency is similar to transaction consistency, but on a grander scale. Instead of data consistency within the scope of a single transaction, data must be consistent within the confines of many different transaction streams from one or more applications. An application may be made up of many different types of data, such as multiple database components, various types of files, and data feeds from other applications. Application consistency is the state in which all related files and databases are in sync and represent the true status of the application.

    Data consistency refers to the usability of data and is often taken for granted in the single-site environment. Data consistency problems may arise even in a single-site environment, during recovery situations when backup copies of the production data are used in place of the original data [5].

    In order to ensure that your backup data is usable, it is necessary to understand the backup methodologies that are in place, as well as how the primary data is created and accessed. Another very important consideration is the consistency of the data once the recovery has been completed and the application is ready to begin processing.

    In order to appreciate the integrity of your data, it is important to understand the dependent write process. This occurs within individual programs, applications, application systems and databases. A dependent write is a data update that is not to be written until a previous update has been successfully completed. In large systems environments, the logic that determines the sequence in which systems issue "writes" is controlled by the application processing flow and supported by basic system functions [6].

    By and large we take synchronization features for granted and do not give much thought to how they all work together to protect both the integrity and consistency of the data. It is the integrity of the data and the various systems that allows applications to restart after a power failure or other unscheduled event.

                III.    DATA LOSS VS. DATA CONSISTENCY
    How does one reconcile the possibility of lost data versus the integrity and consistency of the data? Oftentimes, traditional backups were created while files were being updated. Eventually, backups created in this fashion were referred to as "fuzzy backups", as neither the consistency nor the integrity of the data could be assured.

    At first glance, it may seem a better idea to capture as many updates as possible, even if the end result is not consistent. Let us consider this point within the confines of a "typical" large systems data center. For the sake of discussion, let us assume that there are many applications sharing data on hundreds of logical volumes in many thousands of data sets. What happens to the integrity of the data if some updates are applied and others are not? Should this occur, the data is in an artificial state, one that is neither time, transaction nor application consistent. When the applications are restarted, it is likely that some data will be duplicated, while other data will still be missing. The difficulty here is in identifying which updates were successful, which updates caused erroneous results and which updates are missing.

    In all cases it is preferable to have time consistent data, even if a few partial transactions are lost or rolled back in the process.

    Data loss can be defined as data that is lost and cannot be recovered by other means. Often, individual transactions or files can be restored or recreated, which is inconvenient but does not represent a true loss of data. Even in cases where some transactional data cannot be recreated or recovered by the data center support teams, it can sometimes be re-entered by the end user if necessary.

    If considering an asynchronous Business Continuity and Disaster Recovery solution, it is important to understand that some updates may be lost in flight. However, the greater consideration is that the asynchronous solution you select provides time consistent data for all of your interrelated applications. In this way, recovery is similar to the process necessary to achieve Transaction and Application Consistency following an outage at the primary site.

    Data loss does not imply a loss of data integrity. However, given a choice, most organizations will protect data consistency, for example by ensuring that bank deposits and withdrawals occur in the proper sequence, so that account balances reflect a consistent picture at any given point in time. This is preferable to processing transactions out of sequence or, to use our banking example again, to recording the withdrawal and not the preceding deposit [7].
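The sequence-preserving recovery preference described above can be illustrated with a short sketch (the `replay` function, the log layout and the figures are hypothetical, not from the paper): updates are applied strictly in log order, and replay stops at the first missing entry, so that no update is ever applied before the updates it depends on.

```python
# Sketch: sequence-preserving log replay (hypothetical names, not from the paper).
# Updates are applied strictly in log order; replay stops at the first gap, so
# no update is applied before the updates it depends on -- trading a little
# data loss for a time consistent result.

def replay(balance, log):
    """log: list of (seq, amount) entries; sequence numbers start at 1.
    Returns (new_balance, number_of_entries_applied)."""
    expected = 1
    applied = 0
    for seq, amount in sorted(log):
        if seq != expected:          # gap: a dependent update is missing
            break                    # stop -- later entries would be out of sequence
        balance += amount
        applied += 1
        expected += 1
    return balance, applied

# A deposit (+500) then a withdrawal (-200); entry 2 was lost in flight.
print(replay(100, [(1, 500), (3, -200)]))  # -> (600, 1)
```

Here the missing entry 2 holds back entry 3 as well: the balance reflects only the deposit, which is a time consistent state, whereas applying the later withdrawal without the lost intermediate update would leave the account in the artificial state the text warns about.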




         IV.   THE BACKUP PROBLEM - AN OVERVIEW
    For a set of backup data to be of any value, it needs to be consistent in some fashion; Time, Transaction or Application consistency is required. For an individual data set, one with no dependencies on any other data, this can be accomplished by creating a simple Point in Time copy of the data and ensuring that the data is not updated during the backup process [8].

    At first glance, this appears to be a relatively simple thing to accomplish, at least for an individual data set. However, if this data set is being updated by a critical on-line application, there may never be an opportunity to create a consistent backup copy without temporarily halting the critical application. With today's dependence on 24x7 processing, the opportunities to interrupt critical applications, even temporarily, to create a "backup window" are seldom available [9].

    As this problem became more prevalent, various methods were used to attempt to address the situation. One of these methods was to create a "fuzzy" backup of the data, that is, to create the backup copy while updates were allowed to continue. Various utilities were used to perform this "backup while open" (BWO) processing, but they all shared the attribute that the backup copy of the data might or might not be usable. If no additional actions were taken to validate and ensure the consistency of the data, any use of this backup data was predicated on the hope that "some data is better than nothing", and generally produced unpredictable and/or unrepeatable results.

    In fact, there are three different possible outcomes, should this fuzzy backup be restored:

    1.   The data is accidentally consistent and useable. This is a happy circumstance that may or may not be repeatable.

    2.   The data is not consistent and not useable. A subsequent attempt to use the data detects the errors and abnormally ends subsequent processing.

    3.   The data is not consistent, but does not cause an ABEND and happens to be useable to the application. It is used by subsequent processing, and any data errors go undetected and uncorrected. This is the worst possible outcome.

    One of the first things one might notice when looking at the records contained in such a backup is that they are different from the data records that were present in the file both before the backup started and immediately after the backup ended. In fact, the records contained within the backup are a completely artificial construct and do not accurately describe the contents of the file at any point in time. This is not a consistent backup of the data: it is neither data-consistent within itself nor time-consistent from any point in time. It is a completely artificial representation of a file that never existed [10].

    It is true that different records would have been backed up if the write I/O pattern had been different, or if the backup process had been either faster or slower. The point here is that unless the backup could have been processed instantaneously (or at least in the time between two of the file write I/Os), the backup copy does not represent consistent data within the file.

    In order to address this failing, various methods were developed, including transaction logging, transaction back-out, and file reload with applied journal transactions, to name just a few. These methods all share the attributes of requiring extra effort (before the backup) and additional time, possibly even manual intervention, before the data can be used. More importantly, the corrective process requires an in-depth understanding of both the application and the data. These requirements dictate that a unique recovery scenario be designed for nearly each and every data set.

    The integrity problem is daunting enough when viewed in the context of just these 20 records, but what about when there are interdependencies between thousands of data sets residing on hundreds (or even thousands) of volumes?

    In this greater context, simple data consistency within individual data sets is no longer sufficient. What is required is time consistency across all of the interdependent data. As it is impossible to achieve this with the traditional backup methodologies, newer technologies are required to support time consistent data.

    Fortunately, there are solutions available today. For a single-site solution, FlashCopy with Consistency Groups can be used to create a consistent Point-in-Time copy that can then be backed up by traditional means [11].

    To guarantee the correct results and consistency of databases, the conflicts between transactions can be either avoided, or detected and then resolved. Most of the existing mobile database concurrency control (CC) techniques use (conflict) serializability as the correctness criterion. They are either pessimistic, if they avoid conflicts at the beginning of transactions; optimistic, if they detect and resolve conflicts right before commit time; or hybrid, if they are mixed. To fulfill this goal, locking, timestamp ordering (TO) and serialization graph testing can be used in either a pessimistic or an optimistic algorithm.

                  V.    SECURITY IN DATABASES
    Database security comprises the systems, processes, and procedures that protect a database from unintended activity. Unintended activity can be categorized as authenticated misuse, malicious attacks, or inadvertent mistakes made by authorized individuals or processes. Database security is also a specialty within the broader discipline of computer security. Databases introduce a number of unique security requirements for their users and administrators. On one hand, databases are designed to promote open and flexible access to data. On the other hand, it is this same open access that makes databases vulnerable to many kinds of malicious activity. These are just a few of the database security problems that exist within organizations. The best way to
                                                                          activity. These are just a few of the database security
                                                                          problems that exist within organizations. The best way to



avoid a lot of these problems is to employ qualified personnel and to separate the security responsibilities from the daily database maintenance responsibilities [12, 31].

    Traditionally, databases have been protected from external connections by firewalls or routers on the network perimeter, with the database environment existing on the internal network as opposed to being located within a demilitarized zone. Additional network security devices that detect and alert on malicious database protocol traffic include network intrusion detection systems along with host-based intrusion detection systems.

    One of the main issues faced by database security professionals is avoiding inference capabilities. Basically, inference occurs when users are able to piece together information at one security level to determine a fact that should be protected at a higher security level. Database security has become more critical as networks have become more open.

    Databases provide many layers and types of information security, typically specified in the data dictionary, including:

    •    Access control

    •    Auditing

    •    Authentication

    •    Encryption

    •    Integrity controls

    Database security can begin with the process of creating and publishing appropriate security standards for the database environment. The standards may include specific controls for the various relevant database platforms; a set of best practices that cross over the platforms; and linkages of the standards to higher-level policies and governmental regulations.

    Access control is a term taken from the vocabulary of security. In general, it means the enforcement of limitations and constraints on whoever tries to access a protected property. Guarding an entrance against a person is also a practice of access control. There are many types of access control [28]; some of them are mentioned in this article. You, the reader of this article, have several types of access control around you. Nowadays, almost every computer user has a firewall or antivirus program running, a popup blocker and many other programs, all of which perform access control functions [13]. These programs guard us from intruders of sorts: they inspect everything trying to enter the computer and either let it in or keep it out. Computers have sophisticated access control abilities; they ask for authentication and check digital signatures. There are also different types of keypads and access control systems. In today's world, keys and locks are beginning to look different. With the passage of time, key locks have also become smarter: they can identify the patterns of your physical features and your voice, and fingerprint locks can read your fingerprints [14, 15].

    Access control is a rapidly growing market and soon may manifest itself in ways we cannot even imagine. Nowadays, security access control is a necessary component for businesses. There are many ways to provide this security. Some companies hire a security guard to stand at the gateway. There are many security devices that prevent or permit access, such as a turnstile. The most effective access control systems are operated by computers.

    Auditing: a computer security audit is a manual or systematic, measurable technical assessment of a system or application. Manual assessments include interviewing staff, performing security vulnerability scans, reviewing application and operating system access controls, and analyzing physical access to the systems. Automated assessments include system-generated audit reports or using software to monitor and report changes to files and settings on a system. Systems can include personal computers, servers, mainframes, and network routers and switches; applications can include Web services and databases [16].

    Authentication is the process of confirming a user's or computer's identity. The process normally consists of four steps:

    1. The user makes a claim of identity, usually by providing a username. For example, a user might make this claim by telling a database what his or her username is.

    2. The system challenges the user to prove his or her identity. The most common challenge is a request for a password.

    3. The user responds to the challenge by providing the requested proof. In this example, the user would provide the database with his or her password.

    4. The system verifies that the user has provided acceptable proof by, for example, checking the password against a local password database or using a centralized authentication server.

    Encryption is good: it helps make things more secure. However, the idea that strong cryptography is good security by itself is simply wrong. Encrypted messages eventually have to be decrypted so that they are useful to the sender or receiver, and if those end-points are not secured, then getting at the plain-text messages is trivial [17]. There is no dispute about the need for strong encryption, particularly for privileged communications. There is no way to have a high level of assurance that the entire path between the endpoints of a message is secure, so the message has to be hidden in transit. While brute-force decryption is possible, modern forms of encryption have made this process too long to be valuable [18].

    Computer security authentication means verifying the identity of a user logging onto a network. Passwords, digital certificates, smart cards and biometrics can be used to prove



the identity of the user to the network. Computer security authentication includes verifying message integrity, e-mail authentication and MAC (Message Authentication Code), checking the integrity of a transmitted message. There are human authentication, challenge-response authentication, password, digital signature, IP spoofing and biometrics [19, 26].
    Human authentication is the verification that a person initiated the transaction, not the computer. Challenge-response authentication is an authentication method used to prove the identity of a user logging onto the network. When a user logs on, the network access server (NAS), wireless access point or authentication server creates a challenge, typically a random number sent to the client machine. The client software uses its password to encrypt the challenge through an encryption algorithm or a one-way hash function and sends the result back to the network. This is the response.
    Two-factor authentication requires two independent ways to establish identity and privileges. The method of using more than one factor of authentication is also called strong authentication. This contrasts with traditional password authentication, which requires only one factor in order to gain access to a system. A password is a secret word or code used to serve as a security measure against unauthorized access to data. It is normally managed by the operating system or DBMS. However, a computer can only verify the legality of the password, not the legality of the user.
    The two major applications of digital signatures are setting up a secure connection to a website and verifying the integrity of transmitted files. IP spoofing refers to inserting the IP address of an authorized user into the transmission of an unauthorized user in order to gain illegal access to a computer system.
    Biometrics is a more secure form of authentication than typing passwords or even using smart cards, which can be stolen. However, some methods have relatively high failure rates. For example, fingerprints can be captured from a water glass and used to fool scanners.

    VI.   DATABASE SECURITY ISSUES: DATABASE SECURITY PROBLEMS AND HOW TO AVOID THEM
    A database security manager is the most important asset to maintaining and securing sensitive data within an organization. Database security managers are required to multitask and juggle a variety of headaches that accompany the maintenance of a secure database. For any organization it is important to understand some of the database security problems that occur within an organization and how to avoid them. If you understand the how, where, and why of database security, you can prevent future problems from occurring [20].

•     Regular Maintenance: Database audit logs require daily review to make certain that there has been no data misuse. This requires overseeing database privileges and then consistently updating user access accounts. A database security manager also provides different types of access control for different users and assesses new programs that operate on the database. If these tasks are performed on a daily basis, you can avoid a lot of problems with users that may pose a threat to the security of the database.

•     Varied Security Methods for Applications: More often than not, application developers will vary the methods of security for different applications that are being utilized within the database. This can create difficulty with creating policies for accessing the applications. The database must also possess the proper access controls for regulating the varying methods of security; otherwise sensitive data is at risk.

•     Post-Upgrade Evaluation: When a database is upgraded it is necessary for the administrator to perform a post-upgrade evaluation to ensure that security is consistent across all programs. Failure to perform this operation opens up the database to attack.

•     Split the Position: Sometimes organizations fail to split the duties between the IT administrator and the database security manager. Instead, the company tries to cut costs by having the IT administrator do everything. This action can significantly compromise the security of the data due to the responsibilities involved with both positions. The IT administrator should manage the database while the security manager performs all of the daily security processes.

•     Application Spoofing: Hackers are capable of creating applications that resemble the existing applications connected to the database. These unauthorized applications are often difficult to identify and allow hackers access to the database via the application in disguise.

•     Manage User Passwords: Sometimes IT database security managers will forget to remove IDs and access privileges of former users, which leads to password vulnerabilities in the database. Password rules and maintenance need to be strictly enforced to avoid opening up the database to unauthorized users.

•     Windows OS Flaws: Windows operating systems are not effective when it comes to database security. Often theft of passwords is prevalent, as well as denial of service issues. The database security manager can take precautions through routine daily maintenance checks.

    As organizations increase their reliance on, possibly distributed, information systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. Though a number of techniques, such as encryption and electronic signatures, are currently available to protect data when transmitted across sites, a truly comprehensive approach for
                                                                        transmitted across sites, a truly comprehensive approach for



data protection must also include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics, and other relevant contextual information, such as time. It is well understood today that the semantics of data must be taken into account in order to specify effective access control policies. Also, techniques for data integrity and availability specifically tailored to database systems must be adopted. In this respect, over the years the database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. However, despite such advances, the database security area faces several new challenges. Factors such as the evolution of security concerns, the "disintermediation" of access to data, and new computing paradigms and applications, such as grid-based computing and on-demand business, have introduced both new security requirements and new contexts in which to apply and possibly extend current approaches. In this review, we first survey the most relevant concepts underlying the notion of database security and summarize the most well-known techniques. We focus on access control systems, to which a large body of research has been devoted, and describe the key access control models, namely, the discretionary and mandatory access control models, and the role-based access control model. We also discuss security for advanced data management systems, and cover topics such as access control for XML. We then discuss current challenges for database security and some preliminary approaches that address some of these challenges [21].

              VII. MAJOR SECURITY CHALLENGES

 1.       Security Awareness and End-users
 2.       Google Exposure
 3.       Standard Compliance & Regulations Updates
 4.       Vulnerability Management
 5.       Frequent Changes of Management and Lack of Co-ordination in Management

    There are four levels of transaction isolation, with differing degrees of impact on transaction processing throughput. These isolation levels are defined in terms of three phenomena that must be prevented between concurrently executing transactions.

The three preventable phenomena are:

      •     Dirty reads: A transaction reads data that has been written by another transaction that has not been committed yet.

      •     Non-repeatable (fuzzy) reads: A transaction rereads data it has previously read and finds that another committed transaction has modified or deleted the data.

      •     Phantom reads: A transaction re-executes a query returning a set of rows that satisfies a search condition and finds that another committed transaction has inserted additional rows that satisfy the condition.

    VIII. INTRODUCTION TO DATA CONCURRENCY AND CONSISTENCY IN A MULTIUSER ENVIRONMENT

    In a single-user database, the user can modify data in the database without concern for other users modifying the same data at the same time. However, in a multiuser database, the statements within multiple simultaneous transactions can update the same data. Transactions executing at the same time need to produce meaningful and consistent results. Therefore, control of data concurrency and data consistency is vital in a multiuser database [22].

      •    Data concurrency means that many users can access data at the same time.

      •    Data consistency means that each user sees a consistent view of the data, including visible changes made by the user's own transactions and transactions of other users.

    To describe consistent transaction behavior when transactions execute at the same time, database researchers have defined a transaction isolation model called serializability. The serializable mode of transaction behavior tries to ensure that transactions execute in such a way that they appear to be executed one at a time, or serially, rather than concurrently [31].

    While this degree of isolation between transactions is generally desirable, running many applications in this mode can seriously compromise application throughput. Complete isolation of concurrently running transactions could mean that one transaction cannot perform an insert into a table being queried by another transaction. In short, real-world considerations usually require a compromise between perfect transaction isolation and performance.

    In general, multiuser databases use some form of data locking to solve the problems associated with data concurrency, consistency, and integrity. Locks are mechanisms that prevent destructive interaction between transactions accessing the same resource.

Resources include two general types of objects:

      •    User objects, such as tables and rows (structures and data)

      •    System objects not visible to users, such as shared data structures in the memory and data dictionary rows

    Database automatically provides read consistency to a query so that all the data that the query sees comes from a single point in time (statement-level read consistency). Database can also provide read consistency to all of the queries in a transaction (transaction-level read consistency).
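    The read-consistency idea just described can be sketched in miniature. The following Python sketch is illustrative only: the `MiniMVCC` class, its method names, and the timestamp scheme are assumptions invented for this example, not the design of any real database engine. It shows the core multiversion idea that a read returns, for each item, the newest value committed no later than the moment the reading statement began, so a concurrent committed update does not change what the query sees.

```python
import itertools

class MiniMVCC:
    """Toy multiversion store (invented for illustration).

    Each committed write keeps its commit timestamp; a read sees only
    the newest version committed at or before the timestamp at which
    the reading statement began.
    """

    def __init__(self):
        self._clock = itertools.count(1)   # monotonically increasing timestamps
        self._versions = {}                # key -> list of (commit_ts, value)

    def commit_write(self, key, value):
        ts = next(self._clock)
        self._versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot_ts(self):
        # Timestamp at which a statement (or transaction) begins.
        return next(self._clock)

    def read(self, key, as_of_ts):
        # Newest value committed no later than as_of_ts.
        candidates = [(ts, v) for ts, v in self._versions.get(key, [])
                      if ts <= as_of_ts]
        return max(candidates)[1] if candidates else None

db = MiniMVCC()
db.commit_write("balance", 100)
query_ts = db.snapshot_ts()          # a long-running query begins here
db.commit_write("balance", 250)      # another transaction commits mid-query

print(db.read("balance", query_ts))           # -> 100 (the query's snapshot)
print(db.read("balance", db.snapshot_ts()))   # -> 250 (a later statement)
```

    The first read illustrates statement-level consistency: the update committed after `query_ts` is invisible to the running query. Reusing one snapshot timestamp for every statement of a transaction would give the transaction-level behavior described above.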



    Database uses the information maintained in its rollback segments to provide these consistent views. The rollback segments contain the old values of data that have been changed by uncommitted or recently committed transactions. Database provides statement-level read consistency using data in rollback segments.

  1) Statement-Level Read Consistency
    Database always enforces statement-level read consistency. This guarantees that all the data returned by a single query comes from a single point in time--the time that the query began. Therefore, a query never sees dirty data nor any of the changes made by transactions that commit during query execution. As query execution proceeds, only data committed before the query began is visible to the query. The query does not see changes committed after statement execution begins.

  2) Read Consistency with Real Application Clusters
    Real Application Clusters use a cache-to-cache block transfer mechanism known as Cache Fusion to transfer read-consistent images of blocks from one instance to another. Real Application Clusters does this using high speed, low latency interconnects to satisfy remote requests for data blocks.

  3) Read Committed Isolation
    The default isolation level for Database is read committed. This degree of isolation is appropriate for environments where few transactions are likely to conflict. Database causes each query to execute with respect to its own materialized view time, thereby permitting nonrepeatable reads and phantoms for multiple executions of a query, but providing higher potential throughput.

  4) Serializable Isolation
Serializable isolation is suitable for environments:

      •    With large databases and short transactions that update only a few rows

      •    Where the chance that two concurrent transactions will modify the same rows is relatively low

      •    Where relatively long-running transactions are primarily read-only

    Serializable isolation permits concurrent transactions to make only those database changes they could have made if the transactions had been scheduled to execute one after another. Specifically, Database permits a serializable transaction to modify a data row only if it can determine that prior changes to the row were made by transactions that had committed when the serializable transaction began.

    Under some circumstances, Database can have insufficient history information to determine whether a row has been updated by a "too recent" transaction. This can occur when many transactions concurrently modify the same data block, or do so in a very short period. You can avoid this situation by setting higher values of INITRANS for tables that will experience many transactions updating the same blocks. Doing so enables Database to allocate sufficient storage in each block to record the history of recent transactions that accessed the block.

    Database generates an error when a serializable transaction tries to update or delete data modified by a transaction that commits after the serializable transaction began. When a serializable transaction fails with the "Cannot serialize access" error, the application can take any of several actions:

      •    Commit the work executed to that point

      •    Execute additional (but different) statements (perhaps after rolling back to a save point established earlier in the transaction)

      •    Roll back the entire transaction

   5) Comparison of Read Committed and Serializable Isolation
    Database gives the application developer a choice of two transaction isolation levels with different characteristics. Both the read committed and serializable isolation levels provide a high degree of consistency and concurrency. Both levels provide the contention-reducing benefits of Database's read consistency multiversion concurrency control model and exclusive row-level locking implementation and are designed for real-world application deployment.

     a) Transaction Set Consistency
    A useful way to view the read committed and serializable isolation levels in Database is to consider the following scenario: Assume you have a collection of database tables (or any set of data), a particular sequence of reads of rows in those tables, and the set of transactions committed at any particular time. An operation (a query or a transaction) is transaction set consistent if all its reads return data written by the same set of committed transactions. An operation is not transaction set consistent if some reads reflect the changes of one set of transactions and other reads reflect changes made by other transactions. An operation that is not transaction set consistent in effect sees the database in a state that reflects no single set of committed transactions.

    Database provides transactions executing in read committed mode with transaction set consistency for each statement. Serializable mode provides transaction set consistency for each transaction.

     b) Row-Level Locking
    Both read committed and serializable transactions use row-level locking, and both will wait if they try to change a row updated by an uncommitted concurrent transaction. The second transaction that tries to update a given row waits for the other transaction to commit or roll back and release its lock. If that other transaction rolls back, the waiting transaction, regardless of its isolation mode, can proceed to



change the previously locked row as if the other transaction had not existed.

    However, if the other blocking transaction commits and releases its locks, a read committed transaction proceeds with its intended update. A serializable transaction, however, fails with the error "Cannot serialize access", because the other transaction has committed a change that was made since the serializable transaction began.

     c) Referential Integrity
    Because Database does not use read locks in either read-consistent or serializable transactions, data read by one transaction can be overwritten by another. Transactions that perform database consistency checks at the application level cannot assume that the data they read will remain unchanged during the execution of the transaction, even though such changes are not visible to the transaction. Database inconsistencies can result unless such application-level consistency checks are coded with this in mind, even when using serializable transactions.

     d) Distributed Transactions
    In a distributed database environment, a given transaction updates data in multiple physical databases protected by two-phase commit to ensure that all nodes commit or none do. In such an environment, all servers, whether Database or non-Database, that participate in a serializable transaction are required to support serializable isolation mode.

If a serializable transaction tries to update data in a database managed by a server that does not support serializable transactions, the transaction receives an error. The transaction can roll back and retry only when the remote server does support serializable transactions.

    In contrast, read committed transactions can perform distributed transactions with servers that do not support serializable transactions.

    Application designers and developers should choose an isolation level based on application performance and consistency needs as well as application coding requirements.

    For environments with many concurrent users rapidly submitting transactions, designers must assess transaction performance requirements in terms of the expected transaction arrival rate and response time demands. Frequently, for high-performance environments, the choice of isolation levels involves a trade-off between consistency and concurrency.

    Application logic that checks database consistency must take into account the fact that reads do not block writes in either mode.

    Database isolation modes provide high levels of consistency, concurrency, and performance through the combination of row-level locking and Database's multiversion concurrency control system. Readers and writers do not block one another in Database. Therefore, while queries still see consistent data, both read committed and serializable isolation provide a high level of concurrency for high performance, without the need for reading uncommitted ("dirty") data [23, 24].

     e) Read Committed Isolation
    For many applications, read committed is the most appropriate isolation level. Read committed isolation can provide considerably more concurrency with a somewhat increased risk of inconsistent results due to phantoms and non-repeatable reads for some transactions.

    Many high-performance environments with high transaction arrival rates require more throughput and faster response times than can be achieved with serializable isolation. Other environments that support users with a very low transaction arrival rate also face very low risk of incorrect results due to phantoms and non-repeatable reads. Read committed isolation is suitable for both of these environments.

    Database read committed isolation provides transaction set consistency for every query. That is, every query sees data in a consistent state. Therefore, read committed isolation will suffice for many applications that might require a higher degree of isolation if run on other database management systems that do not use multiversion concurrency control.

    Read committed isolation mode does not require application logic to trap the "Cannot serialize access" error and loop back to restart a transaction. In most applications, few transactions have a functional need to issue the same query twice, so for many applications protection against phantoms and non-repeatable reads is not important. Therefore many developers choose read committed to avoid the need to write such error checking and retry code in each transaction.

     f) Serializable Isolation
    Database's serializable isolation is suitable for environments where there is a relatively low chance that two concurrent transactions will modify the same rows and the long-running transactions are primarily read-only. It is most suitable for environments with large databases and short transactions that update only a few rows.

    Serializable isolation mode provides somewhat more consistency by protecting against phantoms and non-repeatable reads and can be important where a read/write transaction executes a query more than once.

    Unlike other implementations of serializable isolation, which lock blocks for read as well as write, Database provides nonblocking queries and the fine granularity of row-level locking, both of which reduce write/write contention. For applications that experience mostly read/write contention, Database serializable isolation can provide significantly more throughput than other systems. Therefore, some applications might be suitable for serializable isolation on Database but not on other systems.

    Coding serializable transactions requires extra work by the application developer to check for the "Cannot serialize access" error and to roll back and retry the transaction.

Similar extra coding is needed in other database management systems to manage deadlocks. For adherence to corporate standards or for applications that are run on multiple database management systems, it may be necessary to design transactions for serializable mode. Transactions that check for serializability failures and retry can be used with Database read committed mode, which does not generate serializability errors.

    Serializable mode is probably not the best choice in an environment with relatively long transactions that must update the same rows accessed by a high volume of short update transactions. Because a longer running transaction is unlikely to be the first to modify a given row, it will repeatedly need to roll back, wasting work. (Note that a conventional read-locking, pessimistic implementation of serializable mode would not be suitable for this environment either, because long-running transactions--even read transactions--would block the progress of short update transactions, and vice versa.)

    Developers should take into account the cost of rolling back and retrying transactions when using serializable mode. As with read-locking systems, where deadlocks occur frequently, use of serializable mode requires rolling back the work done by terminated transactions and retrying them. In a high contention environment, this activity can use significant resources.

    For the most part, a transaction that restarts after receiving the "Cannot serialize access" error is unlikely to encounter a second conflict with another transaction. For this reason it can help to execute those statements most likely to contend with other transactions as early as possible in a serializable transaction. However, there is no guarantee that the transaction will complete successfully, so the application should be coded to limit the number of retries.

impair the integrity and availability of a database [26, 27, 28].

    Several techniques have been built for maintaining the security and reliability of systems, such as data consistency techniques: two-process mutual exclusion (Dekker's and Peterson's algorithms), N-process mutual exclusion using hardware, and N-reader, 1-writer mutual exclusion using head/tail flags. But the available techniques are not sufficient for database environments where the data is huge and transactions are complex, including the security system. The unusual requirements of security, however, mean that designers must carefully consider their options when choosing database technology for deployment: commercially available products can provide outstanding performance, reliability, and scalability, but unless they are expressly designed for embedded use, they may compromise the overall security system. Security is more than just good crypto. The point here is not that encryption is worthless; the point is that encryption by itself is not helpful [29]. The endpoints need to be secure, passwords need to be difficult to crack, and those who do have access to the system need to be trustworthy. One might ask what the point is of being able to see plaintext versions of encrypted communication if the attacker already has root access: getting additional passwords for other systems, obtaining information that passes through the system but is not stored on it (text conversations, for instance), or bypassing system controls that might catch direct attempts at data. System call traces can be used on any kind of process, such as e-mail daemons, web servers, or encrypted chat programs. In order for any security tool to be effective, it needs to be layered with other strong security tools, starting with a security policy. No one tool, by itself, can ever prevent information theft or attacks, but several layers of security provide the most solid defense against would-be hackers. Encryption needs to be accompanied by server hardening, intrusion detection, firewalls, and auditing. Without it,
    Database management systems implement a multi-                       encryption is easily compromised.
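The bounded-retry discipline described above can be sketched as a small wrapper. The names here are illustrative, not a specific driver API: `SerializationError` stands in for whatever exception or error code (e.g. the "Cannot serialize access" message) the database driver actually raises.

```python
class SerializationError(Exception):
    """Stands in for the driver-specific 'Cannot serialize access' error."""


def run_with_retries(txn, max_retries=3):
    """Run a transaction callable, retrying on serialization failures.

    The retry budget is bounded, as the text advises: a restarted
    transaction rarely conflicts twice, but success is not guaranteed.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return txn()
        except SerializationError:
            if attempt == max_retries:
                raise  # retry budget exhausted; surface the error


# Example: a transaction that conflicts once, then commits on retry.
attempts = []

def transfer():
    attempts.append(1)
    if len(attempts) < 2:
        raise SerializationError("Cannot serialize access")
    return "committed"

print(run_with_retries(transfer))  # committed
```

Keeping the contended statements early in `txn` shortens the window in which a second conflict can arise, which is the ordering advice given above.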
    Many database management systems implement a multi-version concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency: interleavings of transactions that would each maintain consistency if run serially can nevertheless violate it. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of the conflicts between all pairs of transactions [25].
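One such anomaly is write skew. The following self-contained sketch is an illustrative in-memory model, not any particular DBMS API: two transactions each check an invariant ("at least one doctor stays on call") against the same snapshot and then write disjoint rows, so no update conflict is detected, yet together they break the invariant.

```python
# Classic "write skew" anomaly under snapshot isolation.
db = {"alice": "on", "bob": "on"}

def request_off_call(snapshot, doctor):
    """Return a write intent if the invariant still holds in the snapshot."""
    on_call = sum(1 for status in snapshot.values() if status == "on")
    return (doctor, "off") if on_call >= 2 else None

snapshot = dict(db)  # both transactions read the same committed snapshot
writes = [request_off_call(snapshot, "alice"),
          request_off_call(snapshot, "bob")]

for write in writes:  # the writes touch different rows, so both commit
    if write is not None:
        doctor, status = write
        db[doctor] = status

# Run serially, the second request would have been rejected; interleaved
# under snapshot isolation, the invariant is silently violated.
print(db)  # {'alice': 'off', 'bob': 'off'}
```

Run one transaction at a time against the current state instead of a shared snapshot, and the second request is refused, which is exactly the serial behavior the anomaly breaks.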
                      IX.   CONCLUSION

    Database security concerns the confidentiality, integrity, and availability of data stored in a database. An extensive body of research, from authorization to inference control, multilevel secure databases, and multilevel secure transaction processing, addresses primarily how to protect the security of a database, especially its confidentiality. However, very limited research has been done on how to survive successful database attacks, which can seriously impair the integrity and availability of the data.

                          REFERENCES

[1]  Turner, S., L. Albert, B. Gajewski, and W. Eisele, "Archived Intelligent Transportation System Data Quality: Preliminary Analyses of San Antonio TransGuide Data", Transportation Research Record, 2000 (1719), p. 8.
[2]  Wang, R. Y., V. C. Storey, and C. P. Firth, "A Framework for Analysis of Data Quality Research", IEEE Transactions on Knowledge and Data Engineering, Vol. 7, No. 4, August 1995, pp. 623-640.
[3]  Redman, Thomas C., "Improve Data Quality for Competitive Advantage", Sloan Management Review, 1995, pp. 99-107.
[4]  Ronald Fagin, "On an authorization mechanism", ACM Transactions on Database Systems (TODS), Vol. 3, No. 3, pp. 310-319, Sept. 1978.
[5]  Bhattacharya, S., Brannon, K. W., Hsiao, H., and Narang, I., "Data Consistency in a Loosely Coupled Transaction Model", IBM Research Report RJ10232, Feb. 2002.
[6]  Elisa Bertino, Elena Ferrari, Vijay Atluri, "The specification and enforcement of authorization constraints in workflow management systems", ACM Transactions on Information and System Security (TISSEC), Vol. 2, No. 1, pp. 65-104, Feb. 1999.




                                                                   152                                 http://sites.google.com/site/ijcsis/
                                                                                                       ISSN 1947-5500
[7]  Nygard, Greg, and Hammoudi, Faouzi, "Role-Based Access Control for Loosely Coupled Distributed Database Management Systems", Storming Media, ISBN-13: 9781423511045, pp. 132.
[8]  Gyanendra Kumar Gupta, A. K. Sharma, V. Swaroop, "A Permutation Gigantic Issues in Mobile Real Time Distributed Database: Consistency & Security", IJCSE, Vol. 9, No. 3, March 2011.
[9]  Suparna Bhattacharya, Karen W. Brannon, Hui-I Hsiao, "Coordinating Backup/Recovery and Data Consistency Between Database and File Systems", Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 2002.
[10] S. Bhattacharya, C. Mohan, et al., "Coordinating backup/recovery and data consistency between database and file systems", Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data, ACM, New York, NY, USA, 2002.
[11] Ji-Won Byun, Yonglak Sohn, Elisa Bertino, "Systematic control and management of data integrity", Proceedings of the Eleventh ACM Symposium on Access Control Models and Technologies, June 7-9, 2006, Lake Tahoe, California, USA.
[12] John B. Kam, Jeffrey D. Ullman, "A model of statistical databases and their security", ACM Transactions on Database Systems (TODS), Vol. 2, No. 1, pp. 1-10, March 1977.
[13] David F. Ferraiolo, Ravi Sandhu, Serban Gavrila, Ramaswamy Chandramouli, "Proposed NIST standard for role-based access control", ACM Transactions on Information and System Security (TISSEC), Vol. 4, No. 3, pp. 224-274, August 2000.
[14] Elisa Bertino, Ravi Sandhu, "Database security - concepts, approaches, and challenges", IEEE Transactions on Dependable and Secure Computing, Vol. 2, No. 1, IEEE Computer Society, pp. 2-19, 2005.
[15] Elena Ferrari, "Database as a Service: Challenges and Solutions for Privacy and Security", Proceedings of the IEEE Asia-Pacific Services Computing Conference, 2009, pp. 46-51.
[16] B. Schneier, "Cryptography, Security, and the Future", Communications of the ACM, Vol. 40, No. 1, January 1997, p. 138.
[17] B. Schneier, "Why Cryptography is Harder than it Looks", Information Security Bulletin, Vol. 2, No. 2, March 1997, pp. 31-36.
[18] Gail-Joon Ahn, Ravi Sandhu, "Role-based authorization constraints specification", ACM Transactions on Information and System Security (TISSEC), Vol. 3, No. 4, pp. 207-226, Nov. 2000.
[19] S. Srinivasan, Anup Kumar, "Database security curriculum in InfoSec program", Proceedings of the 2nd Annual Conference on Information Security Curriculum Development, September 23-24, 2005, Kennesaw, Georgia.
[20] Bertino, E., and Sandhu, R., "Database security - concepts, approaches, and challenges", IEEE Transactions on Dependable and Secure Computing, Vol. 2, No. 1, pp. 2-19, April 2005.
[21] H. T. Kung and J. T. Robinson, "On Optimistic Methods for Concurrency Control", ACM Transactions on Database Systems, 6(2), June 1981, pp. 213-226.
[22] P. M. Bober and M. J. Carey, "Multiversion query locking", Proceedings of the 18th Conference on Very Large Data Bases, Morgan Kaufmann, Vancouver, August 1992.
[23] Ravishankar K. Iyer, IEEE Transactions on Dependable and Secure Computing, IEEE Computer Society Press, Los Alamitos, CA, USA, Vol. 2, No. 1, January 2005, p. 1.
[24] D. Agrawal and S. Sengupta, "Modular synchronization in multiversion databases: Version control and concurrency control", ACM SIGMOD Conference on the Management of Data, Portland, OR, May-June 1989.
[25] P. P. Griffiths and B. W. Wade, "An Authorization Mechanism for a Relational Database System", ACM Transactions on Database Systems, 1(3), September 1976, pp. 242-255.
[26] Ravishankar K. Iyer, IEEE Transactions on Dependable and Secure Computing, IEEE Computer Society Press, Los Alamitos, CA, USA, Vol. 2, No. 1, January 2005, p. 1.
[27] Thanasis Hadzilacos, "Serialization graph algorithms for multiversion concurrency control", Proceedings of the ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 135-141, March 1988.
[28] S. Jajodia, P. Samarati, V. S. Subrahmanian, and E. Bertino, "A unified framework for enforcing multiple access control policies", Proceedings of the ACM SIGMOD International Conference on Management of Data, May 1997, pp. 474-485.
[29] Michael J. Cahill, Uwe Röhm, Alan D. Fekete, "Serializable Isolation for Snapshot Databases", ACM Transactions on Database Systems (TODS), Vol. 34, No. 4, December 2009.
[30] R. Sandhu and F. Chen, "The multilevel relational (MLR) data model", ACM Transactions on Information and System Security, 1(1), 1998.

                       AUTHORS PROFILE

    Gyanendra Kumar Gupta received his Master's degree in Computer Applications in 2001 and his M.Tech in Information Technology in 2004. He has worked as faculty in several reputed organizations and is presently an Assistant Professor in the Computer Science and Engineering Department, KIT, Kanpur, with more than 10 years of teaching experience. His areas of interest include DBMS, networks, and graph theory. His research papers on real-time distributed databases and computer networks have been published in several national and international conferences and journals. He is pursuing his PhD in Computer Science.

    Dr. A. K. Sharma received his Master's degree in Computer Science in 1991 and his PhD in 2005 from IIT Kharagpur. He is presently an Associate Professor in the Computer Science and Engineering Department, Madan Mohan Malaviya Engineering College, Gorakhpur, with more than 23 years of teaching experience. His areas of interest include database systems, computer graphics, and object-oriented systems. He has published several papers in national and international conferences and journals.

    Vishnu Swaroop received his Master's degree in Computer Applications in 2002. He is presently working as a Computer Programmer in the Computer Science and Engineering Department, Madan Mohan Malaviya Engineering College, Gorakhpur, with more than 20 years of teaching and professional experience. His areas of interest include DBMS and networks, and his research papers relate to mobile real-time distributed databases and computer networks. He has published several papers in national and international conferences and journals. He is pursuing his PhD in Computer Science.



