
TS5352 – Enterprise Database Systems
Mark Lindquist
mrkcl@msn.com
September 1st, 2009
Unit 8 Assignment 1


Draft Milestone Three
Quality Governance:

Data quality, defined as “data that is fit for their intended uses in operations, decision making, and planning” (Hoffer, p. 524), is in need of major improvement at MVCH. The reasons for pursuing good-quality data are:

- “Dirty” data can cause extra work and delays on IT projects because it introduces new risks associated with the data.
- Business decisions that support the goals and objectives of MVCH are hampered by bad data.
- Compliance with SOX requires that data be of good quality.
- Good-quality data helps MVCH enter new lines of business or improve existing ones.

So what constitutes quality data that alleviates these concerns at MVCH? High-caliber data exhibits the following eight characteristics.

           1. Uniqueness: Each data entity must be unique within the database, with a key that identifies only that entity. Ensuring uniqueness also requires identity matching and steps to resolve duplicate data.
           2. Accuracy: A data entity must correctly represent the real-world thing it describes. An entity can be valid yet inaccurate if the database lacks sufficient information about it or describes that information incorrectly.
           3. Consistency: Data in one database must be consistent with related data in another. An entity’s attributes, such as weight, size, or material type, must match wherever that entity appears in another database, row, or table.
           4. Completeness: “All data that must have an assigned value does have an assigned value” (Hoffer, p. 525). Completeness also means that all the data that is needed is present.
           5. Timeliness: Data expected in a system at a certain time is available at that time. Data representing the real world is also valid only for the timeframe in which it does so.
           6. Currency: Data must be kept up to date so that inaccuracies from outdated data are avoided; for example, phone numbers must be current so customers can be reached.
           7. Conformance: Data must be presented in the format that the metadata stipulates.
           8. Referential integrity: A rule that maintains consistency among the rows of two relations (Hoffer, p. 208): each foreign key value in a relation must match a primary key value in another relation or be null.
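Two of these characteristics, uniqueness and referential integrity, can be enforced directly by the DBMS. A minimal sketch using SQLite in Python, with an illustrative patient/visit schema that is an assumption, not MVCH’s actual design:

```python
import sqlite3

# Illustrative patient/visit schema; not MVCH's actual design.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only with this pragma

conn.execute("""
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,  -- uniqueness: one key identifies one entity
        ssn        TEXT UNIQUE           -- identity matching on a natural key
    )""")
conn.execute("""
    CREATE TABLE visit (
        visit_id   INTEGER PRIMARY KEY,
        patient_id INTEGER REFERENCES patient (patient_id)  -- referential integrity
    )""")

conn.execute("INSERT INTO patient VALUES (1, '000-00-0001')")
conn.execute("INSERT INTO visit VALUES (10, 1)")       # allowed: parent row exists

try:
    conn.execute("INSERT INTO visit VALUES (11, 99)")  # no patient 99: rejected
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

The DBMS refuses the orphan visit row outright, so duplicate or dangling data never enters the database in the first place.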

These eight characteristics of quality data should be in place at MVCH when introducing
the new data structure.

So how do we implement a data quality program at MVCH that meets the eight characteristics described above? First, everyone throughout the organization must commit to and participate in data quality; if only a few members of MVCH support the initiative, quality will never be achieved.

Since MVCH has no current data quality program, the first step is a data quality audit to expose irregularities and inconsistencies in the existing data. The audit entails building a statistical profile of every file; the profile uncovers unexpected extremes in each field’s values. Data can also be checked against relevant business rules to verify that controls are in place to enforce them.
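The statistical profile described above can be sketched as follows; the field name, values, and rule bounds are hypothetical stand-ins for a legacy MVCH file:

```python
import statistics

# Hypothetical extract of one numeric field (patient weight in kg) from a
# legacy MVCH file; the field name, values, and rule bounds are illustrative.
weights = [72.5, 81.0, 64.2, 700.0, 68.9, -3.0, 75.4]

profile = {
    "count": len(weights),
    "min":   min(weights),
    "max":   max(weights),
    "mean":  round(statistics.mean(weights), 1),
}

# Check the field against a business rule (plausible adult weight range);
# unexpected extremes surface as violations for the stewards to investigate.
violations = [w for w in weights if not 30 <= w <= 300]
print(profile)
print("rule violations:", violations)  # -> [700.0, -3.0]
```

Running such a profile over every field of every file is exactly the statistical audit the text calls for: extremes like 700.0 and -3.0 become visible immediately.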

At this point in our discussion it becomes necessary to introduce a data governance program that includes data stewardship. Under the governance program, a data steward is assigned to each business unit to decipher that unit’s data needs and integrate them with the data needs of MVCH as a whole. These stewards are experts on their own unit’s data characteristics as well as those of MVCH.

The data governance program to be initiated at MVCH oversees these stewards of quality data but also, in broader terms, supports MVCH’s data architecture and data integration process and ensures sound data management. Under a proper governance program, data is kept high in quality and available, the risks associated with bad data are mitigated, and the results of the data audit are reviewed.

With the eight ingredients of data quality achieved at MVCH, data will be accurate and available, and continuous monitoring will ensure it stays so. With sound data that meets the needs of MVCH’s business decisions, goals, and objectives, the new information system will reap both short- and long-term benefits. The applications and databases that make up the new system will, built on quality data, be robust and efficient.

Security Governance:

Quality data at MVCH, as described at length above, is just one aspect of what will make the new information system perform efficiently, or perform at all. Securing that data is as important to MVCH as introducing good-quality data. Data security is “the protection of the data against accidental or intentional loss, destruction, or misuse” (Hoffer, p. 569). The new system at MVCH will be a complex, distributed client/server database environment: packets of data will traverse the local intranet as well as the internet. In this type of environment security is of the utmost concern, because data can be intercepted, misused, or corrupted in transit.

Within MVCH’s new information system, data must be secure, but from what or whom? Threats to MVCH’s data environment include both intentional and unintentional actions against the data. Human error, software- and hardware-caused irregularities, accidental loss, theft, fraud, loss of data quality, and unavailability must all be brought under a security governance program initiated within MVCH.

A committee must manage two areas of interest within the proposed information system: website security and client/server security. These two areas define the extent of the new system at MVCH. Security in a client/server environment is especially important because of eavesdropping, unauthorized access, and the flow of packets of information through the network.

Because of this added need for security, the client/server architecture must be protected in a number of areas. “The servers, the network, the client workstations, related components, and the users” (Hoffer, p. 571) must all fall under the security governance program at MVCH. Servers should be physically located in secure areas accessible only to administrators and supervisors, and passwords at the server interface can prevent unauthorized entry.

The network itself must be secure. Network security must protect the transport of packets between client and server, and it is enabled by authenticating the client as well as the server. Audit trails of attempted access also help in identifying unauthorized access.
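Authenticating a client to a server can be illustrated with a toy challenge-response exchange; the shared key shown is an assumption, and a real deployment would rely on established protocols such as TLS rather than this sketch:

```python
import hashlib
import hmac
import secrets

# Toy challenge-response authentication: the client proves knowledge of a
# shared secret without ever sending it over the network. The key here is
# an illustrative assumption.
shared_secret = b"per-client key provisioned out of band"

def respond(challenge: bytes, secret: bytes) -> str:
    # Key an HMAC over the server's random challenge.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

challenge = secrets.token_bytes(16)               # server sends a fresh nonce
client_answer = respond(challenge, shared_secret)

# The server recomputes the expected answer and compares in constant time.
expected = respond(challenge, shared_secret)
print("authenticated:", hmac.compare_digest(client_answer, expected))
```

Because the challenge is random each time, an eavesdropper who captures one exchange cannot replay it later, which is precisely the eavesdropping concern raised above.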

Just as the client/server architecture must be secured, so must the web-enabled database and the web pages that access it. Because the web environment is a convenient point of access for any type of user, securing it is essential to securing the data. With the explosion of e-commerce on the web, data especially needs protection or its intended use will be compromised. Privacy is another concern in the web environment: keeping personal and private data confidential is essential to protecting each user’s privacy.

Securing the data:

We have discussed some ways of securing data above, in the form of passwords, physical security, and authentication, but now we discuss concrete procedures and actions that can be taken to protect data at MVCH. Views, assertions, checks, domains, and other integrity controls are all DBMS features that put security measures into place.

Views, created through SQL, restrict access to certain data by presenting a dynamically derived, limited subset of data from a base table. Assertions are constraints the database must always satisfy; any transaction that would violate an assertion is rejected. Domains allow a user-defined data type to be created once and then reused by columns to restrict the values they accept. All of these constraints and security measures should be initiated by the security governance plan at MVCH.
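A minimal sketch of a view and a domain-style CHECK constraint, again using SQLite; the table, columns, and department codes are illustrative assumptions:

```python
import sqlite3

# Illustrative schema; the table, columns, and department codes are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,
        name   TEXT,
        salary REAL,
        dept   TEXT CHECK (dept IN ('ER', 'ICU', 'ADMIN'))  -- domain-style constraint
    )""")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)",
                 [(1, "A. Adams", 91000.0, "ER"),
                  (2, "B. Brown", 72000.0, "ICU")])

# The view exposes a dynamically derived, restricted slice of the base table:
# users of the view see names and departments but never salaries.
conn.execute("CREATE VIEW staff_directory AS SELECT name, dept FROM employee")
print(conn.execute("SELECT name, dept FROM staff_directory ORDER BY name").fetchall())

# The CHECK constraint rejects any value outside the allowed domain.
try:
    conn.execute("INSERT INTO employee VALUES (3, 'C. Cole', 50000.0, 'SPA')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

Granting users access to the view rather than the base table hides the salary column entirely, which is the access-restriction role the text assigns to views.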

Other procedures, policies, and organizational structures are also available for securing data. Authorization rules, like many other techniques, restrict access, allowing, for example, read-only privileges on a record of interest. Authorization rules use matrix tables, stored in the repository, that show the extent to which each user can access each piece of data.
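Such an authorization matrix can be sketched in a few lines; the roles, objects, and privileges shown are illustrative assumptions:

```python
# Rows are subjects (roles), columns are objects, and cells are the granted
# privileges. The roles, objects, and privileges here are illustrative.
AUTH_MATRIX = {
    ("nurse",  "patient_record"): {"read"},
    ("doctor", "patient_record"): {"read", "update"},
    ("clerk",  "billing"):        {"read", "insert"},
}

def authorized(role: str, obj: str, action: str) -> bool:
    # Default-deny: anything not explicitly granted in the matrix is refused.
    return action in AUTH_MATRIX.get((role, obj), set())

print(authorized("nurse", "patient_record", "read"))    # -> True
print(authorized("nurse", "patient_record", "update"))  # -> False
```

The default-deny lookup mirrors how a DBMS consults its stored authorization tables: a privilege exists only if the matrix explicitly grants it.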

Encryption is another powerful tool for securing data at MVCH. Since no protection of data currently exists at MVCH, encryption techniques will be put into place throughout the new information system. Encryption scrambles data into an unreadable form using either a single shared key or a pair of keys. With a single secret key, the author and the user share the same key, which both encrypts and decrypts the data. With a public/private key pair, anyone may encrypt a message using the recipient’s public key, but only the recipient, who holds the matching private key, can decrypt it.
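The two-key idea can be illustrated with textbook RSA using deliberately tiny primes; this is purely a teaching sketch, and a production system would use a vetted cryptographic library with much larger keys:

```python
# Textbook RSA with deliberately tiny primes; requires Python 3.8+ for the
# modular inverse via pow(). Real systems use vetted libraries and large keys.
p, q = 61, 53
n = p * q                    # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent: (e, n) is the public key
d = pow(e, -1, phi)          # private exponent: (d, n) is the private key

message = 42                 # a message encoded as an integer smaller than n
ciphertext = pow(message, e, n)     # anyone can encrypt with the public key
recovered  = pow(ciphertext, d, n)  # only the private-key holder can decrypt
print(recovered)  # -> 42
```

The ciphertext is unreadable without d, so the public key can be published freely while the private key stays with the recipient, exactly the separation the paragraph above describes.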

The new information system at MVCH will require that many security features and protections be put into place, as described above. Data is only as good as it is secure: once compromised, it is of little value and may even be detrimental to the organization. It is therefore imperative that a security governance program be established at MVCH under the lead of the data administrator and the database administrator. Securing the data that is vital to MVCH’s competitive advantage is no small task, but with the protection controls discussed, the data can remain secure and provide MVCH with a safe and useful system.

Business Continuity Planning (BCP):

BCP is essential for MVCH. BCP is an organization-wide strategy that attempts to cover every aspect of disaster. Because most organizations have viewed BCP as an extra expense with mediocre results, it has not been an activity they embrace; since 9/11, however, organizations have come to see BCP as essential to their existence. For instance, only 44% of companies that go through a fire remain in business after the disaster, and of those only 33% survive longer than three years. The need for a BCP is clear.

At MVCH, the BCP initiative must be supported by the entire organization. Executives must be behind the plan, as must managers and directors, for it is they who can push the ingredients of a BCP through to fruition.

At this point MVCH must assemble the team that will design the actual BCP. Management of the BCP should be established, along with a timeframe and initial cost estimates for the proposed plan. Next, the team must identify risks and conduct risk assessments to determine which risks the BCP will address. Risks that are “technical, economic, internal, external, human or natural” (Naef) must be considered under the BCP.

Managing these risks is the next activity to consider. Preventive, detective, and reactive measures should all exist under the risk management assessment. For instance, it may be prudent to enlist outside help or to implement safeguards and controls that keep the BCP effective.

Next, a business impact analysis will help MVCH recover from a disaster. With a good impact analysis, management can determine, during the recovery phase, where steps must be taken to recover from unexpected circumstances.

Now the BCP can be created at MVCH. Based on the descriptions above, the BCP will address every area of disaster and allow a fast and complete recovery. Most importantly, the BCP must be tested in each area of concern, with the tests revealing the extent and effectiveness of the plan. “We see far too many Business Continuity Plans and or Disaster Recovery Plans that whilst they have been tested were done so in unrealistic ideal conditions and thus we do not truly recognize what really happens in a crisis” (Spinks). The tests and their results should align with the business needs of MVCH; to keep MVCH a competitive hospital, the BCP should cover the areas MVCH deems most important to its success.

Once the plan is devised, it is essential that it be revised on a continuing basis. MVCH will undergo foreseen and unforeseen changes as it grows within the surrounding community. As a result, the BCP must be reassessed regularly to meet the changing environment and, consequently, the changing risks associated with MVCH.

Under the BCP, preventing the loss, unavailability, and damage of data and databases is essential to a dependable, efficient database. Database backup and recovery is therefore essential within MVCH’s new information system. See Appendix A for detailed steps that will ensure the database is protected.

Finally, it is the employees of MVCH who will put the BCP into practice. Everyone involved should be educated on the BCP, because it is they who will react to or prevent incidents.

Also of note is the communication aspect of recovery within a BCP. The recovery process and its final results should be communicated to employees, executives, patients, shareholders, and the general public. Having recovered from a disaster is not sufficient by itself: if these concerned parties are not informed of the BCP or its recovery results, they will remain in the dark and be unable to perform their usual functions.

MVCH’s adoption of a good BCP will ensure that the new information system performs, and continues to perform, even in the event of disaster. Along with the information system, the hospital itself can continue its duties, confident that a disaster of unforeseen consequences will not bring it down.

Data Integration Strategy:

The current data structure at MVCH is inadequate: each department runs its own database, separate from the others, to satisfy its own purposes. Consolidating this data into a single, enterprise-wide database should be MVCH’s ultimate goal. Such a database provides each department with the data it needs to perform its tasks while making that data transparently available to every function in every department of the hospital as a whole.

A more detailed and technical look at data integration is offered by Enterprise Application Integration (EAI) and Enterprise Information (EI). EAI converts one data type to another, through middleware, so that one application can share data with another; metadata reformatting is a key aspect of EAI. A key benefit of EAI is that it eliminates the need to redesign processes and systems to achieve data integration, leaving existing systems alone. This matters at MVCH because data processes and systems would not have to be overhauled: EAI can integrate data without the hassle of completely redesigning the data structure.

However, EAI does not deliver the high-level business intelligence that comes from an enterprise database. And although EAI requires no redesign of MVCH’s current data architecture, the old, separated data and architecture now in use will in fact be redesigned anyway to achieve the client/server architecture.

EI would be the better choice for integrating data at MVCH because it requires a redesign of existing data structures, something MVCH, as mentioned above, will need to do anyway. EI also achieves a truly centralized data approach by establishing a common set of metadata, which not only centralizes data but ensures that application functions share data definitions as well. As a result, data processes are standardized and the data they produce is accurate. If an organization like MVCH can take on the complex task of redesign, EI, which establishes a centralized database in which all data is shared, is certainly worth the effort.



Appendix A: Database Backup and Recovery

As mentioned above, databases can become damaged, unavailable, or lost, and many causes are the culprit. Program errors, network failures, human error, hardware failures, incorrect or invalid data, viruses, and natural catastrophes can all leave a database unfit for use. The DBA must mitigate these events, in a timely manner, by adopting one or a combination of four facilities: backup facilities, journalizing facilities, a checkpoint facility, and a recovery manager. MVCH should adopt all of these to ensure its database remains fit.


Backup Facilities: The DBMS should provide backup facilities that produce a backup copy of the entire database, including database files and journals, on a daily basis. The backup facilities should also produce a copy of the repository, indexes, and other database objects. Thus, in the event that the database goes down, the copy is used to restore it to its original state.

Because the database at MVCH will need to stay available, a hot backup is recommended over a cold one: a hot backup shuts down only portions of the database, keeping most of the data available for use. Incremental backups are also recommended; they copy only the changes made since the last backup rather than everything, which in turn allows a longer interval between full backups.

Finally, backup facilities can automatically make copies, or fallback copies, of the database in real time as it is updated. Stored on separate disk drives, these provide a dynamic copy of the database whenever needed; this procedure is recommended at MVCH.

Journalizing Facilities: Journalizing facilities provide an audit trail of database changes and transactions. Along with a complete backup, transaction logs, which record the essential data changes for each transaction, should be written several times an hour. The database change log holds before- and after-images of records that have been changed; a record can be restored to its before-image when the after-image contains unfit data.

Checkpoint Facility: With this facility, all transactions against the database are brought up to date, and the database then enters a quiet state in which the transaction logs are synchronized with it. The DBMS then writes a special record to the log, called a checkpoint record, which is a snapshot of the database state and contains all the information necessary to restart the database.
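The checkpoint idea, writing a restartable snapshot at a quiet point, can be sketched as follows; the state contents and file format are illustrative assumptions:

```python
import json
import pathlib
import tempfile

# Illustrative in-memory database state at a quiet point: logs and data in sync.
state = {"applied_txns": 42, "balances": {"A": 100.0, "B": 50.0}}

# Write the checkpoint record: a snapshot with everything needed to restart.
ckpt = pathlib.Path(tempfile.mkdtemp()) / "checkpoint.json"
ckpt.write_text(json.dumps(state))

# On restart, recovery begins from the snapshot instead of replaying the
# entire transaction log from the beginning.
restarted = json.loads(ckpt.read_text())
print(restarted["applied_txns"])  # -> 42
```

Recovery then only needs to replay log entries written after the checkpoint, which is why checkpointing shortens restart time.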

Recovery Manager: The recovery manager is a module of the DBMS that restores the database to a correct condition when a problem arises and then resumes transaction processing from the corrected point. It uses the transaction logs and backup facilities described above to restore the database.

The procedures and policies that should be put into place at MVCH to recover from a database failure are disk mirroring, restore/rerun, maintaining transaction integrity, backward recovery, and forward recovery.

Disk mirroring keeps a copy of the database that mirrors the original. If the original database fails, the mirrored copy is used for subsequent transactions. This is the fastest type of recovery and provides high availability for the database.

Restore/rerun is a technique in which the day’s transactions are reprocessed against a backup copy of the database up to the point of failure. Reprocessing takes time and may also replay the transactions out of their original sequence, so restore/rerun is recommended only as a last resort.

In performing the various recovery activities mentioned above, it becomes necessary to maintain transaction integrity. Transaction integrity refers to the transactions that execute the decisions of the business; these transactions need to be free of errors, and thus of downtime. Four properties, when adopted for transaction processing, ensure reliable and error-free transactions. They are referred to as the ACID properties of a transaction.

Atomic: The transaction must be committed in its entirety, or a rollback occurs and the transaction is aborted.

Consistent: Any constraints that hold on the data before a transaction executes must also hold after it executes. For example, a rule that an account balance equals its prior balance minus the amount withdrawn must be true both before and after the transaction, ensuring that the relevant balances reflect the change.

Isolated: This property addresses concurrency. A transaction’s changes are unavailable to other users of the database until it completes; updates and some reads must wait while the records in question are updated and committed before subsequent transactions can begin.

Durable: Committed transactions are final; no failure of the database can reverse their effects.
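The atomic property can be demonstrated with SQLite, whose connection context manager commits on success and rolls back on failure; the accounts and amounts are illustrative:

```python
import sqlite3

# Illustrative accounts; a transfer must apply completely or not at all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100.0), ("B", 50.0)])
conn.commit()

try:
    with conn:  # context manager: COMMIT on success, ROLLBACK on any exception
        conn.execute("UPDATE account SET balance = balance - 60 WHERE id = 'A'")
        # This statement violates the primary key, so the whole transaction,
        # including the debit that already succeeded, is rolled back.
        conn.execute("INSERT INTO account VALUES ('A', 0)")
except sqlite3.IntegrityError:
    pass

print(conn.execute("SELECT id, balance FROM account ORDER BY id").fetchall())
# -> [('A', 100.0), ('B', 50.0)]: neither half of the failed transfer applied
```

Even though the debit statement itself succeeded, the rollback undoes it, leaving the database exactly as it was before the transaction began.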

Backward recovery means that unwanted changes are rolled back to an earlier time, reversing their effects. The before-image of each record is used to restore it to a previous state. Backward recovery is used to reverse, or UNDO, the effects of abnormal transactions.

Forward recovery uses the after-images of good transactions and applies them to an earlier version of the database when trouble appears.
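Backward and forward recovery with before- and after-images can be sketched as follows; the record keys, values, and log format are illustrative assumptions:

```python
# A change log of before- and after-images; the record format is illustrative.
database = {"rec1": "good", "rec2": "stale"}

change_log = [
    {"key": "rec2", "before": "stale", "after": "fresh"},   # a good transaction
    {"key": "rec1", "before": "good",  "after": "corrupt"}, # an abnormal one
]

def redo(db, entry):
    db[entry["key"]] = entry["after"]    # forward recovery: reapply after-image

def undo(db, entry):
    db[entry["key"]] = entry["before"]   # backward recovery: restore before-image

for entry in change_log:                 # both changes are applied...
    redo(database, entry)
undo(database, change_log[1])            # ...then the abnormal one is reversed
print(database)  # -> {'rec1': 'good', 'rec2': 'fresh'}
```

Forward recovery replays the good after-images onto an older copy of the database, while backward recovery uses the before-image to cancel the abnormal change; the two together restore a correct state.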




References:

Hoffer, J., Prescott, M., & Topi, H. (2009). Modern Database Management, chapters 12–13.

Naef, W. (2003). “Business Continuity Planning – a safety net for businesses.”
http://www.iwar.org.uk/infocon/business-continuity-planning.htm
Retrieved August 2009.


MVCH Website (n.d.). “Data Driven Enterprise Integration.”
http://media.capella.edu/CourseMedia/TS5352/mountainView/2b_dataDriven.aspx
Retrieved August 2009.

								
To top