DB Integration by MAYURESH4U


                                                            1. INTRODUCTION

        The Integrated Database Environment is a concept that supports all types of databases
while requiring only a single licensed copy of the database software to be installed on the
server. This application will provide a means for that licensed copy to be shared among all the
clients on the network.

       The proposed system will provide many advantages over existing systems, which offer
the facility of either conversion or fetching only.

       This application will be able to execute all SQL commands. It takes a command from the
user and passes it to the server. The command is executed on the server, and the requested
data is sent back to the client and displayed on the application's GUI. If the database is not
compatible, it is first converted to the specified format.
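As a rough sketch of this flow, the snippet below takes an SQL command, executes it against the server, and returns either rows (for display on the GUI) or an affected-row count. Python's sqlite3 is used purely as a stand-in for the shared, licensed server database; the actual client–server transport is not specified here.

```python
import sqlite3

def execute_command(connection, sql, params=()):
    """Pass an arbitrary SQL command to the server and return the result.

    Illustrative sketch only: sqlite3 stands in for the shared server
    database the application would actually talk to over the network.
    """
    cursor = connection.cursor()
    cursor.execute(sql, params)
    if cursor.description is None:   # DDL/DML: no result set to display
        connection.commit()
        return cursor.rowcount
    return cursor.fetchall()         # SELECT: rows for the client GUI

# Example: one shared connection serving client-submitted commands.
conn = sqlite3.connect(":memory:")
execute_command(conn, "CREATE TABLE emp (id INTEGER, name TEXT)")
execute_command(conn, "INSERT INTO emp VALUES (?, ?)", (1, "Asha"))
rows = execute_command(conn, "SELECT name FROM emp")
print(rows)  # [('Asha',)]
```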

        The application will allow the user to use any type of database. A facility for conversion
of one type of database to another will also be provided.


    Similarly, it may be necessary to move from one database vendor to another, or to upgrade
the version of database software being used. In these cases a physical transformation process
may be required since the underlying data format can change significantly. This may or may not
affect behaviour in the applications layer, depending largely on whether the data manipulation
language or protocol has changed – but modern applications are written to be agnostic to the
database technology so that a change from Sybase, MySQL, DB2 or SQL Server to Oracle
should only require a testing cycle to be confident that both functional and non-functional
performance has not been adversely affected.

        Changing application vendor – for instance a new CRM or ERP platform – will
inevitably involve substantial transformation as almost every application or suite operates on its
own specific data model. Further, to allow the application to be sold to the widest possible
market, commercial off-the-shelf packages are generally configured for each customer using
metadata. Application programming interfaces (APIs) are supplied to protect the integrity of the
data they have to handle. Use of the API is normally a condition of the software warranty,
although a waiver may be allowed if the vendor's own or certified-partner professional services
and tools are used.

        Data migration is the process of making a copy of data and moving it from one device or
system to another, preferably without disrupting or disabling active business processing. After
data is transferred, processing uses the new device or system. There are a variety of business
drivers that cause enterprises to undertake a data migration, including:

      Deployment of a new operating or application system
      Take-on of new users or businesses onto an existing system (mergers, acquisitions, etc.)
      Server or storage technology replacement or upgrade
      Server or storage consolidation
      Changes to database schemas and structure
      Relocation of the data center
      Server or storage equipment maintenance
      Workload balancing or other performance-related tuning

    In most organizations, data migrations are routine operations. According to a 2005 survey by
Softek, more than 60% of respondents migrate data quarterly or more often, with 19% migrating
weekly. Migrations therefore have to be done effectively and in a controlled environment to
assure data integrity, compatibility, low downtime, and security, as well as a seamless process.


    Migration addresses the possible obsolescence of the data carrier, but it does not address the
fact that certain technologies which run the data may be abandoned altogether. Migration also
has inherent drawbacks:

      Time consuming – migration is a continual process, which must be repeated every time a
medium reaches obsolescence, for all data objects stored on that medium.

       Costly – an institution must purchase additional data storage media at each migration.


    Database migration is a critical job that involves moving data from an older application to its
upgraded version. It is a challenging job, with significant risk involved in maintaining the
standards of both applications.

    Binary offers a complete spectrum of services for database migration. We understand and
plan your database migration needs, conduct the migration, test it, and deliver it successfully to
you. We suggest the best process and work to a plan that is cost-effective and completed in the
shortest possible time span.
We realize the importance of a flexible attitude and are open to better ideas and changes at any
point. Combining the latest technologies with our automated tools creates a quicker turnaround,
helping you achieve your objectives faster.

        It has become an unavoidable necessity in the IT industry today to move legacy systems to
their latest versions, enabling you to face the toughest of challenges with a smooth process. We
help you move ahead with the latest technologies to support your system while at the same
time keeping your data safe.
                                                            2. THE LITERATURE STUDY

        There are technologies that run parallel to the proposed system. These technologies are:
2.1.1 Database Management Systems (Oracle, SQL Server, and Access):
        The Oracle Database (commonly referred to as Oracle RDBMS or simply as Oracle) is an
object-relational database management system (ORDBMS) produced and marketed by Oracle
Corporation.
        Users of Oracle databases refer to the server-side memory structure as the SGA
(System Global Area). The SGA typically holds cache information such as data buffers, SQL
commands, and user information. In addition to storage, the database consists of
online redo logs (or logs), which hold transactional history. Processes can in turn archive the
online redo logs into archive logs (offline redo logs), which provide the basis (if necessary)
for data recovery and for some forms of data replication. If the Oracle database administrator has
implemented Oracle RAC (Real Application Clusters), then multiple instances, usually on
different servers, attach to a central storage array. This scenario offers advantages such as better
performance, scalability and redundancy. However, support becomes more complex, and many
sites do not use RAC. In version 10g, grid computing introduced shared resources where an
instance can use (for example) CPU resources from another node (computer) in the grid.
         The Oracle DBMS can store and execute stored procedures and functions within
itself. PL/SQL (Oracle Corporation's proprietary procedural extension to SQL), or the object-
oriented language Java can invoke such code objects and/or provide the programming structures
for writing them.
       The Oracle RDBMS stores data logically in the form of tablespaces and physically in the
form of data files ("datafiles"). Tablespaces can contain various types of memory segments,
such as data segments, index segments, etc. Segments in turn comprise one or more extents.
Extents comprise groups of contiguous data blocks. Data blocks form the basic units of data
storage.
        There is also a partitioning feature available in newer versions of the database, which
allows tables to be partitioned based on different sets of keys. Specific partitions can then be
easily added or dropped to help manage large data sets.
        Oracle database management tracks its computer data storage with the help of
information stored in the SYSTEM tablespace. The SYSTEM tablespace contains the data
dictionary—and often (by default) indexes and clusters. A data dictionary consists of a special
collection of tables that contains information about all user-objects in the database. Since version
8i, the Oracle RDBMS also supports "locally managed" tablespaces which can store space
management information in bitmaps in their own headers rather than in the SYSTEM tablespace
(as happens with the default "dictionary-managed" tablespaces). Version 10g and later
introduced the SYSAUX tablespace which contains some of the tables formerly in
the SYSTEM tablespace.
       Microsoft SQL Server is a relational database server developed by Microsoft. It is a
software product whose primary function is to store and retrieve data as requested by other
software applications, be it those on the same computer or those running on another computer
across a network (including the Internet). There are at least a dozen different editions of
Microsoft SQL Server aimed at different audiences and for different workloads (ranging from
small applications that store and retrieve data on the same computer, to millions of users and
computers that access huge amounts of data from the Internet at the same time).
        Microsoft SQL Server also allows user-defined composite types (UDTs) to be defined
and used. It also makes server statistics available as virtual tables and views (called Dynamic
Management Views or DMVs). In addition to tables, a database can also contain other objects
including views, stored procedures, indexes and constraints, along with a transaction log. A SQL
Server database can contain a maximum of 2^31 objects, and can span multiple OS-level files with
a maximum file size of 2^20 TB. The data in the database are stored in primary data files with an
extension .mdf. Secondary data files, identified with a .ndf extension, are used to store optional
metadata. Log files are identified with the .ldf extension.
        SQL Server ensures that any change to the data is ACID-compliant, i.e. it
uses transactions to ensure that the database will always revert to a known consistent state on
failure. Each transaction may consist of multiple SQL statements all of which will only make a
permanent change to the database if the last statement in the transaction (a COMMIT statement)
completes successfully. If the COMMIT successfully completes the transaction is safely on disk.
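A minimal illustration of this all-or-nothing behaviour, using Python's sqlite3 purely as a stand-in for SQL Server (the table and amounts are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    # Both statements succeed or neither does: a transfer as one transaction.
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
    conn.execute("INSERT INTO no_such_table VALUES (1)")  # fails mid-transaction
    conn.commit()
except sqlite3.OperationalError:
    conn.rollback()  # the database reverts to the last consistent state

balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100 -- the partial debit was rolled back
```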
        SQL Server also includes an assortment of add-on services. While these are not essential
for the operation of the database system, they provide value added services on top of the core
database management system. These services either run as part of some SQL Server
component or out-of-process as a Windows service, and they present their own APIs to control
and interact with them.
        Microsoft Office Access, previously known as Microsoft Access, is a relational database
management system from Microsoft that combines the relational Microsoft Jet Database
Engine with a graphical user interface and software-development tools. It is a member of
the Microsoft Office suite of applications, included in the Professional and higher editions or
sold separately. On May 12, 2010, Microsoft released the current version, Microsoft Access
2010, as part of Office 2010; Microsoft Office Access 2007 was the prior version. MS Access
stores data in its own format based on the Access Jet Database Engine. It can also import or link
directly to data stored in other applications and databases.
Access stores all database tables, queries, forms, reports, macros, and modules in the Access Jet
database as a single file.
        For query development, Access offers a "Query Designer", a graphical user interface that
allows users to build queries without knowledge of the SQL language. In the Query
Designer, users can "show" the data sources of the query (which can be tables or queries)
and select the fields they want returned by clicking and dragging them into the grid. One can set
up joins by clicking and dragging fields in tables to fields in other tables. Access allows users to
view and manipulate the SQL code if desired. Any Access table, including linked tables from
different data sources, can be used in a query.
        Access also supports the creation of "pass-through queries". These snippets of SQL code
can address external data sources through the use of ODBC connections on the local machine.
This enables users to interact with data stored outside the Access program without using linked
tables or Jet. Users construct pass-through queries using the SQL syntax supported by the
external data source.
        When developing reports linked to queries, placing or moving items in the design view of
the report causes Access to run the linked query in the background on any placement or
movement of an item in that report. If the report is linked to a query that takes a long time to
return records, this means waiting until the query has run before you can add, edit, or move
the next item in the report (this feature cannot be turned off).
       Non-programmers can use the macro feature to automate simple tasks through a series of
drop-down selections. Macros allow users to easily chain commands together such as running
queries, importing or exporting data, opening and closing forms, previewing and printing reports,
etc. Macros support basic logic (IF-conditions) and the ability to call other macros. Macros can
also contain sub-macros, which are similar to subroutines. Access 2007 enhanced macros with
error handling and support for temporary variables. Access 2007 also introduced embedded
macros that are essentially properties of an object's event, eliminating the need to store macros
as individual objects. Macros, however, are limited in their functionality by a lack of
programming loops and advanced coding logic. Most professional Access developers use the
VBA programming language for a richer and more powerful development environment.
2.1.2 Database Conversions (Intelligent Converters, DRPU Database)
        Data conversion is the conversion of computer data from one format to another.
Throughout a computer environment, data is encoded in a variety of ways. For
example, computer hardware is built on the basis of certain standards, which requires that data
contains, for example, parity checks. Similarly, the operating system is predicated on certain
standards for data and file handling. Furthermore, each computer program handles data in a
different manner. Whenever any one of these variables is changed, data must be converted in
some way before it can be used by a different computer, operating system or program. Even
different versions of these elements usually involve different data structures. For example, the
changing of bits from one format to another, usually for the purpose of application
interoperability or of the capability of using new features, is merely a data conversion. Data
conversions may be as simple as the conversion of a text file from one character encoding system to
another; or more complex, such as the conversion of office file formats, or the conversion of
image and audio file formats.
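For instance, the character-encoding conversion mentioned above, the simplest case, amounts to a decode step in the old encoding followed by an encode step in the new one:

```python
# Convert text from Latin-1 to UTF-8 -- the simplest kind of data conversion.
src = "café £5".encode("latin-1")   # legacy bytes, one byte per character
text = src.decode("latin-1")        # 1. decode using the old encoding
converted = text.encode("utf-8")    # 2. re-encode in the new encoding

assert converted.decode("utf-8") == "café £5"
print(len(src), len(converted))  # 7 9 -- non-ASCII chars now take two bytes
```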
There are many ways in which data is converted within the computer environment. This may be
seamless, as in the case of upgrading to a newer version of a computer program. Alternatively,
the conversion may require processing by a special conversion program, or it may
involve a complex process of going through intermediary stages, or complex
"exporting" and "importing" procedures, which may involve converting to and from a
tab-delimited or comma-separated text file. In some cases, a program may recognize several
data file formats at the data input stage and also be capable of storing the output data in a
number of different formats. Such a program may be used to convert a file format. If the source
format or target format is not recognized, a third program may at times be available which
permits the conversion to an intermediate format, which can then be reformatted using the first
program.
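A small sketch of that export/import route: records are written out to a neutral comma-separated form and read back in on the other side. Note the classic conversion hazard, visible in the output, that type information (here, integer IDs) is flattened to text by the intermediate format:

```python
import csv
import io

# Source-side records -> intermediate CSV -> target-side consumer.
records = [{"id": 1, "name": "Asha"}, {"id": 2, "name": "Ravi"}]

# "Export" to a comma-separated intermediate file (an in-memory buffer here).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(records)

# "Import" on the other side, reading the neutral CSV back into structures.
buf.seek(0)
imported = list(csv.DictReader(buf))
print(imported)  # note the ids came back as strings, not integers
```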
       Different software packages, such as DRPU Database Converter and Intelligent
Converters, are available for this.
We can enter data into an Oracle application in several ways. Most of us can guess three:

      The Data can be entered using the application Screens.
      The data can be entered using Oracle's Open System Interface.
      The data can be stored in the database table directly

But those who work in a more complex business environment may figure out some more, like:

      3rd Party Software (for the third option)
      Taviz (formerly SmartDB), which is an EAI tool
      Crossroads
      See Beyond (formerly STC)
      Vitria
      Data Loader: They have macro enabled spreadsheet kind of tool
      More4apps

And there are many more, but most of these are used for master data, and in a few cases for
transaction data via an Open Interface, if one is available.

The importance of data conversion/migration and interfaces within any ERP implementation
project can't be ignored. Since ERP mostly deals with data, which finally leads to information, it
is equally important to understand how "data" matters in any ERP system, especially in the
implementation phase, no matter how simple and unified the operation is. Since I have been
involved in some big Oracle Apps transformation projects, it is a good cause to share some
information about integration touch points, conversion/migration, and interface development
with someone who is very new to the ERP world as well as to Oracle Applications.

Let's start with some common situations. We have three cases:

   1. The customer is running some home-grown IT application which caters to most of the
      company's needs. Now management has decided to go for an ERP solution; the
      question is what will happen to the data already in the existing application?
   2. Another situation: the customer is already using ERP.
      a. They want to upgrade to a higher version, presuming the structure of some tables has
      changed (let's say 10.7 to 11i).
      b. The company is acquired by or merged with some other company, and the whole data
      set needs to move into the parent or child company.
      c. They want to enable some additional modules within the existing application.
   3. There is some data interacting with both of the above cases, irrespective of database
      technology, where data is coming and going based on need.

The answer to case 1 is data migration, case 2 is better described as data conversion, and the
third is popularly known as an interface. The way these work does not differ much, but it is
important to understand the definitions and the need. I have never found any big difference
between migration and conversion unless there is a huge transformation of data; the only thing
to note is that conversion might require fewer steps to perform, on the assumption that
setup-related work has already been taken care of before the activity is executed.

Let's understand it like this: data migration is a process of moving required (and most often very
large) volumes of data from our clients' existing systems to new systems. Existing systems can
be anything from custom-built IT infrastructures to spreadsheets and standalone databases. Data
conversion can be defined as a process of converting data from one structural form to another to
suit the requirements of the system to which it is migrated.

Let's take a deep dive to understand this better:

Why is conversion/migration so important in ERP?

Before go-live in the production environment, the required master data, open transaction data,
and historical transaction data need to be imported from the old legacy applications into Oracle
Applications. Since data structures and data design in legacy systems differ from those of
Oracle Applications, the data needs to be massaged/converted, satisfying the business rules, to
suit the Oracle requirements. Initial data can be migrated by any of the means discussed above,
depending upon certain parameters like volume, usage, complexity, business rules, etc.

How we Define Data Conversion

      Process where existing data from the client's old system is extracted, cleansed, formatted,
       and installed into a new system.
      These can be manual or automated.
      The big difference is that these are one-time-only processes that require extensive testing
       and preparation.
      They must be executed and performed before a system goes into production.
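The extract–cleanse–format–install pipeline described above can be caricatured as a toy sketch; all field names and the cleansing rule here are invented for illustration:

```python
# One-time conversion: extract -> cleanse -> format -> install.
# Names and rules are illustrative; a real conversion maps the client's
# legacy schema to the new system's schema under its business rules.
legacy_rows = [
    {"CUST_NAME": "  acme corp ", "PHONE": "022-1234"},
    {"CUST_NAME": "", "PHONE": "022-5678"},  # bad record: missing name
]

def cleanse(row):
    """Trim and normalize a legacy row; return None for unusable records."""
    name = row["CUST_NAME"].strip().title()
    return {"name": name, "phone": row["PHONE"]} if name else None

converted = [r for r in (cleanse(row) for row in legacy_rows) if r]
rejected = len(legacy_rows) - len(converted)
print(converted, rejected)
# [{'name': 'Acme Corp', 'phone': '022-1234'}] 1
```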

What Is an Interface Then?

      These are programs connecting two systems in order to synchronize the data between them.
      They can be manual, batch, or real-time.
      They are used repeatedly and should therefore be designed and constructed in the most
       efficient manner possible.
      They can be triggered by an event (such as running a concurrent program) or be
       scheduled to run at a certain time.
      They can be very costly to construct and maintain.

Conversion/migration/interface development has a life cycle

A migration or conversion has a life cycle because significant effort is required in its design,
development, and implementation:

      Functional Designer works with business owners to determine the data mapping and
       complete the functional design using the Design Templates.
      If the interface/conversion is automated, the Technical Designer converts functional
       requirements into technical specs for the construction of the interface programs.
      The developer uses the functional and technical designs to build and test the
       interface/conversion programs.
      More rounds of testing are done until the interface/conversion is migrated to the
       production environment for deployment.

Conversion is assumed to be a one-time activity, but it is never a small activity that can be
performed within a couple of days.

How do conversions and interfaces differ?

There are a good number of parameters on which they can be categorized. Take a few of them:

      Frequency
          o conversions are a one time event
          o interfaces are ongoing
      Occurrence in the project timeline
          o conversions executed before production
          o interfaces executed during production
      Manner of execution
          o conversions are batch
          o interfaces may be batch or real time
      Complexity
           o conversions can be very complex; it depends entirely upon the data mapping
           o coordinating with other systems makes interfaces more complex
      Maintenance
           o maintenance of an interface is a rather cost-intensive task

Interface Types

You have learned how an interface differs from a conversion/migration. Now let's take a look at
the types of interfaces. Normally, in any system, there are two kinds:

Inbound Interfaces

                   An inbound interface receives data from one system (legacy) and inserts into
                    Oracle open interface tables.
                   A typical inbound interface would follow these steps:
                        1. Extract data from legacy system into a flat file.
                        2. Use SQL*Loader or equivalent tool to upload information into a
                           temporary table.
                        3. Write a PL/SQL program to take data from the temp table and insert
                           into the Open Interface Tables.
                        4. Through the concurrent manager in Oracle Applications, run the
                           standard Oracle Interface program to transform interface tables into
                           Oracle data.
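Steps 1–3 above can be sketched as follows, with sqlite3 standing in for the Oracle staging schema and all table, column, and validation details invented for illustration; step 4 would be the standard Oracle import concurrent program:

```python
import csv
import io
import sqlite3

# Step 1: the flat file extracted from the legacy system.
flat_file = "101,Widget\n102,Gadget\n,Broken\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_items (item_id TEXT, descr TEXT)")
conn.execute("CREATE TABLE item_interface (item_id INTEGER, descr TEXT)")

# Step 2: load the flat file into a temporary table (SQL*Loader's role).
for item_id, descr in csv.reader(io.StringIO(flat_file)):
    conn.execute("INSERT INTO temp_items VALUES (?, ?)", (item_id, descr))

# Step 3: validate and move good rows into the open interface table
# (in Oracle this would be the PL/SQL program).
for item_id, descr in conn.execute("SELECT item_id, descr FROM temp_items").fetchall():
    if item_id.isdigit():  # a stand-in for real validation rules
        conn.execute("INSERT INTO item_interface VALUES (?, ?)",
                     (int(item_id), descr))

count = conn.execute("SELECT COUNT(*) FROM item_interface").fetchone()[0]
print(count)  # 2 -- the row with a missing item_id was rejected
```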

Outbound Interfaces

o An outbound interface takes data from Oracle tables and inserts it into an external system (via
tables or flat file).
o A typical outbound interface would follow these steps:
- Write a PL/SQL program to extract data from Oracle base tables into a flat file.
- Use a custom program to read that data and post it into the legacy system.
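A corresponding outbound sketch, again with sqlite3 standing in for the Oracle base tables and an invented pipe-delimited flat-file layout:

```python
import sqlite3

# Stand-in "base table" with a couple of rows to extract.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gl_balances (account TEXT, amount REAL)")
conn.executemany("INSERT INTO gl_balances VALUES (?, ?)",
                 [("1000", 250.0), ("2000", -250.0)])

# Extract the rows into a pipe-delimited flat file for the legacy system.
lines = [f"{acct}|{amt:.2f}"
         for acct, amt in conn.execute(
             "SELECT account, amount FROM gl_balances ORDER BY account")]
flat_file = "\n".join(lines)
print(flat_file)
# 1000|250.00
# 2000|-250.00
```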
Do we have some other standard ways to build interfaces?

      Open Interface: a table-based interface registered as a concurrent program
           o processes records in batches.
           o spawned (Pro*C) or PL/SQL-based programs.
      API (Application Program Interface): a parameter-based stored procedure
           o directly impacts base database tables.
           o may be called from Oracle open interfaces, Forms, or Reports.
      EDI (Electronic Data Interchange) uses industry-standard data
       definitions (ANSI X.12) for the transmission of documents such as POs, invoices, sales
       orders, etc. Oracle provides some EDI transactions through EDI Gateway.
      Enterprise Application Integration (EAI) solutions are often used when complex
       integration requirements exist.

What Is An Open Interface Table (OIT)?

      For inbound interfaces, the interface table is the intermediary table where data from the
       source application temporarily resides until it is validated and processed into an Oracle
       base table through a standard import concurrent program.
      Open Interface Tables are standard Oracle tables.
          o Oracle uses OITs to provide a simple interface to Oracle base tables.
           o Oracle publishes a list of all the open interfaces offered in its products.

Oracle Interface Program

      Most Oracle modules have standard import programs (concurrent processes) to facilitate
       custom inbound interfaces. The specific processing performed varies by application.
      These programs pull data from the open interface tables, validate the data, and then insert
       into one or more Oracle base tables.
      Upon successful completion of processing, the program deletes the processed rows from
       the interface table or marks them as completed.
      Depending on the import, errors can be viewed in various ways (exception reports, error
       tables, forms, etc…).
      Examples of standard import programs:

                       GL: Journal Import
                       AP: Payables Open Interface
                       AR: Customer Interface
                       INV : Item Import
                       AR - Autoinvoice
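The pull–validate–insert–mark cycle these import programs perform can be caricatured like this; the table names, statuses, and validation rule are all invented for illustration:

```python
import sqlite3

# Stand-in open interface table (oit) and base table, plus two staged rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE oit (id INTEGER, qty INTEGER, status TEXT, err TEXT);
CREATE TABLE base (id INTEGER, qty INTEGER);
INSERT INTO oit VALUES (1, 5, 'NEW', NULL), (2, -3, 'NEW', NULL);
""")

# Pull pending rows, validate each, insert good ones into the base table,
# and mark every interface row as completed or errored.
for row_id, qty in conn.execute(
        "SELECT id, qty FROM oit WHERE status='NEW'").fetchall():
    if qty >= 0:  # a stand-in for the import program's validation
        conn.execute("INSERT INTO base VALUES (?, ?)", (row_id, qty))
        conn.execute("UPDATE oit SET status='DONE' WHERE id=?", (row_id,))
    else:         # errored rows stay behind for exception reporting
        conn.execute("UPDATE oit SET status='ERROR', err='negative qty' "
                     "WHERE id=?", (row_id,))

statuses = conn.execute("SELECT id, status FROM oit ORDER BY id").fetchall()
print(statuses)  # [(1, 'DONE'), (2, 'ERROR')]
```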

   OK, that is all for the briefing on conversions and interfaces. I will write some more about the
   tools used for conversions/interfaces, discuss some granular details of a conversion/migration
   project, and share some information about how and where AIM documents fit into conversion
   and migration projects. So watch this space for more on conversions. Till then, your comments
   are welcome, and I request you to share any information related to these areas.

Extra Information, i.e., Negative Points

   1. Putting validation on either a table or a view does not make much difference. It depends
      upon how comfortable you are in recalling the table or view name with its
      corresponding columns.

       Just for example: validation for a CCID, either from the base table or from a view.

       It depends upon how you are using them. Moreover, there are some instances when views
       will not serve the purpose, as they are multi-org enabled and require the environment to
       be set for the operating unit (OU).

       Moreover, lookups are only one part of validation, and the best practice is to use the base
       table rather than the view.

       Another trick: if you are not sure of the column and table details, it is better to navigate
       to the Oracle form and get the information from the menu attached to that form.

       For conversion or migration, the points you should stress most are:
       1. Rewind process: how you are going to rewind (roll back) the whole process, since you
       are dealing with a huge amount of bulk data.

       2. Exception handling: your code should be smart enough in terms of reporting that
       exceptions can be easily identified and fixed.

       3. Loading data in batches: this is most important. Let's say you are dealing with 7 years
       of transaction data; it is not recommended to load it in one batch. Design your program
       for ease of use and with a range of parameters, so that other factors can be taken care of
       automatically.

       4. If you are retrieving a CCID, it is recommended to use the CCID function in code rather
       than putting in all the segment values; but again, this is a choice, and depends on the
       volume of data in a batch.
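As a minimal sketch of the batching point above, the pattern might look like the following; all names are illustrative, and `load_batch` stands in for whatever insert-and-commit routine the conversion program actually uses:

```python
def load_in_batches(rows, batch_size, load_batch):
    """Load bulk data one manageable chunk at a time instead of in one commit.

    batch_size would be a program parameter chosen from the data volume;
    a failure part-way through loses (and needs rewinding of) only the
    current batch, not the whole load.
    """
    loaded = 0
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        load_batch(batch)      # insert + commit one chunk
        loaded += len(batch)
    return loaded

# Toy usage: "loading" ten rows in batches of four.
committed = []
total = load_in_batches(list(range(10)), batch_size=4,
                        load_batch=committed.append)
print(total, [len(b) for b in committed])  # 10 [4, 4, 2]
```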
2.2 Advantages
2.2.1 Advantages of SQL Server:
       • SMP support
       • Cluster support
       • Cross-platform support
       • Transaction control
       • Replication
       • A rich SQL dialect
       • An extensive SQL function library
       • A sophisticated query optimizer
       • Row level locking
       • Triggers

2.2.2 Advantages of Microsoft Access:
             Users can create tables, queries, forms and reports, and connect them together
       with macros. Advanced users can use VBA to write rich solutions with advanced data
       manipulation and user control.
                The original concept of Access was for end users to be able to "access" data from
       any source. Other uses include: the import and export of data to many formats
       including Excel, Outlook, ASCII, dBase, Paradox, FoxPro, SQL Server, Oracle, ODBC,
       etc. It also has the ability to link to data in its existing location and use it for viewing,
       querying, editing, and reporting. This allows the existing data to change while ensuring
       that Access uses the latest data. It can perform heterogeneous joins between data sets
       stored across different platforms. Access is often used by people downloading data
       from enterprise level databases for manipulation, analysis, and reporting locally.
                There is also the Jet Database format (MDB or ACCDB in Access 2007), which
        can contain the application and data in one file. This makes it very convenient to
        distribute the entire application to another user, who can run it in disconnected
        environments.
             One of the benefits of Access from a programmer's perspective is its relative
       compatibility   with SQL (structured query     language)    —     queries    can     be
       viewed graphically or edited as SQL statements, and SQL statements can be used directly
        in Macros and VBA Modules to manipulate Access tables. Users can mix and use both
        VBA and "Macros" for programming forms and logic, which offers object-oriented
        possibilities. VBA can also be included in queries.
              Microsoft Access offers parameterized queries. These queries and Access tables
       can be referenced from other programs like VB6 and .NET through DAO or ADO. From
       Microsoft Access, VBA can reference parameterized stored procedures via ADO.
               The desktop editions of Microsoft SQL Server can be used with Access as an
       alternative to the Jet Database Engine. This support started with MSDE (Microsoft SQL
       Server Desktop Engine), a scaled down version of Microsoft SQL Server 2000, and
       continues with the SQL Server Express versions of SQL Server 2005 and 2008.
        Microsoft Access is a file server-based database. Unlike client–server relational database
        management systems (RDBMS), Microsoft Access does not implement database
        triggers, stored procedures, or transaction logging. Access 2010 includes table-level
        triggers and stored procedures built into the ACE data engine. Thus a Client-server
        database system is not a requirement for using stored procedures or table triggers with
        Access 2010. Tables, queries, forms, reports, and macros can now be developed
        specifically for web-based applications in Access 2010. Integration with Microsoft
        SharePoint 2010 is also greatly improved.
2.2.3 Conversion software (Intelligent Converters, DRPU Database):

       Work with all versions of MS SQL starting from 7.0
       Convert indexes with all necessary attributes
       Command line support
       Convert password protected MS Access databases
       Convert MS Access databases protected with user-level security
       Merge MS Access data into an existing MS SQL database
       Convert individual tables
       Easy-to-use wizard-style interface
       Full install/uninstall support

2.3 Disadvantages

2.3.1 Disadvantages of Using Access

    While there are many advantages to choosing Microsoft Access as your database
    management system, there are also a few disadvantages that you might want to consider.

    Concurrent Users
    Even though Microsoft has stated that Access is able to support 255 concurrent users, it is a
    more practical choice to select Access when the database will be used by only 15 to 20
    simultaneous users. If more than 20 users try to gain access to the database, performance
    and response time may begin to suffer.
    Multiple Windows Operating Systems
    Whenever various Windows operating systems like Windows 98, Windows ME, Windows
    2000, Windows XP, or Windows Vista are used to access the same database, the database
    may experience corruption issues. It is recommended that all Access users are operating
    with the identical operating system.
    Novice User – Design and Development Limitations
    A user who is new to software development may find that they are a bit overwhelmed when
    using Microsoft Access. In order to maximize the potential of Access, a thorough
    understanding of relational database design along with VBA coding is highly recommended.
    As with any software development tool, the ease of screen use, database performance,
    automation of data input, and report quality rely on the skill level and experience of the
    developer.
      Sensitive Data Needs
      If you have sensitive data needs, such as storing health and medical records, financial
      information, academic records, or employment data, you will require more extensive
      database security than Access can offer. One suggestion is to use SQL Server as the
      database engine with an Access front-end.
      Internet Limitations
      Microsoft Access does have a few limitations. Access was not designed to be used with
      applications that are web based since its forms and reports are only functional in a Windows
      environment and are not compatible with an internet browser like Internet Explorer. The
      Access Jet database engine that Microsoft Access uses is a file-based system and does not
      contain the server features that are available in SQL Server, Oracle, or other ODBC-
      compliant databases. Access databases are better suited for web-based solutions when the
      number of users is small or when the users are only viewing data rather than editing or
      updating it.
      Remote Access
      In the past, record locking activities in Access caused an increase in response time which
      resulted in slow performance on a VPN (Virtual Private Network) or WAN (Wide Area
      Network) when accessing the Access Jet database. However, this problem has been
      eliminated as a result of quicker broadband connections. It is important to note that
      performance can also be enhanced if a continuous connection is maintained to the back end
      database throughout the session rather than opening and closing the sessions every time a
      table is requested. If you find that Access database performance over a VPN or WAN
      connection continues to suffer, then perhaps Microsoft Terminal Services is an effective
      alternative. We have found that Access databases linked to a SQL Server back-end, or
      Access Data Projects, work quite well over VPN and WAN connections.
      Access on a Mac
      Unfortunately, Microsoft Access is not designed to operate on Apple Macintosh computers.
      Since Access is commonly used in many businesses that use PCs, Microsoft does not have
      any future plans to expand the database to be functional on a Mac.

2.3.2 Disadvantages of Oracle:
      Complex to administer.
      Strictly typed.
      Not as user-friendly as Access.

2.4      Limitations

       Each software package provides very limited functionality.
       No existing package provides both conversion and database access in
        one package.
                                                           3. SYSTEM DEVELOPMENT

        The overall goal of this work has been to provide a literature review of some of the key areas in
migration research and then to discuss the relevance of that research to a specific case study. The
amount of literature dealing with data migration on such topics as schema translation, migration
tools, and reverse engineering is extensive. There is also a great deal of literature specific to
migrating from legacy applications as more and more organizations are either upgrading or
changing their applications to provide simpler GUI or web-based support.

        In addition, there are also numerous case studies that outline approaches that users have
applied in migrating from one system or platform to another. Specific technologies have been
developed with the intent to assist with particular problems in database migration. For the
most part, the literature is quite theoretical and useful primarily to researchers wishing to extend
their own areas of research.

        Finally, the research tends to focus on very large applications – which is understandable,
since these, as the most complex and most expensive projects, stand to benefit most from
applying current research, which can reduce an organization's overall labor investment and
financial burden when undertaking a large migration project.

        This case study is one of many on the subject, but it differs in that its purpose was not to
demonstrate a singular problem in database migration or to demonstrate a new technology, but
rather to show how the available literature might be considered in handling a successful
migration. Thus, this literature review can serve others to find the best approach for their own
specific projects, in determining which characteristics are important in a migration project and
how the literature can be used to identify those characteristics. For example, users might consider
the overall size and complexity of the project to determine how best to approach the planning
process.

        The general introductory literature will provide users with the principal issues such as
data cleansing and accuracy, legacy migration, transformation methods, schema development,
code analysis, automation, and security. It will help them to determine where their own
theoretical and practical shortcomings may lie, and how to fill in those gaps. It may also point
them toward case studies in their own domain, whether in business and industry or any other
area. Although some of the issues remain the same between
domains, case studies can provide a useful and more tangible approach into a topic by letting the
user approach the topic from a familiar specialty.

        The proposed “INTEGRATED DATABASE ENVIRONMENT SYSTEM” will provide
access to different types of databases on the same server. Only a single licensed copy of the
DBMS software is needed, and it has to be installed on the server. The application will act as an
interface between the client and the software. It will allow the client to access the database stored
on the server without the DBMS software installed on the client side.
       The client may also work on different types of databases provided that the database
software is present on the server. The client will have to select the type of database
to be used.
        There will be three levels of security. First, the user will have to use a password to
log in to the application (the application is installed on the client side). The user will then
have to supply a username and password to log on to the server where the database is stored.
Finally, the user will have to supply the username and password assigned to each database
stored on the server (the username and password are different for each database). User access
rights will be provided, so that each user has access only to the databases he/she has the right to
use. This will further enhance the security.
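The three-level login flow described above can be sketched as three successive credential checks. All class and field names here are hypothetical, and a real implementation would store hashed passwords rather than compare plain text:

```java
import java.util.Map;

// Hypothetical sketch of the three security levels described above.
public class ThreeLevelAuth {
    private final Map<String, String> appUsers;      // level 1: application login
    private final Map<String, String> serverUsers;   // level 2: server login
    // level 3: per-database credentials, keyed by database name
    private final Map<String, Map<String, String>> dbUsers;

    public ThreeLevelAuth(Map<String, String> appUsers,
                          Map<String, String> serverUsers,
                          Map<String, Map<String, String>> dbUsers) {
        this.appUsers = appUsers;
        this.serverUsers = serverUsers;
        this.dbUsers = dbUsers;
    }

    public boolean loginApplication(String user, String pass) {
        return pass != null && pass.equals(appUsers.get(user));
    }

    public boolean loginServer(String user, String pass) {
        return pass != null && pass.equals(serverUsers.get(user));
    }

    // Each database stored on the server has its own username/password.
    public boolean loginDatabase(String db, String user, String pass) {
        Map<String, String> creds = dbUsers.get(db);
        return creds != null && pass != null && pass.equals(creds.get(user));
    }
}
```

A user must pass all three checks, in order, before reaching a database; failing any level denies access to everything behind it.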
        The software will provide an application which will allow the user to convert one type of
database into other (e.g. Oracle to Access etc.). Thus the user can work with any type of database
with which she/he is comfortable and later convert it into another type. A backup of a small part
of the database, or of the whole database, can be exported to an Excel document in the Windows
version of the application.
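One simple way to implement such a conversion is to fetch rows from the source database and generate equivalent INSERT statements for the target. The sketch below uses hypothetical names and omits quote escaping; it only illustrates the idea:

```java
import java.util.List;

// Hypothetical sketch: generate a target-database INSERT statement
// from one row fetched out of the source database.
public class RowConverter {
    public static String toInsert(String table, List<String> columns, List<Object> values) {
        StringBuilder cols = new StringBuilder();
        StringBuilder vals = new StringBuilder();
        for (int i = 0; i < columns.size(); i++) {
            if (i > 0) { cols.append(", "); vals.append(", "); }
            cols.append(columns.get(i));
            Object v = values.get(i);
            // Quote strings; leave numbers as-is (real code would also escape quotes
            // and handle NULLs, dates, and vendor-specific types).
            vals.append(v instanceof String ? "'" + v + "'" : String.valueOf(v));
        }
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + vals + ")";
    }
}
```

Running this over every row of a source table yields a script that can be executed against the target database.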
        A special client-side application will be provided which can work on Android-based
mobile phones so that the user can access the database whenever and wherever needed.
This application will provide the basic tasks of editing and modifying the existing database on
the server. A feature to control the server operations through the same application can also be
provided. This feature will only be accessible to the administrator or to users having the same
privileges.
       The system will be developed using Java as the frontend. The backend databases will
include Oracle, SQL, and Access. The use of Java will make the application platform
independent.
   3.3.1   J2SE:
                Java Platform, Standard Edition, or Java SE, is a widely
           used platform for programming in the Java language. It is the Java platform used to
           deploy portable applications for general use. In practical terms, Java SE consists of
           a virtual machine, which must be used to run Java programs, together with a set
           of libraries (or "packages") needed to allow the use of file
           systems, networks, graphical interfaces, and so on, from within those programs.
                Java SE was known as Java 2 Platform, Standard Edition or J2SE from version
           1.2 until version 1.5. The "SE" is used to distinguish the base platform from Java
           EE and Java ME. The "2" was originally intended to emphasize the major changes
           introduced in version 1.2, but was removed in version 1.6. The naming convention
           has been changed several times over the Java version history. Starting with J2SE 1.4
           (Merlin), Java SE has been developed under the Java Community Process. JSR 59
           was the umbrella specification for J2SE 1.4 and JSR 176 specified J2SE 5.0 (Tiger).
           Java SE 6 (Mustang) was released under JSR 270.
                Java Platform, Enterprise Edition is a related specification which includes all of
           the classes in Java SE, plus a number which are more useful to programs which run
        on servers as opposed to workstations. Java Platform, Micro Edition is a related
        specification intended to provide a certified collection of Java APIs for the
        development of software for small, resource-constrained devices such as cell
        phones, PDAs and set-top boxes.
            The Java Runtime Environment (JRE) and Java Development Kit (JDK) are the
        actual files that are downloaded and installed on a computer in order to run or
         develop Java programs, respectively.
             One of the primary uses of J2SE is the development of Java applications for
         individual computers. Web-based activity sometimes lives and dies by the successful
         integration of applets into e-commerce and other Web-specific
         functions. J2SE applets and other applications make these functions run smoothly;
         without them, many transactions and other Internet interactions would not take place.
         In this way, J2SE is a tremendous enabler of Web activity.
             Another important functionality made possible by J2SE is JavaBeans. These are
         reusable components that can be developed and assembled easily in order to create
         more sophisticated applications. Basically, they are the building blocks of
         personalized J2SE applications. Java may be the base technology, but JavaBeans are
         what makes J2SE functionality and individuality really click.

3.3.2   JDBC – ODBC:
             Java's JDBC API builds on that foundation and provides you with a shared
        language through which your applications can talk to database engines. Following in
        the tradition of its other multi-platform APIs, such as the AWT, JDBC provides you
        with a set of interfaces that create a common point at which database applications and
        database engines can meet.
             JDBC is a SQL-level API that allows you to embed SQL statements as
        arguments to methods in JDBC interfaces. To enable you to do this in a database-
        independent fashion, JDBC requires database vendors (such as those mentioned
        earlier in this chapter) to furnish a runtime implementation of its interfaces. These
        implementations route your SQL calls to the database in the proprietary fashion it
        recognizes. As the programmer, though, you do not ever have to worry about how it
         is routing SQL statements. The façade provided by JDBC gives you complete
         freedom from issues specific to any particular database; you can run the same
         code no matter what database is present. JDBC allows you to construct SQL
        statements and embed them inside Java API calls. In short, you are basically using
        SQL. But JDBC lets you smoothly translate between the world of the database and
        the world of the Java application. Your results from the database, for instance, are
        returned as Java objects, and access problems get thrown as exceptions.
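As a sketch of the point above, the method below embeds a SQL statement in JDBC calls: the result set comes back as Java objects and any access problem is thrown as a SQLException. The table and column names ("employees", "name", "dept") are illustrative assumptions; the method runs against whatever Connection a vendor's driver provides:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Sketch of database-independent access through JDBC interfaces.
public class EmployeeQuery {
    static final String SQL = "SELECT name FROM employees WHERE dept = ?";

    public static List<String> namesInDept(Connection con, String dept) throws SQLException {
        List<String> names = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setString(1, dept);                       // bind the SQL parameter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString("name"));     // database value arrives as a Java String
                }
            }
        }                                                // statement/result set closed automatically
        return names;
    }
}
```

The same method works unchanged whether the Connection came from an Oracle, SQL Server, or Access driver, which is exactly the independence the text describes.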
             Open Database Connectivity (ODBC) was developed to create a single
         standard for database access in the Windows environment. Like ODBC, JDBC is heavily
         influenced by existing database programming APIs, such as the X/Open SQL Call Level
         Interface (CLI). JDBC attempts to remain as simple as possible while providing
        developers with maximum flexibility. A key criterion employed by Sun is simply
        asking whether database access applications read well. The simple and common tasks
        use simple interfaces, while more uncommon or bizarre tasks are enabled through
specialized interfaces. For example, three interfaces handle a vast majority of
database access. JDBC nevertheless provides several other interfaces for handling
more complex and unusual tasks.
    JDBC accomplishes its goals through a set of Java interfaces, each implemented
differently by individual vendors. The set of classes that implement the JDBC
interfaces for a particular database engine is called a JDBC driver. In building a
database application, you do not have to think about the implementation of these
underlying classes at all; the whole point of JDBC is to hide the specifics of each
database and let you worry about just your application. The types of drivers are:

Type 1
These drivers use a bridging technology to access a database. The JDBC-ODBC
bridge that comes with JDK 1.2 is a good example of this kind of driver. It provides a
gateway to the ODBC API. Implementations of this API in turn do the actual
database access. Bridge solutions generally require software to be installed on client
systems, meaning that they are not good solutions for applications that do not allow
you to install software on the client system.
Type 2
Type 2 drivers are native API drivers. This means that the driver contains Java code
that calls native C or C++ methods, provided by the individual database vendors, that
perform the actual database access. Again, this solution requires software on the client
system.
Type 3
Type 3 drivers provide a client with a generic network API that is then translated into
database-specific access at the server level. In other words, the JDBC driver on the
client uses sockets to call a middleware application on the server that translates the
client requests into an API specific to the desired driver. As it turns out, this kind of
        driver is extremely flexible, since it requires no code installed on the client and a
        single driver can actually provide access to multiple databases.
        Type 4
        Using network protocols built into the database engine, type 4 drivers talk directly to
        the database using Java sockets. This is the most direct pure Java solution. Because
        these network protocols are almost never documented, this type of driver will almost
        always come only from the database vendor.
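In practice, the driver type an application uses shows up mainly in the JDBC URL it passes to DriverManager. The strings below are common published forms, but exact URL syntax varies by vendor and version, so treat them as illustrative only:

```java
// Illustrative JDBC URL forms for the four driver types described above.
public class DriverUrls {
    public static String exampleUrl(int driverType) {
        switch (driverType) {
            case 1: return "jdbc:odbc:payrollDSN";                     // bridge to an ODBC data source
            case 2: return "jdbc:oracle:oci:@payroll";                 // native client library (OCI)
            case 3: return "jdbc:middleware://appserver:9000/payroll"; // generic network protocol (hypothetical)
            case 4: return "jdbc:oracle:thin:@dbhost:1521:payroll";    // vendor's pure-Java protocol
            default: return null;
        }
    }
}
```

The application code that runs queries stays identical; only the URL (and the driver on the classpath) changes when switching driver types.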

3.3.3   Socket programming:
            A network socket is a lot like an electrical socket. Various plugs around the
        network have a standard way of delivering their payload. Anything that understands
        the standard protocol can “plug in” to the socket and communicate. With electrical
        sockets, it doesn’t matter if you plug in a lamp or a toaster; as long as they are
        expecting 60Hz, 115-volt electricity, the devices will work. Think how your electric
        bill is created. There is a meter somewhere between your house and the rest of the
        network. For each kilowatt of power that goes through that meter, you are billed. The
        bill comes to your “address.” So even though the electricity flows freely around the
        power grid, all of the sockets in your house have a particular address.
            The same idea applies to network sockets, except we talk about TCP/IP packets
        and IP addresses rather than electrons and street addresses. Internet Protocol (IP) is a
        low-level routing protocol that breaks data into small packets and sends them to an
        address across a network, which does not guarantee to deliver said packets to the
        destination. Transmission Control Protocol (TCP) is a higher-level protocol that
        manages to robustly string together these packets, sorting and retransmitting them as
        necessary to reliably transmit your data. A third protocol, User Datagram Protocol
        (UDP), sits next to TCP and can be used directly to support fast, connectionless,
        unreliable transport of packets. You often hear the term client/server mentioned in the
        context of networking. It seems complicated when you read about it in corporate
        marketing statements, but it is actually quite simple. A server is anything that has
        some resource that can be shared. There are compute servers, which provide
        computing power; print servers, which manage a collection of printers; disk servers,
        which provide networked disk space; and web servers, which store web pages. A
        client is simply any other entity that wants to gain access to a particular server. The
        interaction between client and server is just like the interaction between a lamp and an
        electrical socket. The power grid of the house is the server, and the lamp is a power
        client. The server is a permanently available resource, while the client is free to
         “unplug” after it has been served. In Berkeley sockets, the notion of a socket allows
        a single computer to serve many different clients at once, as well as serving many
        different types of information. This feat is managed by the introduction of a port,
        which is a numbered socket on a particular machine. A server process is said to
        “listen” to a port until a client connects to it. A server is allowed to accept multiple
        clients connected to the same port number, although each session is unique. To
        manage multiple client connections, a server process must be multithreaded or have
        some other means of multiplexing the simultaneous I/O.
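The listen/accept/one-thread-per-client pattern described above can be sketched with java.net. This minimal echo server is an illustration of the mechanism, not the project's actual protocol:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal multithreaded server: it "listens" on a port and spawns one
// thread per accepted client, echoing each received line back.
public class EchoServer {
    public static void serve(ServerSocket server) {
        while (true) {
            try {
                Socket client = server.accept();          // block until a client connects
                new Thread(() -> handle(client)).start(); // one thread per session
            } catch (Exception e) {
                return; // server socket was closed
            }
        }
    }

    static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line); // echo the payload back to the client
            }
        } catch (Exception ignored) {
        }
    }
}
```

Because each accepted connection gets its own thread, multiple clients can be connected to the same port number at once, each in a unique session, just as the text describes.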
3.5 Feasibility Study
      3.5.1 Technical Feasibility:-
               The subject of this study is integrating the databases on a single server. The
       purpose of the study was to determine the feasibility of developing such a setup, in
       which data is integrated at the server and services are provided to the clients. In
       particular, it addresses the following questions:
       • Is sufficient functionality available from the existing utilities?
       • Are there a sufficient number and type of communication providers in the area to
       be attractive to an end user?
       • How much load can reasonably be supported by the server?
       The following activities were performed:
       • Review of the features of the application (IDE) and its permitted uses.
       • Development of an outline statement of requirements.

      3.5.2 Operational Feasibility:-
               For the purposes of this study, we utilized design requirements that we have
       found typical of many large enterprise data center users:
       • Redundant components with concurrent maintenance.
       • Maintenance of data centered at the server side.
       • High efficiency of the server in handling ample requests from multiple clients.
       In addition, we considered subjective criteria such as visibility from client to server
       for the respective data, etc.

      3.5.3 Economical Feasibility:-
        • Develop a preliminary design, including a plan, sizing of features and functionality,
        outline specifications, etc., sufficient for budgetary pricing.
3.4    System Requirement Specification:

      3.4.1    Purpose

               The purpose of this document is to provide the software requirement specification
        report for the Integrated Database Environment.
3.4.2 Document Conventions

           IDE              Integrated Database Environment.

           DB               Database

3.4.3 Intended Audience and Reading Suggestion

       This is a college-level project implemented under the guidance of
 college professors. The project is useful to everyone using a client–server network.

3.4.4 Project Scope

        The purpose of the IDE is to create a convenient and easy-to-use application that
  helps to convert data from one database to another. In addition, it provides data fetching
  from the server by the client. The system is based on a relational database with centralized
  data management. We will have a database supporting a large number of clients. Above all,
  we hope to provide a comfortable user experience along with the best pricing available.

3.4.5 Overall Description


        An Integrated Database Environment performs the following functions:

       Application (IDE): includes the DATABASE CONVERTER, which converts
  data from one form to another.
       Client: fires queries, converts data from one form to another if needed, and sends
  requests to the server for data.
       Server: fetches the clients' requests, first arranging them according
  to their priority.

3.4.6 User class and characteristics

       Users of the system should be able to retrieve information from the database. The
  system will support two types of user privileges, Client and Server. Clients will have
  access to client functions i.e., IDE and the server will have access to both client and
  database. The client should be able to do the following functions:
       Fire queries.
       Convert data from one format to another if necessary.
       Send request to server.
       Provide feedback to server on receiving requested data.

3.4.7   Operating environment

  Server functions:-

            o   Authenticate clients.
            o   Fetch requests according to priority.
            o   Process request.
            o   Send data to client.
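The "fetch requests according to priority" step can be sketched with a priority queue. The Request fields here are assumptions for illustration:

```java
import java.util.PriorityQueue;

// Sketch of the server's request handling: client requests are queued
// and fetched highest-priority first.
public class RequestScheduler {
    public static class Request {
        public final String client;
        public final int priority; // larger value = more urgent
        public Request(String client, int priority) {
            this.client = client;
            this.priority = priority;
        }
    }

    // Comparator orders the queue so the highest priority comes out first.
    private final PriorityQueue<Request> queue =
            new PriorityQueue<>((a, b) -> Integer.compare(b.priority, a.priority));

    public void submit(Request r) { queue.add(r); }

    // Returns the next request to process, or null if the queue is empty.
    public Request next() { return queue.poll(); }
}
```

The server's main loop would repeatedly call next(), process the request against the appropriate database, and send the resulting data back to the client.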

3.4.8 System features:-

       Description and priority -

       The Integrated Database Environment maintains information on behalf of the server.
  This project has high priority because it is very important to keep time and
  space complexity low.

        Integrated database implies that a single application should be able to operate
  transparently on data that is requested across a variety of different databases
  and connected by a communication network, as shown in the figure below.

  3.4.9. External Interface Requirements
   User interfaces

              Frontend software:    Java.
              Backend software:      SQL, Oracle, etc.

   Hardware interfaces

              RAM.
              Network.

3.4.10 Software Interface

  Software used                        Description
  Operating system                     We have chosen the Windows 7 operating
                                       system for its best support.
  Database                             To save data on the server.
  Java                                 To implement the project we have chosen the
                                       Java language for its more interactive
                                       features.
3.4.11 Communication Interfaces

        This project supports different types of database conversions.
                             4. UML DIAGRAMS

Fig 4.1: - Usecase Diagram
Fig 4.2: - Class Diagram
Fig 4.3: - Package Diagram
Fig 4.4: - Sequence Diagram
Fig 4.5: - Interaction Diagram
Fig 4.6: - Activity Diagram
Fig 4.7: - Component Diagram
Fig 4.8: - Deployment Diagram
                                                                         5. CONCLUSION

        The proposed system will help to save query processing time.
        It will help to convert a database from one form to another.
        There will be no need to install a licensed copy of the software on the clients; it only
needs to be installed on the server.
        Thus the proposed system will efficiently use the query processing engine and increase
performance.
                                              6. PLANNING AND SCHEDULING

The business driver for effective data migrations is to strike the optimal balance between data
accuracy, migration speed, low or no downtime, and minimum costs. We will see how to
formulate such a strategy.

Expectations (Out-of-scope)

To a large extent the enterprise’s storage technology, storage management tools, and storage
network capabilities define the range of possibilities for data migration strategies. Selection of
such is out of scope of this note; however, we will introduce a methodology for data migration
strategy that can work under most technology environments.

Analyze phase

The first stage of data migration is data classification. It is important to know how and where
data is stored, backed up, and archived. Data structures must be well understood, and there
should be visibility into the data’s usage, capacity, and growth patterns. Various interfaces and
reports utilizing the to-be-migrated data must be considered. It is also important to understand
the network connections between data points (especially required bandwidth and controls). Data
classification also describes conditions for data access, retention requirements and control &
security measures such as encryption. Next, migration requirements are determined. Beyond the
“top three” requirements of what must be migrated, how much downtime is acceptable, and how
much budget is available, there are other needs that must be considered. These can include
quality requirements, new or modified service-level agreements, expectations for the new storage
infrastructure, and objectives such as reduced management costs, reduced storage expenditures,
greater insight into expenditure, a simplified vendor model or greater technical flexibility or
stability. If the organization has standards or policies concerning data, these must be factored in.
Finally, analysts must take into account historical data to be migrated (which is often forgotten
until it is late in the game).

Then there are technology considerations, such as:

      How old is the operating system(s) under which data is to be migrated? Some migration
       tools do not support legacy operating systems.
      Which storage tiers and devices are involved?
      Can identical data structures be used for the target data (this will reduce the time and
       complexity of migration)?
      What staging area requirements are present, given current technologies and data
       migration requirements?
      Do you need or want the option to recover quickly from the source disk, or to fall back to
       the original storage device as a fail-over? There are both procedural and technological
       ways of accomplishing this.
      Is a central console needed to manage data migrations across multiple servers?
       Is there a need to control the data migration from a local server – or a remote server? If
        remote, which protocols must be supported?
       Is there a requirement to throttle or control data flows between servers?
       Must data integrity be checked during (not just after) migration?
       Which staff or consultants are available to assist with migration analysis, design, and
        deployment?

Data may be migrated in several ways:

   1.   All at once
   2.   In logical sections (by application, operating system, database, business function, etc.)
   3.   Via a pilot migration or parallel run for “proof of concept” or risk mitigation.
   4.   In phases (first critical data, then less critical or historical data).

Acceptance Test Considerations

The Analyze Phase is complete when data has been classified, data migration requirements are
clear, and technology considerations have been addressed. For organizations with an established
data classification program (typically government entities, regulated firms, and
enterprises complying with quality standards) this can be done in a week or two. If a full data
classification must be done as well, that can add another few weeks.

Key analysis milestones

Milestones in the Analysis Phase typically include the following:

       Legacy data and report analyses and profiles have been completed.
       There is an updated data classification document.
       The data migration strategy document has been written.
       Data exception report requirements are clear.
       Audit and reconciliation report requirements are defined.
       Data migration requirements (business and technology) are documented.

Design phase

The design phase takes the data classification and requirements defined in the analysis
phase and puts definition around how these will be realized:

       The responsibilities, roles and tasks for each individual, department, or external
        consultant involved in migration are determined.
       Data elements are mapped from source to target.
       A plan for freezing physical data structures during the migration is put together.
       Any tools to be used in the migration (data transformation tools, migration options, data
        mapping tools, CASE or other data modeling tools, simple spreadsheets, etc.) are
        identified or acquired.
       Any software to facilitate data extraction or checking is developed.
      Clear acceptance criteria are determined and agreed.

Where economically viable (large volumes of data to migrate, frequent mission-critical
migrations, complex data structures, high data quality requirements) data transformation tools
can offer the following advantages over simple spreadsheets:

      They provide flexible reporting of data elements and rules.
      They can generate migration code or scripts directly from the mapping rules.
      They can detect data integrity violations.
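A minimal sketch of such an integrity check, assuming staged rows are represented as column-to-value maps and "id" is the key column (both assumptions for illustration), might flag missing or duplicate keys:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of a staging-area integrity check: reports rows whose key
// column is missing or duplicated before data is migrated to the target.
public class IntegrityCheck {
    public static List<String> violations(List<Map<String, Object>> rows, String keyColumn) {
        List<String> errors = new ArrayList<>();
        Set<Object> seen = new HashSet<>();
        for (int i = 0; i < rows.size(); i++) {
            Object key = rows.get(i).get(keyColumn);
            if (key == null) {
                errors.add("row " + i + ": missing " + keyColumn);
            } else if (!seen.add(key)) {
                errors.add("row " + i + ": duplicate " + keyColumn + " " + key);
            }
        }
        return errors;
    }
}
```

A real transformation tool would check many more rules (foreign keys, type ranges, referential integrity), but the pattern of scanning staged data and emitting an exception report is the same.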

Acceptance test considerations

The Design Phase is complete when migration staffing has been confirmed, data mapping is
complete, a plan for freezing data structures is in place, tools have been identified, and
acceptance criteria have been agreed. The length of the design phase depends mainly on the need
to acquire additional migration tools and the need to develop extraction, loading, and quality
checking software. If such tools and software are in place and simply need to be customized, the
design phase can take place in a few weeks; otherwise, this process can take a few months.

Key design milestones

Milestones in the Design Phase include the following:

      Roles, responsibilities, and task assignments are clear and accepted.
      Data elements are mapped from source to target.
      A plan for freezing physical data structures during the migration is put together.
      Migration tools are identified.
      Migration software is developed.
      A migration strategy is chosen.
      Acceptance criteria are agreed.
      Funding for staff, consultants, and tools has been secured.

Deploy phase

The first step in the deploy phase is to put together a project plan and structure. As part of this
process there should be close analysis of any dependencies in data migrations; where possible,
such dependencies and complexities should be reduced to better manage deployment risk.
During the deploy phase the following occurs:

      Physical data structures are frozen on source and target.
      Interfaces and processing on source and target are brought down where required.
      Data is staged from the source location.
      Quality reports are run and any data errors or inconsistencies identified.
      Data quality issues are fixed in the staging area.
      A preliminary reconciliation takes place in the staging area, and any reconciling items are
       investigated and resolved.
      Data is migrated to the target location.
      Reconciliation reports are run.
      Acceptance criteria are checked; if reconciliation errors or other criteria are not met, the
       system is rolled back to the original data source. Otherwise, interfaces and processing
       on the source are discontinued and then activated on the target.
      The above may be conducted in phases, or as part of a parallel run or pilot depending
       upon the migration approach chosen.
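The preliminary reconciliation step above can be approximated in a few lines: compare record counts and an order-independent checksum between the source extract and the staged data. This is a sketch only; `checksum` and `reconcile` are invented helpers:

```python
# Sketch of a preliminary reconciliation in the staging area: compare record
# counts and content checksums between source rows and staged rows.
import hashlib

def checksum(rows):
    """Order-independent checksum over a list of dict rows (illustrative helper)."""
    digests = sorted(
        hashlib.sha256(repr(sorted(r.items())).encode()).hexdigest() for r in rows
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def reconcile(source_rows, staged_rows):
    """Return a list of reconciling items to investigate (empty means clean)."""
    issues = []
    if len(source_rows) != len(staged_rows):
        issues.append(f"record count mismatch: {len(source_rows)} vs {len(staged_rows)}")
    elif checksum(source_rows) != checksum(staged_rows):
        issues.append("content checksum mismatch")
    return issues
```

A real reconciliation report would also break mismatches down by table and key so that each reconciling item can be investigated and resolved.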

Acceptance test considerations

Testing and implementation are inseparable in a data migration. Physical errors are typically
syntactical in nature and can be easily identified and resolved through syntax corrections in the
migration scripts. Logical errors are due to problems with the data mapping. Looking into these
requires asking questions like:

      How many records did we expect this script to create?
      Did the correct number of records get created? If not, why?
      Has the data been loaded into the correct fields?
      Is the data load complete – or are certain fields missing?
      Has the data been formatted correctly?
      Are any post-migration clean-up tasks in order?
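The questions above can be automated as post-load checks. A minimal sketch, where the required field names, the expected count, and the date format are assumptions for illustration:

```python
# Sketch of post-migration logical checks: record counts, field completeness,
# and formatting. Field names and the ISO date format are assumed.
import re

REQUIRED = {"customer_id", "full_name", "date_of_birth"}
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def verify_load(target_rows, expected_count):
    """Answer the checklist in code; return a list of problems found."""
    errors = []
    if len(target_rows) != expected_count:
        errors.append(f"expected {expected_count} records, found {len(target_rows)}")
    for i, row in enumerate(target_rows):
        missing = REQUIRED - row.keys()
        if missing:
            errors.append(f"row {i}: missing fields {sorted(missing)}")
        elif not DATE_RE.match(str(row["date_of_birth"])):
            errors.append(f"row {i}: badly formatted date_of_birth")
    return errors
```

Running such checks as part of every migration run turns the manual question list into a repeatable acceptance gate.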

The goal of a successful data migration is to keep the length of the deploy phase(s) to a
minimum. What is acceptable depends upon the organization – some will want this phase to last
no more than a day (or less), others can tolerate a deployment that lasts a few days if availability
is not a major concern.

Key deployment milestones

Milestones in the Deploy Phase include the following:

      Project plan in place and agreed.
      Project team is formed.
      A test plan is written.
      The migration takes place.
      Reconciliation / data checking reports are run.
      Acceptance criteria and performance metrics are evaluated.
      A go / no-go decision takes place.
      Data issues may be resolved post-migration.
                                                               6. REFERENCES

a. Abraham Silberschatz, Henry F. Korth, S. Sudarshan, "Database System Concepts", Fifth Edition.
b. Herbert Schildt, “The Complete Reference – Java2”, Fifth Edition.
c. George Reese, “Database programming with JDBC and JAVA”, Second Edition.
d. http://www.sql-programmers.com/DisadvantagesofAccess.aspx
e. http://en.wikipedia.org/wiki/Data_migration#Database_migration
f. http://www.intelligent-converters.com/software.htm
g. http://en.wikipedia.org/wiki/Oracle_Database
h. http://en.wikipedia.org/wiki/Microsoft_Access
i. http://en.wikipedia.org/wiki/Database_management_system
j. http://www.oracle.com/technetwork/java/javase/download-138376.html
k. http://en.wikipedia.org/wiki/Java_Platform,_Standard_Edition
l. https://sites.google.com/site/advancedjavabooksandppts/Home/ajp-books
