International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)
Volume 2, Issue 6, November – December 2013, ISSN 2278-6856, Impact Factor: 2.524 (ISRA:JIF)

A Comparative Analysis of ETL and Hyper ETL

A. Prema 1, N. Sujatha 2, A. Pethalakshmi 3

1 Ph.D. Research Scholar, Department of Computer Science, Bharathiar University, Coimbatore, Tamil Nadu, India.
2 Assistant Professor, Raja Doraisingam Govt. Arts College, Sivagangai, Tamil Nadu, India.
3 Associate Professor, Department of Computer Science, MVM Government Arts College for Women, Dindigul, Tamil Nadu, India.
Abstract: The term ETL, which stands for Extraction, Transformation, and Loading, is a batch or scheduled data integration process that includes extracting data from operational or external data sources, transforming the data into an appropriate format, and loading the data into a data warehouse repository. Hyper ETL reduces execution time and removes the mismanagement of metadata in the existing ETL process. This paper first analyzes the troubles of existing ETL tools, and then compares the parameters of Hyper ETL with the existing ETL. The Hyper ETL tool broadens the aggregation method, conveys information intelligently and is useful for effective decision making. ETL rules are designed to eliminate the negligence of metadata in ETL processes and improve the effectiveness of the ETL process. Hyper ETL reduces transformation time and maintenance cost, and increases throughput and reliability, compared with the existing tool.

Keywords: ETL, Hyper ETL, Decision making, metadata, Data Warehouse.

1. INTRODUCTION
As we live in a period of scientific advancement, technical innovation and rapid development in modern computing, great interest and utmost care is taken in this paper to help aspiring students and enthusiastic researchers. The ETL tool is used to simplify the process of migrating data and to standardize the method of data migration. Storing all data transformation logic as metadata enables users, managers and architects to understand, review and modify the various interfaces, and reduces the cost and effort associated with building interfaces. Extraction is the process of reading data from a specified source database and extracting a desired subset of data. The transformation phase applies a chain of rules or functions to the extracted data to derive the data to be loaded. Three forms of transformation are utilized: subsets of tables, formatting of data, and primary keys and indexes. Subsets are created to remove personally identifiable information. All tables except the reference table are transferred to the data warehouse using an ETL process. Primary keys are created to ensure uniqueness within tables and to facilitate the fusion of tables. Indexes are created to expedite queries. Loading is the process of writing the data into the target database. The ETL process includes designing a target, transforming data for the target, and scheduling and monitoring processes. The purpose of using ETL tools is to save time and make the whole process more reliable. ETL tools are customized to provide the functionality to meet enterprise requirements; hence, many enterprises choose to build their own data warehouses themselves.

Section 2 of this paper deals with related work done on Extraction, Transformation and Loading into a data warehouse. Section 3 explains the actual process of Hyper Extract, Transform and Load. Section 4 explains the comparative analysis. In Section 5, experimental analysis and results are given, and finally, Section 6 presents the conclusion of this paper.

2. RELATED WORKS
A data mart can hold information which addresses both strategic and tactical information needs and provides information which allows key operating functions to effectively manage performance. It unifies information from various databases into a single database. Over the years, data warehouse technology has been used for analysis and decision making in enterprises [7]. A variety of approaches for the integration of ETL tools into data warehouses have been proposed. Shaker H. Ali El-Sappagh navigates through the efforts done to conceptualize the abbreviations ETL, DW, DM, OLAP (on-line analytical processing), DS, ODS and DSA [8]. A data warehouse gives a set of numeric values that are based on a set of input values in the form of dimensions [5]. Li and Jian overcame the weak points of the traditional ETL tool architecture and proposed a three-layer architecture based on metadata, which made the ETL process more flexible, multipurpose and efficient; finally, they designed and implemented a new ETL tool for the drilling data warehouse [10]. A systematic review method was proposed to identify, extract and analyze the main proposals on modeling conceptual ETL processes for data warehouses. The main proposals were identified and compared based on the features, activities and notation of ETL processes, and the study concluded by reflecting on the approaches studied and providing an updated skeleton for future study [6].
A concrete ETL service framework was proposed, covering a metadata management service, metadata definition services, an ETL transformation rules service, a process definition service, etc. [2]. Two heuristic algorithms with greedy characteristics were proposed to reduce the execution cost of an ETL workflow [9]. Sabir Asadullaev discussed centralized Extract, Transform and Load with parallel data warehouses and data marts, applications of data marts, data warehouses with an integration bus, and recommended data warehouse architectures [3]. Numeric values of a classical data warehouse can be difficult to understand for business users, or may be interpreted incorrectly. Therefore, for a more accurate interpretation of numeric values, business users require an interpretation in meaningful non-numeric terms. However, if the transition between terms is crisp, true values cannot be measured and a smooth transition between classes cannot take place [1]. Finally, a definition method and related algorithms for ETL rules are designed and analyzed. A data mart contains data from a particular business area, and multiple data marts can form a data warehouse [4].

Radhakrishna and Sreekanth proposed a web based framework model for representing the extraction of data from one or more data sources, applying transformation business logic, and loading the data into the data warehouse. This is a good starting point for gathering information in the existing documentation for the system, and also for research on the ETL phase in web based scenario modeling in a distributed environment, providing effective decision results for various organizations [18]. They model the entire ETL process using UML, because its structural and dynamic properties at the conceptual level of an information system are more natural than naive approaches. It is more flexible and can be used to support trading corporations, banks, and the financial and human resource management systems of an organization at various levels. The future direction of this work includes analyzing multimedia information sources and automating mechanisms for the ETL process.

Owen Kaser et al., in "Data Warehousing with Literature" (the litOLAP project), describe applying the business intelligence techniques of data warehousing and OLAP to the domain of text processing. A literary data warehouse holds a conventional corpus, but with its data stored and organized in multidimensional structures, in order to promote efficient end-user queries. This work improves the query engine, the ETL process and the user interfaces. The extract, transform, load stage retains the information from which the data warehouse is built. They believe the overall idea of applying OLAP to literary data is promising; the initial custom engine is slow for production use, and until more optimization is attempted, its promise is unclear [17].

Lior Sapir et al., in "A methodology for the design of a fuzzy data warehouse", note that a data warehouse is a special database used for storing business oriented information for future analysis and decision making. In a business scenario where some of the data or business attributes are fuzzy, it may be useful to construct a warehouse that can support the analysis of fuzzy data; they also outline how Kimball's methodology for the design of a data warehouse, the most commonly used methodology today, can be extended to the construction of a fuzzy data warehouse. A case study demonstrates the viability of the methodology. Kimball's methodology describes the process of translating business data and processes into a dimensional model. It has several advantages; for example, users can make more intuitive queries that are easy to understand in a natural language. Defining fuzzy dimensions allows the user to describe the facts with abstract human concepts which are actually more realistic [16]. Fuzzy dimensions also allow more flexible and interesting filtering of the facts. They demonstrate that fuzzy measures used with fuzzy aggregation operators allow the user to understand his business and the data warehouse measures better.

Daniel Fasel demonstrates a fuzzy data warehouse approach to support the fuzzy analysis of customer performance measurement. The potential of the fuzzy data warehouse approach is illustrated using a concrete example of customer performance measurement at a hearing instrument manufacturer. A few approaches for combining fuzzy concepts with the hierarchies of the data warehouse have been proposed. A method of summarization can be guaranteed using this approach, and the data warehouse concepts retain flexibility. Using a fuzzy approach in data warehouse concepts improves information quality for the company. It provides broader possibilities to create indicators for customer performance measurement, as in the given example of a hearing instrument manufacturer. The proposed approach does not include fuzzy linguistic concepts directly in the hierarchical structure of dimensions or in the fact tables of the data warehouse model, and it also explains how fuzzy concepts can be aggregated over dimensions without having to redefine the fuzzy sets at every degree of granularity [13]. Visualization should make the results of fuzzy queries on the fuzzy data warehouse easy to understand.

D. Ashok Kumar and M.C. Loraine explain that modern electronic health records are designed to capture and render vast quantities of clinical data during the health care process. Utilization of data analysis and data mining methods in medicine and health care is sparse. Medical data is one of the heavily categorical types of data. A dichotomous variable is a type of categorical variable which is binary, with categories zero and one. Binary data are the simplest form of data used for medical databases, in which closed-ended questions can be used. It is very efficient


based on computational efficiency and memory capacity for representing categorical type data. The data mining technique called clustering is applied here to dichotomous medical data due to its high dimensionality and data scarcity. Usually, binary data clustering is done by using 0 and 1 as numerical values. Here, clustering is performed after transforming the binary data into real values by the Wiener transformation. The proposed algorithm can be used on large medical and health binary databases for determining the correlation between the health disorders and the symptoms observed [12].

Kari Richardson and Eric Rossland describe a hands-on workshop that gives users a basic tour through the functionality of SAS ETL Studio to build a small data mart. Participants in the workshop use SAS ETL Studio to define the necessary library definitions as well as source and target table definitions. Participants create a process flow diagram using a simple transformation and load the target table. In the last step, participants create two reports using the target table [15]. Finally, this hands-on workshop provides an overview of SAS ETL Studio and how it can be used to create a data mart.

Christ D'haevloose and Sophie Debaets observe that in the field of human resources there is a growing trend towards moving from activity based functions to a more strategic, business oriented role. A data mart defined on the HR information needs is the best solution to meet these objectives [11]. The main purpose of their paper is to explain how the SAS system can be used on top of SAP R/3 HR to obtain real business benefits in a very short time. It is also based on practical experience at the Belgian gas and electricity provider. The paper first explains the business shortcomings and discusses the business objectives for the data mart. Finally, it explains the project approach and focuses on the specific attention points when building a data mart. It provides an end to end solution, and the data management facilities make it possible to deliver quick results to the end-users.

Jeremy et al. describe building powerful data marts that require minimal administration and are simple to change. This may seem like an impossible goal to anyone who has been involved in the usual complexity, but there are a number of simple practical concepts and methodologies that have been employed and tested over many years of successful data warehouse implementation, and that are repeatable and easy to understand [14]. For the purpose of data warehousing, ETL is used to pull data from business systems into a database that is designed for analysis and reporting. Building data marts and ETL processes involves large volumes of complex business data, and the easiest outcome is complexity and a lack of results, whatever the resources applied. ETL is also used to achieve powerful results in a short amount of time, which is useful to users and fulfills the core requirement of effective visibility into their complex business data.

3. HYPER EXTRACT, TRANSFORM AND LOAD (HYPER ETL):
Hyper ETL [19] rules are designed and analyzed to remove the mismanagement of metadata in ETL processes and to improve ETL efficiency. The Hyper ETL tool broadens the aggregation method, conveys information intelligently and is useful for effective decision making. Hyper ETL reduces transformation time and maintenance cost, and increases throughput and reliability, compared with the existing tool.

Figure: 1. Process of Hyper ETL [20]

The steps for designing the Hyper ETL are given below.

1. Extract the data from the operational data source. Data extraction is one of the three main functionalities of ETL tools. A main consideration to assess is the product's ability to extract from a variety of data sources.
2. Create tables with relevant attributes based on user requirements.
3. Transform the data to fit operational needs. Generate the XML document file for the collected data.
4. Construct the metadata for the XML document file. This research work implements the XML document file for three sources: the Oracle database, the XML data file and JDBC. The protocol is part of the url attribute of the target or source node. Every transformation has a source and a target.
5. Eliminate the inconsistent data.
6. Split the table.
7. Assign the data.
8. Load the data into the end target, i.e., pump the data into the Oracle data warehouse. The loading phase is the last step of the ETL process. The information from the data sources is loaded and stored in the form of tables. There are two types of tables in the database structure: fact tables and dimension tables. Once the fact and dimension tables are loaded, it is time


to improve the performance of the Business Intelligence data by creating aggregates.
9. Audit reports.
10. Publish: the manual step to recap the position of the previous day's …
11. Archive.
12. Clean up (giving out data to advance its …

Input tables (Campaign, Product, Customer):

Table: 1

Table: 2

Table: 3

Output tables [19]:

Table: 4

Table: 5

Table: 6

Result will be displayed based on the location.
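The steps above can be sketched as a minimal pipeline. The sample rows, field names, and XML layout here are hypothetical, assumed only for illustration; a real deployment would extract from the operational source and load into the Oracle warehouse rather than in-memory structures.

```python
import xml.etree.ElementTree as ET

# Step 1: extract rows from the operational source (hard-coded sample rows;
# the field names are hypothetical).
source_rows = [
    {"customer": "C01", "product": "Soap",  "sales": "120"},
    {"customer": "C02", "product": "Soap",  "sales": ""},     # inconsistent row
    {"customer": "C03", "product": "Paste", "sales": "80"},
]

def rows_to_xml(rows):
    """Steps 3-5: generate an XML document for the collected data,
    dropping inconsistent (incomplete) rows along the way."""
    root = ET.Element("table", name="Sales")
    for row in rows:
        if not all(row.values()):          # step 5: eliminate inconsistent data
            continue
        record = ET.SubElement(root, "record")
        for field, value in row.items():
            ET.SubElement(record, field).text = value
    return root

# Step 8: "load" - here we simply materialise the cleaned records; a real
# pipeline would write them into the warehouse's fact and dimension tables.
root = rows_to_xml(source_rows)
loaded = [{child.tag: child.text for child in record} for record in root]
print(len(loaded))  # 2 clean records survive
```

The inconsistent second row is filtered out at the transformation stage, so only two records reach the load step.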

4. PROPOSED WORK:
This paper presents a comparative study of Hyper ETL and the existing ETL. The paper first studies the troubles of the existing ETL tool, and proposes Hyper ETL to manage the metadata structure and improve the


effectiveness of the ETL process. Hyper ETL reduces the transformation time compared with the existing tool. This enhancement widens the aggregation method, conveys information intelligently and is useful for effective decision making. We have taken about 15 essential parameters of ETL.

Depending on the attributes, we have applied the following rating:

Rating   Meaning
6        Very high
5        High
3        Medium
1        Low

Parameters             Existing ETL   Hyper ETL
Scalability            1              6
Manageability          3              3
Disk I/O utilization   6              5
CPU utilization        5              6
Comparability          1              6
Reliability            1              6
Size (GB)              1              6
Throughput             3              6
Traceability           3              3
Modifiability          3              3
Maintenance cost       6              5
Verifiability          5              5
Speed                  1              6
Correctness            5              6
Consistency            5              5

The worth of this Hyper ETL tool was checked through some sample sales records, and we observed that it took 58 minutes for nearly one million records, for which the transformation time was lower than that of the existing ETL tool. A comparative analysis of the existing and Hyper ETL attributes is given in the form of graphs. Figures 2 to 10 show the overall difference between the existing ETL and Hyper ETL.

Figures: 5 to 9. Comparison of existing ETL and Hyper ETL attributes.
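Two quick arithmetic checks on the figures above, as an informal summary only (the ratings are transcribed from the parameter table):

```python
# Implied throughput of the reported run: ~1,000,000 records in 58 minutes.
records, minutes = 1_000_000, 58
per_minute = records / minutes          # roughly 17,241 records per minute
print(f"{per_minute:.0f} records/minute")

# Average rating over the 15 parameters, transcribed from the table above.
existing = [1, 3, 6, 5, 1, 1, 1, 3, 3, 3, 6, 5, 1, 5, 5]
hyper    = [6, 3, 5, 6, 6, 6, 6, 6, 3, 3, 5, 5, 6, 6, 5]
print(round(sum(existing) / len(existing), 2))  # 3.27
print(round(sum(hyper) / len(hyper), 2))        # 5.13
```

The averages are only a crude aggregate (the paper does not weight the parameters), but they summarize the direction of the comparison.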

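Section 5 below reports that the evaluation was implemented through operations research techniques such as the assignment problem. The paper does not give its exact formulation, so the cost matrix here is hypothetical; a minimal brute-force sketch of an assignment problem (one task per resource, minimizing total cost) might look like:

```python
from itertools import permutations

# Hypothetical cost matrix: cost[i][j] = cost of running task i on resource j.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def solve_assignment(cost):
    """Brute-force the assignment problem: try every one-to-one mapping of
    tasks to resources and keep the cheapest (fine for small matrices)."""
    n = len(cost)
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

perm, total = solve_assignment(cost)
print(perm, total)  # task i is assigned to resource perm[i]; total cost 5
```

For matrices beyond a few rows, the Hungarian algorithm (e.g. SciPy's `linear_sum_assignment`) solves the same problem in polynomial time.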
5. EXPERIMENTAL ANALYSIS AND RESULT:
We have presented a Hyper ETL for increasing the performance of ETL, which is different from the traditional ETL tool. The overall comparative study is given in Figure 12. This Hyper ETL was tested for efficiency through some sales records and implemented through operations research techniques such as the assignment problem. The result of the assignment problem is given (Figure 12) in the form of a histogram analysis.

Figure: 10
Figure: 11
Figure: 12
Figure: 13. Transformation time of existing ETL tool and Hyper ETL tool [19]
Figure: 14. Comparison of existing ETL tool and Hyper ETL tool

6. CONCLUSION:
We have presented a comparative study of the existing ETL and the proposed Hyper ETL. We have taken about 15 essential parameters and given the differences between the existing and the proposed Hyper ETL. Based on the study, scalability, CPU utilization, throughput, reliability and execution speed are higher, and maintenance cost is lower, than in the existing ETL. Both ETL tools provide the same level of manageability, traceability, modifiability and consistency.

REFERENCES
[1] D. Fasel and D. Zumstein, "A fuzzy data warehouse approach for web analytics", in M.D. Lytras, E. Damiani, J.M. Carroll, R.D. Tennyson, D. Avison, A. Naeve, A. Dale, P. Lefrere, F. Tan, J. Sipior, and G. Vossen, editors, Visioning and Engineering the Knowledge Society – A Web Science Perspective, volume 5736 of Lecture Notes in Computer Science, pages 276-285, Springer, 2009.
[2] Munoz L., Mazon J., Trujillo J., "Systematic review and comparison of modeling ETL processes in data warehouse", Iberian Conference on Information Systems and Technologies, June 2010.
[3] Sabir Asadullaev, "Data Warehouse Architectures – III", SWG IBM EE/A, 03.11.2009.
[4] Shaker H. Ali El-Sappagh, Abdeltawab M. Ahmed Hendawi and Ali Hamed El Bastawissy, "A proposed model for data warehouse ETL processes".
[5] Simitsis A., Vassiliadis P. and Sellis T., "State-space optimization of ETL workflows", IEEE Transactions on Knowledge and Data Engineering, Vol. 17, Issue 10, Oct 2005.
[6] Inmon, William (2000-07-18), "Data Mart Does Not Equal Data Warehouse".
[7] Hariprasad T, "ETL testing fundamentals", March 29, 2012.
[8] Human Wang, "An ETL Services Framework Based on Metadata", 2nd International Workshop on Intelligent Systems and Applications, May 2010.
[9] W.H. Inmon, "Building the Data Warehouse", Wiley Publishing, Inc., 4th edition, 2005.

[10] Li Jian, Xu Bihua, "ETL tool research and implementation based on drilling data warehouse", Seventh International Conference on Fuzzy Systems and Knowledge Discovery, Aug 2010.
[11] Christ D'haevloose and Sophie Debaets, "Building a data mart on top of SAP".
[12] D. Ashok Kumar and M.C. Loraine Charlet Annie, "Decision making on Dichotomous Medical Data using Novel Clustering approach", National Conference on Signal and Image Processing.
[13] Daniel Fasel, "A fuzzy data warehouse approach for the customer performance measurement for a hearing instrument manufacturing company", Sixth International Conference on Fuzzy Systems and Knowledge Discovery, 2009.
[14] Jeremy, Jean King and Andreas Schindler, "Understanding of Business Intelligence: ETL and Data Mart Practises".
[15] Kari Richardson and Eric Rossland, "A Quick Tour of SAS ETL to Build a Data Mart", 131-130.
[16] Lior Sapir and Armin Shmilovici, "A methodology for the design of a fuzzy data warehouse", 2008.
[17] Owen Kaser, Steven Keith and Daniel Lemire, "Data warehousing with literature", September 11, 2006.
[18] Radha Krishna and SreeKanth, "An Object Oriented Modeling and Implementation of Web Based ETL Process", IJCSNS International Journal of Computer Science and Network Security, vol. 10, no. 2, February 2010.
[19] A. Prema and A. Pethalakshmi, "Novel Approach in ETL", IEEE Xplore digital library, Feb 2013.
[20] A. Prema and A. Pethalakshmi, "An Improved Enhancement of Decision-making Analysis to increase the sales promotion using Hyper ETL in Data Mart", IJCII, vol. 2, Mar 2013.
