									     IJCSIS Vol. 8 No. 1, April 2010
           ISSN 1947-5500

International Journal of
    Computer Science
      & Information Security

                   Message from Managing Editor
International Journal of Computer Science and Information Security (IJCSIS)
provides a major venue for rapid publication of high-quality computer science research,
including multimedia, information science, security, mobile and wireless networks, data
mining, software engineering, and emerging technologies. IJCSIS has continued to
make progress and has attracted the attention of researchers worldwide, as indicated by
the increasing number of both submissions and published papers, as well as by the
web statistics. It is included in major indexing and abstracting services.

We thank all the authors who contributed papers to the April 2010 issue, as well as
the reviewers, all of whom responded to a short and challenging timetable. We are
committed to placing this journal at the forefront of the dissemination of novel and
exciting research. We would like to remind all prospective authors that IJCSIS does
not have a page restriction. We look forward to receiving your submissions and your
feedback.

IJCSIS April 2010 Issue (Vol. 8, No. 1) has an acceptance rate of 35%.

Special thanks to our technical sponsors for their valuable service.

Available at
IJCSIS Vol. 8, No. 1, April 2010 Edition
ISSN 1947-5500 © IJCSIS 2010, USA.
Dr. Gregorio Martinez Perez
Associate Professor - Professor Titular de Universidad, University of Murcia
(UMU), Spain

Dr. M. Emre Celebi,
Assistant Professor, Department of Computer Science, Louisiana State University
in Shreveport, USA

Dr. Yong Li
School of Electronic and Information Engineering, Beijing Jiaotong University,
P. R. China

Prof. Hamid Reza Naji
Department of Computer Engineering, Shahid Beheshti University, Tehran, Iran

Dr. Sanjay Jasola
Professor and Dean, School of Information and Communication Technology,
Gautam Buddha University

Dr Riktesh Srivastava
Assistant Professor, Information Systems, Skyline University College, University
City of Sharjah, Sharjah, PO 1797, UAE

Dr. Siddhivinayak Kulkarni
University of Ballarat, Ballarat, Victoria, Australia

Professor (Dr) Mokhtar Beldjehem
Sainte-Anne University, Halifax, NS, Canada

Dr. Alex Pappachen James, (Research Fellow)
Queensland Micro-nanotechnology center, Griffith University, Australia
                                 TABLE OF CONTENTS

1. Paper 29031048: Buffer Management Algorithm Design and Implementation Based on Network
Processors (pp. 1-8)

Yechang Fang, Kang Yen, Dept. of Electrical and Computer Engineering, Florida International University,
Miami, USA
Deng Pan, Zhuo Sun, School of Computing and Information Sciences, Florida International University,
Miami, USA

2. Paper 08031001: Multistage Hybrid Arabic/Indian Numeral OCR System (pp. 9-18)

Yasser M. Alginaih, Ph.D., P.Eng. IEEE Member, Dept. of Computer Science, Taibah University, Madinah,
Kingdom of Saudi Arabia
Abdul Ahad Siddiqi, Ph.D., Member IEEE & PEC, Dept. of Computer Science, Taibah University,
Madinah, Kingdom of Saudi Arabia

3. Paper 30031056: Attribute Weighting with Adaptive NBTree for Reducing False Positives in
Intrusion Detection (pp. 19-26)

Dewan Md. Farid, and Jerome Darmont, ERIC Laboratory, University Lumière Lyon 2, Bat L - 5 av.
Pierre Mendes, France, 69676 BRON Cedex, France
Mohammad Zahidur Rahman, Department of Computer Science and Engineering, Jahangirnagar
University, Dhaka – 1342, Bangladesh

4. Paper 30031053: Improving Overhead Computation and pre-processing Time for Grid Scheduling
System (pp. 27-34)

Asgarali Bouyer, Mohammad javad hoseyni, Department of Computer Science, Islamic Azad University-
Miyandoab branch, Miyandoab, Iran
Abdul Hanan Abdullah, Faculty Of Computer Science And Information Systems, Universiti Teknologi
Malaysia, Johor, Malaysia

5. Paper 20031026: The New Embedded System Design Methodology For Improving Design Process
Performance (pp. 35-43)

Maman Abdurohman, Informatics Faculty, Telecom Institute of Technology, Bandung, Indonesia
Kuspriyanto, STEI Faculty, Bandung Institute of Technology, Bandung, Indonesia
Sarwono Sutikno, STEI Faculty, Bandung Institute of Technology, Bandung, Indonesia
Arif Sasongko, STEI Faculty, Bandung Institute of Technology, Bandung, Indonesia

6. Paper 30031060: Semi-Trusted Mixer Based Privacy Preserving Distributed Data Mining for
Resource Constrained Devices (pp. 44-51)

Md. Golam Kaosar, School of Engineering and Science, Victoria University, Melbourne, Australia
Xun Yi, Associate Professor, School of Engineering and Science, Victoria University, Melbourne, Australia

7. Paper 12031005: Adaptive Slot Allocation And Bandwidth Sharing For Prioritized Handoff Calls
In Mobile Networks (pp. 52-57)

S. Malathy, Research Scholar, Anna University, Coimbatore
G. Sudha Sadhasivam, Professor, CSE Department, PSG College of Technology, Coimbatore.
K. Murugan, Lecturer, IT Department, Hindusthan Institute of Technology, Coimbatore
S. Lokesh, Lecturer, CSE Department, Hindusthan Institute of Technology, Coimbatore

8. Paper 12031009: An Efficient Vein Pattern-based Recognition System (pp. 58-63)

Mohit Soni, DFS, New Delhi- 110003, INDIA.
Sandesh Gupta, UIET, CSJMU, Kanpur-208014, INDIA.
M.S. Rao, DFS, New Delhi-110003, INDIA
Phalguni Gupta, Professor, IIT Kanpur, Kanpur-208016, INDIA.

9. Paper 15031013: Extending Logical Networking Concepts in Overlay Network-on-Chip
Architectures (pp. 64-67)

Omar Tayan
College of Computer Science and Engineering, Department of Computer Science, Taibah University, Saudi
Arabia, P.O. Box 30002

10. Paper 15031015: Effective Bandwidth Utilization in IEEE802.11 for VOIP (pp. 68-75)

S. Vijay Bhanu, Research Scholar, Anna University, Coimbatore, Tamilnadu, India, Pincode-641013.
Dr.RM.Chandrasekaran, Registrar, Anna University, Trichy, Tamilnadu, India, Pincode: 620024.
Dr. V. Balakrishnan, Research Co-Supervisor, Anna University, Coimbatore.

11. Paper 16021024: ECG Feature Extraction Techniques - A Survey Approach (pp. 76-80)

S. Karpagachelvi, Mother Teresa Women's University, Kodaikanal, Tamilnadu, India.
Dr. M.Arthanari, Tejaa Shakthi Institute of Technology for Women, Coimbatore- 641 659, Tamilnadu,
M. Sivakumar, Anna University – Coimbatore, Tamilnadu, India

12. Paper 18031017: Implementation of the Six Channel Redundancy to achieve fault tolerance in
testing of satellites (pp. 81-85)

H S Aravinda *, Dr H D Maheshappa**, Dr Ranjan Moodithaya ***
* Department of Electronics and Communication, REVA ITM, Bangalore-64, Karnataka, India.
** Director & Principal, East Point College of Engg, Bidarahalli, Bangalore-40, Karnataka, India.
*** Head, KTMD Division, National Aerospace Laboratories, Bangalore-17, Karnataka, India.

13. Paper 18031018: Performance Oriented Query Processing In GEO Based Location Search
Engines (pp. 86-94)

Dr. M. Umamaheswari, Bharath University, Chennai-73, Tamil Nadu,India,
S. Sivasubramanian, Bharath University, Chennai-73,Tamil Nadu,India,

14. Paper 20031027: Tunable Multifunction Filter Using Current Conveyor (pp. 95-98)

Manish Kumar, Electronics and Communication, Engineering Department, Jaypee Institute of Information
Technology, Noida, India
M.C. Srivastava, Electronics and Communication, Engineering Department, Jaypee Institute of
Information Technology, Noida, India
Umesh Kumar, Electrical Engineering Department, Indian Institute of Technology, Delhi, India

15. Paper 17031042: Artificial Neural Network based Diagnostic Model For Causes of Success and
Failures (pp. 95-105)

Bikrampal Kaur, Chandigarh Engineering College, Mohali, India
Dr. Himanshu Aggarwal, Punjabi University, Patiala-147002, India

16. Paper 28031045: Detecting Security threats in the Router using Computational Intelligence (pp.

J. Visumathi, Research Scholar, Sathyabama University, Chennai-600 119
Dr. K. L. Shunmuganathan, Professor & Head, Department of CSE, R.M.K. Engineering College, Chennai-
601 206

17. Paper 31031091: A Novel Algorithm for Informative Meta Similarity Clusters Using Minimum
Spanning Tree (pp. 112-120)

S. John Peter, Department of Computer Science and Research Center, St. Xavier’s College, Palayamkottai,
Tamil Nadu, India
S. P. Victor, Department of Computer Science and Research Center, St. Xavier’s College, Palayamkottai,
Tamil Nadu, India

18. Paper 23031032: Adaptive Tuning Algorithm for Performance tuning of Database Management
System (pp. 121-124)

S. F. Rodd, Department of Information Science and Engineering, KLS’s Gogte Institute of Technology,
Belgaum, INDIA
Dr. U. P. Kulkarni, Department of Computer Science and Engineering, SDM College of Engineering and
Technology, Dharwad, INDIA

19. Paper 26031038: A Survey of Mobile WiMAX IEEE 802.16m Standard (pp. 125-131)

Mr. Jha Rakesh, Deptt. Of E & T.C., SVNIT, Surat, India
Mr. Wankhede Vishal A., Deptt. Of E & T.C., SVNIT, Surat, India
Prof. Dr. Upena Dalal, Deptt. Of E & T.C., SVNIT, Surat, India

20. Paper 27031040: An Analysis for Mining Imbalanced Datasets (pp. 132-137)

T. Deepa, Faculty of Computer Science Department, Sri Ramakrishna College of Arts and Science for
Women, Coimbatore, Tamilnadu, India.
Dr. M. Punithavalli, Director & Head, Sri Ramakrishna College of Arts & Science for Women, Coimbatore,
Tamil Nadu, India

21. Paper 27031039: QoS Routing For Mobile Adhoc Networks And Performance Analysis Using
OLSR Protocol (pp. 138-150)

K.Oudidi, Si2M Laboratory, National School of Computer Science and Systems Analysis, Rabat, Morocco
A. Hajami, Si2M Laboratory, National School of Computer Science and Systems Analysis, Rabat, Morocco
M. Elkoutbi, Si2M Laboratory, National School of Computer Science and Systems Analysis, Rabat,

22. Paper 28031047: Design of Simple and Efficient Revocation List Distribution in Urban Areas for
VANET’s (pp. 151-155)

Ghassan Samara , National Advanced IPv6 Center, Universiti Sains Malaysia, Penang, Malaysia
Sureswaran Ramadas, National Advanced IPv6 Center, Universiti Sains Malaysia, Penang, Malaysia
Wafaa A.H. Al-Salihy, School of Computer Science, Universiti Sains Malaysia, Penang, Malaysia

23. Paper 28031044: Software Process Improvization Framework For Indian Small Scale Software
Organizations Using Fuzzy Logic (pp. 156-162)

A. M. Kalpana, Research Scholar, Anna University Coimbatore, Tamilnadu, India
Dr. A. Ebenezer Jeyakumar, Director/Academics, SREC, Coimbatore, Tamilnadu, India

24. Paper 30031052: Urbanizing the Rural Agriculture - Knowledge Dissemination using Natural
Language Processing (pp. 163-169)

Priyanka Vij, Student, Computer Science Engg., Lingaya's Institute of Mgt. & Tech., Faridabad,
Haryana, India
Harsh Chaudhary, Student, Computer Science Engg., Lingaya's Institute of Mgt. & Tech.,
Faridabad, Haryana, India
Priyatosh Kashyap, Student, Computer Science Engg., Lingaya's Institute of Mgt. & Tech.,
Faridabad, Haryana, India

25. Paper 31031073: A New Joint Lossless Compression And Encryption Scheme Combining A
Binary Arithmetic Coding With A Pseudo Random Bit Generator (pp. 170-175)

A. Masmoudi * , W. Puech **, And M. S. Bouhlel *
* Research Unit: Sciences and Technologies of Image and Telecommunications, Higher Institute of
Biotechnology, Sfax TUNISIA
** Laboratory LIRMM, UMR 5506 CNRS University of Montpellier II, 161, rue Ada, 34392

26. Paper 15031012: A Collaborative Model for Data Privacy and its Legal Enforcement (pp. 176-182)

Manasdeep, MSCLIS, IIIT Allahabad
Damneet Singh Jolly, MSCLIS, IIIT Allahabad
Amit Kumar Singh, MSCLIS, IIIT Allahabad
Kamleshwar Singh, MSCLIS, IIIT Allahabad
Mr Ashish Srivastava, Faculty, MSCLIS, IIIT Allahabad

27. Paper 12031010: A New Exam Management System Based on Semi-Automated Answer Checking
System (pp. 183-189)

Arash Habibi Lashkari, Faculty of ICT, LIMKOKWING University of Creative Technology,
CYBERJAYA, Selangor,
Dr. Edmund Ng Giap Weng, Faculty of Cognitive Sciences and Human Development, University Malaysia
Sarawak (UNIMAS)
Behrang Parhizkar, Faculty of Information, Communication And Technology, LIMKOKWING University
of Creative Technology, CYBERJAYA, Selangor, Malaysia
Siti Fazilah Shamsudin, Faculty of ICT, LIMKOKWING University of Creative Technology, CYBERJAYA,
Selangor, Malaysia
Jawad Tayyub, Software Engineering With Multimedia, LIMKOKWING University of Creative Technology,
CYBERJAYA, Selangor, Malaysia

28. Paper 30031064: Development of Multi-Agent System for Fire Accident Detection Using Gaia
Methodology (pp. 190-194)

Gowri. R, Kailas. A, Jeyaprakash.R, Carani Anirudh
Department of Information Technology, Sri Manakula Vinayagar Engineering College, Puducherry – 605

29. Paper 19031022: Computational Fault Diagnosis Technique for Analog Electronic Circuits using
Markov Parameters (pp. 195-202)

V. Prasannamoorthy and N.Devarajan
Department of Electrical Engineering, Government College of Technology, Coimbatore, India

30. Paper 24031037: Applicability of Data Mining Techniques for Climate Prediction – A Survey
Approach (pp. 203-206)

Dr. S. Santhosh Baboo, Reader, PG and Research department of Computer Science, Dwaraka Doss
Goverdhan Doss Vaishnav College, Chennai
I. Kadar Shereef, Head, Department of Computer Applications, Sree Saraswathi Thyagaraja College,

31. Paper 17021025: Appliance Mobile Positioning System (AMPS) (An Advanced mobile
Application) (pp. 207-215)

Arash Habibi Lashkari, Faculty of ICT, LIMKOKWING University of Creative Technology,
CYBERJAYA, Selangor, Malaysia
Edmund Ng Giap Weng, Faculty of Cognitive Sciences and Human Development, University Malaysia
Sarawak (UNIMAS)
Behrang Parhizkar, Faculty of ICT, LIMKOKWING University of Creative Technology, CYBERJAYA,
Selangor, Malaysia
Hameedur Rahman, Software Engineering with Multimedia, LIMKOKWING University of Creative
Technology, CYBERJAYA, Selangor, Malaysia

32. Paper 24031036: A Survey on Data Mining Techniques for Gene Selection and Cancer
Classification (pp. 216-221)

Dr. S. Santhosh Baboo, Reader, PG and Research department of Computer Science, Dwaraka Doss
Goverdhan Doss Vaishnav College, Chennai
S. Sasikala, Head, Department of Computer Science, Sree Saraswathi Thyagaraja College, Pollachi

33. Paper 23031033: Non-Blind Image Watermarking Scheme using DWT-SVD Domain (pp. 222-228)

M. Devapriya, Asst.Professor, Dept of Computer Science, Government Arts College, Udumalpet.
Dr. K. Ramar, Professor & HOD, Dept of CSE, National Engineering College, Kovilpatti -628 502.

34. Paper 31031074: Speech Segmentation Algorithm Based On Fuzzy Memberships (pp. 229-233)

Luis D. Huerta, Jose Antonio Huesca and Julio C. Contreras
Departamento de Informática, Universidad del Istmo Campus Ixtepéc, Ixtepéc Oaxaca, México

35. Paper 30031058: How not to share a set of secrets (pp. 234-237)

K. R. Sahasranand , Nithin Nagaraj, Department of Electronics and Communication Engineering, Amrita
Vishwa Vidyapeetham, Amritapuri Campus, Kollam-690525, Kerala, India.
Rajan S., Department of Mathematics, Amrita Vishwa Vidyapeetham, Amritapuri Campus, Kollam-690525,
Kerala, India.

36. Paper 30031057: Secure Framework for Mobile Devices to Access Grid Infrastructure (pp. 238-

Kashif Munir, Computer Science and Engineering Technology Unit King Fahd University of Petroleum
and Minerals HBCC Campus, King Faisal Street, Hafr Al Batin 31991
Lawan Ahmad Mohammad, Computer Science and Engineering Technology Unit King Fahd University of
Petroleum and Minerals HBCC Campus, King Faisal Street, Hafr Al Batin 31991

37. Paper 31031076: DSP Specific Optimized Implementation of Viterbi Decoder (pp. 244-249)

Yame Asfia and Dr Muhammad Younis Javed, Department of Computer Engg, College of Electrical and
Mechanical Engg, NUST, Rawalpindi, Pakistan
Dr Muid-ur-Rahman Mufti, Department of Computer Engg, UET Taxila, Taxila, Pakistan

38. Paper 31031089: Approach towards analyzing motion of mobile nodes- A survey and graphical
representation (pp. 250-253)

A. Kumar, Sir Padampat Singhania University, Udaipur , Rajasthan , India
P.Chakrabarti, Sir Padampat Singhania University, Udaipur , Rajasthan , India
P. Saini, Sir Padampat Singhania University, Udaipur , Rajasthan , India

39. Paper 31031092: Recognition of Printed Bangla Document from Textual Image Using Multi-
Layer Perceptron (MLP) Neural Network (pp. 254-259)

Md. Musfique Anwar, Nasrin Sultana Shume, P. K. M. Moniruzzaman and Md. Al-Amin Bhuiyan
Dept. of Computer Science & Engineering, Jahangirnagar University, Bangladesh

40. Paper 31031081: Application Of Fuzzy System In Segmentation Of MRI Brain Tumor (pp. 261-

Mrigank Rajya, Sonal Rewri, Swati Sheoran
CSE, Lingaya’s University, Limat, Faridabad India, New Delhi, India

41. Paper 30031059: E-Speed Governors For Public Transport Vehicles (pp. 270-274)

C. S. Sridhar, Dr. R. ShashiKumar, Dr. S. Madhava Kumar, Manjula Sridhar, Varun. D
ECE dept, SJCIT, Chikkaballapur.

42. Paper 31031087: Inaccuracy Minimization by Partitioning Fuzzy Data Sets - Validation of
Analytical Methodology (pp. 275-280)

Arutchelvan. G, Department of Computer Science and Applications Adhiparasakthi College of Arts and
Science G. B. Nagar, Kalavai , India
Dr. Srivatsa S. K., Dept. of Electronics Engineering, Madras Institute of Technology, Anna University,
Chennai, India
Dr. Jagannathan. R, Vinayaka Mission University, Chennai, India

43. Paper 30031065: Selection of Architecture Styles using Analytic Network Process for the
Optimization of Software Architecture (pp. 281-288)

K. Delhi Babu, S.V. University, Tirupati
Dr. P. Govinda Rajulu, S.V. University, Tirupati
Dr. A. Ramamohana Reddy, S.V. University, Tirupati
Ms. A.N. Aruna Kumari, Sree Vidyanikethan Engg. College, Tirupati

44. Paper 27031041: Clustering Time Series Data Stream – A Literature Survey (pp. 289-294)

V.Kavitha, Computer Science Department, Sri Ramakrishna College of Arts and Science for Women,
Coimbatore, Tamilnadu, India.
M. Punithavalli, Sri Ramakrishna College of Arts & Science for Women, Coimbatore ,Tamil Nadu, India.

45. Paper 31031086: An Adaptive Power Efficient Packet Scheduling Algorithm for Wimax
Networks (pp. 295-300)

R Murali Prasad, Department of Electronics and Communications, MLR Institute of technology,
P. Satish Kumar, Professor, Department of Electronics and Communications, CVR College of Engineering,

46. Paper 30041037: Content Base Image Retrieval Using Phong Shading (pp. 301-306)

Uday Pratap Singh, LNCT, Bhopal (M.P) INDIA
Sanjeev Jain, LNCT, Bhopal (M.P) INDIA
Gulfishan Firdose Ahmed, LNCT, Bhopal (M.P) INDIA

47. Paper 31031090: The Algorithm Analysis of E-Commerce Security Issues for Online Payment
Transaction System in Banking Technology (pp. 307-312)

Raju Barskar, MANIT Bhopal (M.P)
Anjana Jayant Deen, CSE Department, UIT_RGPV, Bhopal (M.P)
Jyoti Bharti, IT Department, MANIT, Bhopal (M.P)
Gulfishan Firdose Ahmed, LNCT, Bhopal (M.P)

48. Paper 28031046: Reduction in iron losses In Indirect Vector-Controlled IM Drive Using FLC (pp.

Mr. C. Srisailam , Electrical Engineering Department, Jabalpur Engineering College, Jabalpur, Madhya
Mr. Mukesh Tiwari, Electrical Engineering Department, Jabalpur Engineering College, Jabalpur, Madhya
Dr. Anurag Trivedi, Electrical Engineering Department, Jabalpur Engineering College, Jabalpur, Madhya

49. Paper 31031071: Bio-Authentication based Secure Transmission System using Steganography (pp.

Najme Zehra, Assistant Professor, Computer Science Department, Indira Gandhi Institute of Technology,
GGSIPU, Delhi.
Mansi Sharma, Scholar, Indira Gandhi Institute of Technology, GGSIPU, Delhi.
Somya Ahuja, Scholar, Indira Gandhi Institute of Technology, GGSIPU, Delhi.
Shubha Bansal, Scholar, Indira Gandhi Institute of Technology, GGSIPU, Delhi.

50. Paper 31031068: Facial Recognition Technology: An analysis with scope in India (pp. 325-330)

Dr.S.B.Thorat, Director, Institute of Technology and Mgmt, Nanded, Dist. - Nanded. (MS), India
S. K. Nayak, Head, Dept. of Computer Science, Bahirji Smarak Mahavidyalaya, Basmathnagar, Dist. -
Hingoli. (MS), India
Miss. Jyoti P Dandale, Lecturer, Institute of Technology and Mgmt, Nanded, Dist. - Nanded. (MS), India

51. Paper 31031069: Classification and Performance of AQM-Based Schemes for Congestion
Avoidance (pp. 331-340)

K. Chitra, Lecturer, Dept. of Computer Science, D.J. Academy for Managerial Excellence, Coimbatore,
Tamil Nadu, India – 641 032
Dr. G. Padamavathi, Professor & Head, Dept. of Computer Science, Avinashilingam University for Women,
Coimbatore, Tamil Nadu, India – 641 043
                                                       (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                 Vol. 8, No. 1, 2010

Buffer Management Algorithm Design and Implementation Based on Network Processors

Yechang Fang, Kang Yen
Dept. of Electrical and Computer Engineering
Florida International University
Miami, USA
{yfang003, yenk}

Deng Pan, Zhuo Sun
School of Computing and Information Sciences
Florida International University
Miami, USA
{pand, zsun003}

Abstract—To solve the parameter sensitivity issue of the traditional RED (random
early detection) algorithm, an adaptive buffer management algorithm called PAFD
(packet adaptive fair dropping) is proposed. This algorithm supports the DiffServ
(differentiated services) model of QoS (quality of service) and considers both
fairness and throughput. A smooth buffer occupancy rate function is adopted to
adjust the parameters. By implementing buffer management and packet scheduling on
the Intel IXP2400, the viability of QoS mechanisms on NPs (network processors) is
verified. The simulation shows that PAFD smoothes the flow curve and achieves a
better balance between fairness and network throughput. It also demonstrates that
the algorithm meets the requirements of fast packet processing and achieves higher
hardware resource utilization on NPs.

Keywords-buffer management; packet dropping; queue management; network processor

I. INTRODUCTION

Network information is transmitted in the form of data flows, which consist of data
packets. Therefore, different QoS means different treatment of data flows, and this
treatment involves assigning different priorities to data packets. A queue is a
storage area inside routers or switches that stores IP packets together with their
priority levels. A queue management algorithm is the method used to determine the
order in which the data packets stored in the queue are sent. The fundamental
requirement is to provide better and timely service to high-priority packets [1].
The NP is a dedicated processing chip designed to run in high-speed networks and to
achieve rapid packet processing.

Queue management plays a significant role in the control of network transmission. It
is the core mechanism for controlling network QoS, and also the key method for
solving the network congestion problem. Queue management consists of buffer
management and packet scheduling. Generally, buffer management is applied at the
front of a queue and cooperates with packet scheduling to complete the queue
operation [2, 3]. When a packet arrives at the front of a queue, buffer management
decides whether to admit the packet into the buffer queue. Viewed another way,
buffer management determines whether or not to drop the packet, so it is also known
as dropping control.

The control schemes of buffer management can be analyzed at two levels: data flow
and data packet. At the data flow level, viewed from the aspect of system resource
management, buffer management needs to adopt resource management schemes that make a
fair and effective allocation of queue buffer resources among the flows passing
through the network node. At the data packet level, viewed from the aspect of packet
dropping control, buffer management needs to adopt drop control schemes that decide
under what circumstances a packet should be dropped, and which packet should be
dropped. Considering the congestion control response in an end-to-end system, the
transient effects of dropping different packets may vary greatly. However,
statistics of long-term operation indicate that the gap between transient effects is
minimal and can be neglected in the majority of cases. In some specific
circumstances, a completely shared resource management scheme can cooperate with
drop schemes such as tail-drop and head-drop to reach effective control. In most
cases, however, the interaction between these two schemes is significant, so the
design of buffer management algorithms should consider both schemes in order to
obtain better control effects [4, 5].
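The tail-drop and head-drop schemes mentioned above can be sketched in a few lines. This is an illustrative model only (the bounded queue of integer "packets" and the fixed capacity are assumptions made for the example), not the paper's implementation:

```python
from collections import deque

def tail_drop(queue, packet, capacity):
    """Tail-drop: reject the arriving packet when the buffer is full."""
    if len(queue) >= capacity:
        return False          # arriving packet is dropped
    queue.append(packet)
    return True

def head_drop(queue, packet, capacity):
    """Head-drop: evict the oldest queued packet to admit the new one."""
    if len(queue) >= capacity:
        queue.popleft()       # oldest packet is dropped
    queue.append(packet)
    return True

q = deque()
for p in range(5):
    tail_drop(q, p, 4)        # packet 4 is rejected -> q = [0, 1, 2, 3]
head_drop(q, 5, 4)            # packet 0 is evicted  -> q = [1, 2, 3, 5]
```

Tail-drop penalizes new arrivals, while head-drop favors fresher traffic at the cost of already-buffered packets; either can be combined with a shared resource management scheme as described above.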
II. EXISTING BUFFER MANAGEMENT ALGORITHMS

Reference [6] proposed the RED algorithm as an active queue management (AQM)
mechanism [7], which was later standardized as a recommendation from the IETF [8].
It introduces congestion control into the router's queue operations. RED uses an
early random drop scheme to smooth packet dropping in time. This algorithm can
effectively reduce and even avoid congestion in the network, and it also solves the
TCP global synchronization problem.

However, one concern with the RED algorithm is its stability: the performance of the
algorithm is very sensitive to the control parameters and to changes in the network
traffic load. Under heavy traffic, the performance of RED drops drastically. Since
the RED algorithm is based on the best-effort service model, which does not
distinguish between service levels or user flows, it cannot provide fairness. To
improve fairness and stability, several refined algorithms have been developed,
including WRED, SRED, Adaptive-RED, FRED, and RED with In/Out (RIO) [9, 10]. But
these algorithms still have many problems. For example, a large number of studies
have shown that it is difficult to find a RIO parameter setting suitable for varied
and changing network conditions.

III. THE PAFD ALGORITHM

In this paper, we propose a new buffer management algorithm called PAFD (Packet
Adaptive Fair Dropping). The algorithm adaptively balances congestion and fairness
according to the cache congestion situation. Under minor congestion, the algorithm
tends to drop packets fairly, so that all users access the system resources in
proportion to their scale. Under moderate congestion, the algorithm inclines toward
dropping packets of low-quality service flows, reducing their sending rate through
the scheduling algorithm to alleviate congestion. Under severe congestion, the
algorithm again tends to drop packets fairly, relying on the upper-layer flow
control mechanism to meet the QoS requirements and reducing the sending rate of most
service flows, in order to speed up the easing of the congestion.

In buffer management or packet scheduling algorithms, reserving resources in advance
for service flows with better transmission conditions improves system performance.
But this operation distributes system resources such as buffer space and bandwidth
unfairly, so that the QoS of service flows with poor transmission conditions cannot
be guaranteed. Packet scheduling algorithms usually use generalized processor
sharing (GPS) as a comparative model of fairness. In the realization of packet
scheduling algorithms based on GPS, each service flow is assigned a static weight
that expresses its QoS. The weight φi expresses the percentage of the service flow i
in the entire bandwidth B. φi does not change with the packet scheduling algorithm,
and satisfies

    Σ_{i=1}^{N} φi = 1                                    (1)

where N is the number of service flows in the link. The service volume is described
by

    gi_inc = B / Σ_{j∈B} φj                               (2)

where i, j denote two different service flows. In GPS-based algorithms, the
bandwidth allocation of different service flows meets the requirement Bi/φi = Bj/φj,
where Bi is the allocated bandwidth of service flow i. By assigning a smaller weight
φlow to an unimportant background service flow, the weight φhigh of a high-priority
service flow will be much larger than φlow, so that the majority of the bandwidth is
accessed by high-priority service flows.

A. Algorithm Description

In buffer management algorithms, how to control the buffer space occupation is key
[11]. Here we define

    Ci/Wi = Cj/Wj                                         (3)

where Ci is the buffer space occupation, and Wi is the synthetic weight of service
flow i. When the cache is full, the service flow with the largest value of Ci/Wi
will be dropped in order to guarantee fairness. Here the fairness is reflected in
packets with different queue lengths [12, 13]. Assume that ui is the weight and vi
is the current queue length of service flow i. The synthetic weight Wi is calculated
as

    Wi = α × ui + (1 − α) × vi                            (4)

where α is the adjustment parameter for the two weighting coefficients ui and vi. α
can be pre-assigned, or determined in accordance with the usage of the cache. ui is
related to the
                                                            (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                      Vol. 8, No. 1, 2010
service flow itself, and different service flows are assigned          cycling times is related to the ratio between the longest and
with different weight values. As long as the service flow is           the shortest packets. At this moment, the time complexity
active, this factor will remain unchanged. vi is time varying,         overhead is still small based on practices.
which reflects dropping situation of the current service flow.
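For illustration, the weight computation of (4) and the Ci/Wi comparison of (3) can be sketched in Python. The data layout and names below are illustrative assumptions, not part of the NP implementation described in this paper:

```python
def synthetic_weight(u, v, alpha):
    """Synthetic weight Wi = alpha*ui + (1 - alpha)*vi, as in (4)."""
    return alpha * u + (1.0 - alpha) * v

def drop_candidate(flows, alpha):
    """Pick the flow with the largest weighted occupation Ci/Wi.

    `flows` maps a flow id to (C, u, v): buffer occupation, static
    weight, and current queue length (an assumed layout). Per (3),
    dropping from this flow pushes the system back toward fairness.
    """
    return max(flows, key=lambda f: flows[f][0] /
               synthetic_weight(flows[f][1], flows[f][2], alpha))
```

For example, given two flows with equal buffer occupation but different weights, the flow with the smaller weight has the larger Ci/Wi and becomes the drop candidate.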
                                                                          In Step 2, α, a function of shared buffer, is a parameter
   Suppose a new packet T arrives; then the PAFD algorithm proceeds as follows:

   •   Step 1: Check whether the remaining cache space can accommodate the packet T. If the remaining space is larger than or equal to the length of T, add T into the cache queue. Otherwise, drop some packets from the cache to free enough storage space; the decision on which packet to drop is given in the following steps.

   •   Step 2: Calculate the weighting coefficients u and v for each service flow, and the parameter α. Then get the values of the new synthetic weights W for each flow according to (4).

   •   Step 3: Choose the service flow with the largest weighted buffer space occupation (Ci/Wi). If the service flow associated with the packet T is this flow, then drop T with probability P and return. Otherwise, with probability 1−P, drop the head packet of the service flow with the largest weighted buffer space occupation and add T into the cache queue. Here the probability P is a random number generated by the system to ensure the smoothness and stability of the process.

   •   Step 4: Check whether the remaining space can accommodate another new packet. If it can, the packet is transmitted into the cache. Otherwise, return to Step 3 to continue choosing and dropping packets until there is sufficient space.

   If all packet lengths are the same, the algorithm needs only one cycle to compare and select the service flow with the largest weighted buffer space occupation. Therefore, the time complexity of the algorithm is O(N). In this case, we also need an additional 4N storage space to store the weights. Taking into account the limited capacity of wireless networks, N is usually less than 100, so in general the algorithm's time and space overheads are not large. On the other hand, if packet lengths are different, then it is necessary to cycle through Step 3 and Step 4 until the cache has enough space to accommodate the new packet. The largest number of cycles is related to the ratio between the longest and the shortest packets; even so, the time complexity overhead is still small in practice.

   In Step 2, α, a function of the shared buffer, is a parameter for adjusting the proportion of the two weighting coefficients u and v. For a large value of α, the PAFD algorithm tends to fairly select and drop packets according to the synthetic weight W. Otherwise, the algorithm tends to select and drop the service flow with a large queue length. A reasonable value of α can be used to balance fairness and performance. Here we introduce an adaptive method that determines the value of α based on the congestion situation of the cache; this process does not require manual intervention.

   When there is minor congestion, the congestion can be relieved by reducing the sending rate of a small number of service flows. The number of service flows in wireless network nodes is not as large as that in wired networks, so minor congestion can be relieved by reducing the sending rate of any one of the service flows. We hope this choice is fair, to ensure that all users access the system resources according to their weights.

   When there is moderate congestion, the congestion cannot be relieved by reducing the sending rate of just any one service flow, and reducing the rates of different service flows will produce different results. We hope to reduce the rate of the service flows that are most effective in relieving the congestion, that is, the service flow whose current queue length is the longest (the time that this service flow has occupied the cache is also the longest). This not only improves system throughput, but also speeds up the congestion relief.

   When there is severe congestion, it is obvious that reducing the sending rate of a small portion of the service flows cannot relieve the congestion; we may need to reduce the rates of many service flows. Since TCP has the characteristic of additive increase multiplicative decrease (AIMD), continuously dropping packets from one service flow to reduce its sending rate would adversely affect the performance of that TCP flow, while its effect on relieving the system congestion becomes smaller. We therefore gradually increase the value of the parameter α, and the algorithm chooses service flows to drop packets from fairly. On one hand, at this point the "fairness" brings the same benefits as in the minor congestion case; on the other hand, it avoids continuously dropping packets from the service flow with the longest queue.
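The drop loop of Steps 1-4 can be sketched as a simplified model. The per-flow packet lists, the tie handling, and the fixed drop probability below are our assumptions for illustration, not the paper's micro-engine code:

```python
import random

def pafd_enqueue(queues, capacity, pkt_flow, pkt_len, weights, p=0.5,
                 rng=random.random):
    """Decide whether an arriving packet T is admitted (Steps 1-4).

    `queues` maps a flow id to a list of packet lengths; `weights`
    maps a flow id to its synthetic weight Wi from (4).
    """
    used = sum(sum(q) for q in queues.values())
    while used + pkt_len > capacity:          # Step 1: not enough room
        if not any(queues.values()):
            return False                      # cache empty, T still does not fit
        # Step 3: flow with the largest weighted occupation Ci/Wi
        worst = max(queues, key=lambda f: sum(queues[f]) / weights[f])
        if worst == pkt_flow and rng() < p:
            return False                      # drop the arriving packet T itself
        used -= queues[worst].pop(0)          # drop the head packet of that flow
        # Step 4: loop until there is room for T
    queues.setdefault(pkt_flow, []).append(pkt_len)
    return True
```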

    Congestion is measured by the system buffer space occupation rate. α is a parameter relevant to the system congestion status, and its value is between 0 and 1. Assume that the current buffer space occupation rate is denoted by Buffercur, and that Buffermedium, Buffermin, and Buffermax represent the threshold values of the buffer space occupation rate for moderate, minor, and severe congestion, respectively.

    When Buffercur is close to Buffermin, the system enters a state of minor congestion. When Buffercur reaches Buffermax, the system is in a state of severe congestion. Buffermedium means moderate congestion. If we set α in a linear way, the system will exhibit dramatic oscillation. Instead, we use a high-order nonlinear (exponential-like) reduction to get a smooth curve of α, as shown in Figure 1.

                  Fig.1. An adaptive curve of α

    The value of α can also be calculated as

        ⎧ 0,                                                        if Buffercur < Buffermin
    α = ⎨ 1 − (Buffercur² − Buffermin²) / (Buffermax² − Buffermin²), if Buffermin ≤ Buffercur ≤ Buffermax    (5)
        ⎩ 1,                                                        if Buffermax < Buffercur

B. DiffServ Model Support
    In the PAFD algorithm, we can adopt the DiffServ model to simplify the service flows by dividing them into high-priority services such as assured services and low-priority services such as best-effort services. We use the queuing method for the shared cache to set and manage the cache. When a new packet arrives at the cache, the service flow is first checked to see whether it matches the service level agreement (SLA). If it does, the new packet enters the corresponding queue. Otherwise, the packet is assigned to the low-priority services, and then enters the low-priority queue.

    In the DiffServ model, we retain the implementation process of PAFD, and only modify (4) into

               Wi = (α × ui + (1 − α) × vi) × β                (6)

where β is a new parameter used to adjust the fairness among service flows of different service levels. As mentioned above, we can set the value of the parameter α differently from that shown in Figure 1 to satisfy different requirements. α is the parameter which balances fairness and transmission conditions. For high-priority services, the curve in Figure 1 is reasonable: the fairness is able to guarantee the QoS for the different service flows, and congestion is also required to be relieved quickly. For low-priority services, which have no delay constraints or high fairness requirements, a higher throughput is more practical. Therefore, the value of the parameter α for low-priority services can be set slightly less than that for high-priority services, as shown in Figure 2.

          Fig.2. Values of α for different priority services

    Now we check the effects of the parameter β. For high-priority services, β is a constant with value 1. For low-priority services, the value of β is less than 1, and is influenced by the network load. When the network load is low, β equals 1; in this case, service flows of different levels have the same priority to share the network resources. As the network load increases, in order to guarantee the QoS of high-priority services, low-priority services gradually give up some transmission opportunities, so the value of β decreases. The higher the network load is, the smaller the values of β and W are. Therefore, the probability of a low-priority packet being dropped is higher. Values of β are shown below.

          Fig.3. Values of β for different priority services
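The piecewise rule (5) can be written down directly. The threshold names are those used above, and the jumps at the thresholds follow the form of (5) as reconstructed here:

```python
def adaptive_alpha(buf_cur, buf_min, buf_max):
    """Adaptive alpha per (5): 0 below the minor-congestion threshold,
    a smooth quadratic decay between the thresholds, and 1 beyond the
    severe-congestion threshold."""
    if buf_cur < buf_min:
        return 0.0
    if buf_cur > buf_max:
        return 1.0
    return 1.0 - (buf_cur ** 2 - buf_min ** 2) / (buf_max ** 2 - buf_min ** 2)
```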

                 IV.   SIMULATION RESULTS

A. Simulation for Common Services
   We compare the PAFD algorithm with two commonly used buffer management algorithms, RED and tail drop (TD). We choose two common packet scheduling algorithms, Best Channel First (BCF) and Longest Queue First (LQF), to work with PAFD, RED and TD. Here LQF uses the weighted queue length for packet scheduling. So there are 6 queue management algorithm combinations, which are PAFD-BCF, PAFD-LQF, RED-BCF, RED-LQF, TD-BCF and TD-LQF. The performance comparisons of these algorithms are carried out with respect to throughput effectiveness, average queuing delay, and fairness.

   We use a K1297-G20 signaling analyzer to simulate packet sending; the operating system of the K1297-G20 is Windows NT 4.0. An ADLINK 6240 is used as the NP blade. Based on the simulation configuration, there are 8 different packet length configurations for the data source: fixed length of 64 bytes, fixed length of 65 bytes, fixed length of 128 bytes, fixed length of 129 bytes, fixed length of 256 bytes, random length of 64-128 bytes, random length of 64-256 bytes, and random length of 64-1500 bytes.

   Figure 4 shows that all the algorithms have similar throughputs under low network load. When the load increases, the throughput effectiveness of BCF is higher than that of the other scheduling algorithms. The figure also shows that PAFD-BCF provides significantly higher throughput than the other algorithms. PAFD does not randomly drop or simply tail-drop packets, but fully considers fairness and transmission conditions. In this way, service flows under poor transmission conditions receive a high probability of packet dropping, and thus a relatively short virtual queue. When BCF works with PAFD, the service flows under better channel transmission conditions are given higher priority, which results in effective throughput.

               Fig.4. Throughputs of RED and PAFD

        Fig.5. Average Queuing Delay for TD, RED and PAFD

   From Figure 5, we find that RED has better performance on the average queuing delay due to its capability of early congestion detection and its drop mechanism. BCF has better queuing delay performance than LQF. As the load increases, the average queuing delay of PAFD first increases, then decreases. This is because PAFD does not use tail drop; instead, it searches for the service flow with the largest weighted buffer space occupation and drops its head packet, which reduces the average queuing time.

   Both TD and RED use a shared cache instead of per-flow queuing, so they fail to consider fairness. Here the fairness index F is given by

        F = (Σ i=1..N Gi/Wi)² / (N × Σ i=1..N (Gi/Wi)²)        (7)

where Gi is the effective throughput of service flow i, and N is the total number of service flows. It is not difficult to prove that F ∈ (0, 1]. When F has a larger value, the fairness of the system is better; if the value of F equals 1, the system resource allocation is completely fair. We can use (7) to calculate the fairness index and compare the fairness of different algorithms. In an ON-OFF model with 16 service flows, assume that the average ON rate of flows 1-8 is twice that of flows 9-16; that is, Wi : Wj = 2 : 1, where i∈[1, 8] and j∈[9, 16]. Using round robin algorithms without considering W, we can calculate the reference value of the fairness index, F = 0.9. Table I gives the fairness index of TD, RED and PAFD combined with the packet scheduling algorithms.
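The index (7) and the quoted reference value F = 0.9 can be checked with a short sketch:

```python
def fairness_index(G, W):
    """Fairness index F of (7) over the weight-normalized
    throughputs Gi/Wi (a Jain-style index)."""
    x = [g / w for g, w in zip(G, W)]
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

# Round robin ignoring W: 16 flows with equal throughput,
# weights 2:1 as in the text, gives the reference value 0.9.
```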

                    TABLE I.    FAIRNESS INDEX

            Algorithms                       Fairness
              TD-BCF                          0.8216
              TD-LQF                          0.9162
             RED-BCF                          0.8855
             RED-LQF                          0.9982
            PAFD-LQF                          0.9988
            PAFD-BCF                          0.8902

   The table indicates that the fairness index of BCF is lower when combined with TD and RED. Since PAFD takes fairness into consideration, the fairness index of PAFD is higher than that of TD when there is congestion. The combination of PAFD and LQF has a higher throughput and a fairer distribution of cache and bandwidth resources. By changing the value of the parameter α, we can conveniently balance the system performance and fairness based on the actual requirements.

B. Simulation for DiffServ Model
   In this section we adopt the same environment as described in the previous section to test the PAFD performance based on the DiffServ model. The only difference is that half of the services are set to high-priority, and the other half to low-priority.

   Figures 6 and 7 show the throughput and average queuing delay of these algorithms. The only difference between these two tests is that the value of the parameter α for half of the service flows used in the second simulation is slightly lower than in the first simulation. So the curves in Figures 6 and 7 are very similar to those shown in Figures 4 and 5.

             Fig.6. Throughputs of RED and DS-PAFD

        Fig.7. Average Queuing Delay of RED and DS-PAFD

   Table II gives the comparison of the fairness index of these algorithms. Comparing these numbers with those shown in Table I, we can draw a similar conclusion. The difference is that the fairness index of the low-priority services is slightly lower than that of the high-priority services, as a result of the different values of the parameter α selected.

            TABLE II.    COMPARISON OF FAIRNESS INDEX

      Flow                          TD-BCF: 0.8346       TD-LQF: 0.9266
      High-priority Service Flow    DSPAFD-BCF: 0.8800   DSPAFD-LQF: 0.9922
      Low-priority Service Flow     DSPAFD-BCF: 0.8332   DSPAFD-LQF: 0.9488

   As shown in Figures 2-3, 6 and 7, when the network load is light, the throughputs are similar for the different priority services. This means different priority services have the same priority to share network resources. As the network load increases, the throughput gradually decreases. However, even in the case of heavy load, the PAFD algorithm still allocates a small portion of the resources to low-priority services to meet the fairness requirement, and this prevents high-priority services from fully occupying the network resources.
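The DiffServ weighting of (6) with a load-dependent β can be sketched as follows. The exact shape of the β curve in Figure 3 is not reproduced here; the linear decay beyond half load is our assumption for illustration:

```python
def ds_weight(u, v, alpha, beta):
    """DiffServ synthetic weight of (6): the single-class weight (4)
    scaled by the service-level factor beta."""
    return (alpha * u + (1.0 - alpha) * v) * beta

def beta_for(priority, load):
    """Illustrative beta profile: 1 for high-priority flows; for
    low-priority flows it decays from 1 toward 0 as load grows."""
    if priority == "high" or load <= 0.5:
        return 1.0
    return max(0.0, 1.0 - (load - 0.5) / 0.5)
```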
      V.    IMPLEMENTATION BASED ON NETWORK PROCESSORS
   Here we adopt the Intel IXP2400 NP to implement the PAFD algorithm. The Intel IXP2400 provides eight micro-engines, and each micro-engine can support up to eight hardware threads. When the system is running, each micro-engine deals with one task. During thread switching there is no need for protection, since each hardware thread has its own registers, so the switching speed is very fast. The Intel IXP2400 is also appropriate for the DiffServ model.

   The PAFD algorithm executes enqueuing and dequeuing operations in the transmission, which are implemented using chained lists in the SRAM of the IXP2400. The buffer manager of PAFD receives enqueuing requests from the functional pipeline, and accepts dequeuing requests through the micro-engines of the NP. In the PAFD algorithm, the Q-Array in the SRAM controller is used for the chained list, and a queue descriptor is stored in the SRAM. The buffer manager uses content addressable memory (CAM) to maintain the queue buffer of the descriptors. When an enqueuing request arrives, the buffer manager checks the CAM to see whether the queue descriptor is in the local buffer. If so, PAFD is run to decide whether the new packets should enter the queue. If not, a descriptor is excluded from the Q-Array and stored in the SRAM, the specified queue descriptor is read into the Q-Array instead, and then PAFD is run to decide whether to drop the new packets. When a packet enters a queue, the Q-Array logic moves the first four bits to the SRAM controller. The Q-Array can buffer 64 queue descriptors in each SRAM channel. The PAFD algorithm reserves only 16 entries for the buffer manager, and the rest are for the free idle chained list and SRAM rings. The current count of packets is stored in the local memory. This operation needs 16 bits, and each bit represents the number of packets through the 16 entries. The packet counter is initialized when entries are read into the Q-Array, and then it executes a plus-one or minus-one operation based on the response. The implemented system we designed supports 64 virtual ports, and each port supports 16 queues; thus, there are 1024 queues in total. As we adopt the SRAM structure, it is very easy to enqueue.

   The dequeuing operation is similar to the enqueuing operation. In order to maintain the performance of the system, the micro-engine threads of the NP must operate in strict accordance with the predetermined sequence. This is controlled by internal thread semaphores. When a queue changes from empty to non-empty in an enqueuing operation, or from non-empty to empty in a dequeuing operation, the buffer manager of PAFD sends a message to the packet scheduling module through the adjacent ring.

                     VI.    CONCLUSIONS
   The buffer management algorithm is the core mechanism to achieve network QoS control. It also plays an important role in network resource management. In this paper, a novel buffer management algorithm called PAFD is proposed based on NPs. The PAFD algorithm takes into account the impact of the transmission environment on packets. It can adaptively balance between queue congestion and fairness according to the cache congestion. PAFD also supports the DiffServ model to improve network QoS based on NPs. The simulation results show that throughput and fairness are better balanced after this algorithm is applied. Finally, the PAFD algorithm is implemented on the IXP2400, which means that the hardware resource utilization of NPs is high.

   The future network has two development requirements: high-speed bandwidth and service diversification. Research on buffer management algorithms must suit these requirements. In the future, buffer management will become more complex, and therefore the requirements on NPs and other hardware will be more stringent. It is very important to consider the comprehensive performance of the algorithms while pursuing simplicity and easy implementation.

                   ACKNOWLEDGEMENTS
   This work was supported by Presidential Fellowship 2007-2009 and Dissertation Year Fellowship 2009-2010, Florida International University.

                     REFERENCES
[1]  Intel Corporation, "Intel internet exchange architecture software building blocks developer's manual [M/CD]," Document Number: 278664-010: 279-289, 73-86, 2006.
[2]  F. Buccafurri et al., "Analysis of QoS in cooperative services for real time applications," Data & Knowledge Engineering, Vol.67, No.3, 2008.
[3]  Yoshihiro Ito and Shuji Tasaka, "Feasibility of QoS control based on QoS mapping over IP networks," Computer Communications, Vol.31, No.10, 2008.
[4]  Anunay Tiwari and Anirudha Sahoo, "Providing QoS in OSPF based best effort network using load sensitive routing," Simulation Modelling Practice and Theory, Vol.15, No.4, 2007.
[5]  Daniel A. Menascé, Honglei Ruan, and Hassan Gomaa, "QoS management in service-oriented architectures," Performance Evaluation, Vol.64, No.7, 2007.
[6]  S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, August 1993.
[7]  Masayoshi Nabeshima, "Improving the performance of active buffer management with per-flow information," IEEE Communications Letters, Vol.6, No.7, July 2002.
[8]  RFC: Recommendations on Queue Management and Congestion Avoidance in the Internet.
[9]  W. Feng, Kang G. Shin, D.D. Kandlur, and D. Saha, "The Blue active queue management algorithms," IEEE/ACM Transactions on Networking, Vol.10, No.4, pp.513-528, 2002.
[10] C.V. Hollot, V. Misra, D. Towsley, and W. Gong, "Analysis and design of controllers for AQM routers supporting TCP flows," IEEE Transactions on Automatic Control, Vol.47, No.6, pp.945-959, 2002.
[11] M. Ghaderi and R. Boutaba, "Call admission control for voice/data integration in broadband wireless networks," IEEE Transactions on Mobile Computing, Vol.5, No.3, 2006.
[12] G. Ascia, V. Catania, and D. Panno, "An efficient buffer management policy based on an integrated fuzzy-GA approach," Proc. of IEEE INFOCOM 2002, New York, USA, 23-27 June 2002.
[13] Ellen L. Hahne and Abhijit K. Choudhury, "Dynamic queue length thresholds for multiple loss priorities," IEEE/ACM Transactions on Networking, Vol.10, No.3, June 2002.

                    AUTHORS PROFILE

Yechang Fang received his M.S. in Electrical Engineering from Florida International University (FIU), Miami, USA in 2007. From 2006 to 2007, he served as an IT specialist at IBM China, working with Nokia, Motorola and Ericsson. He is currently a Ph.D. candidate with a Dissertation Year Fellowship in the Department of Electrical and Computer Engineering, FIU. His area of research is

Deng Pan received his Ph.D. and M.S. degrees in Computer Science from the State University of New York at Stony Brook in 2007 and 2004, respectively. He received his M.S. and B.S. in Computer Science from Xi'an Jiaotong University, China, in 2002 and 1999, respectively. He is currently an Assistant Professor in the School of Computing and Information Sciences, FIU. He was an Assistant Professor in the School of Computing and Information Sciences, FIU from 2007 to 2008. His research interests include high performance routers and switches, high speed networking, quality of service, network processors and network security.

Zhuo Sun received her B.S. degree in computer science from Guangxi University, Nanning, China, in 2002, and her M.S. degree in software engineering from Sun Yat-sen University, Guangzhou, China, in 2005. She then worked at Nortel Guangzhou R&D, Guangzhou, China. She is currently a second-year Ph.D. student at Florida International University. Her research interests are in the area of high-speed networking.
telecommunication. Besides, his research interests also include computer
networking, network processors, fuzzy Logic, rough sets and classification.

                                   Kang K. Yen received the M.S. degree
                                   from University of Virginia in 1979 and
                                   Ph.D. degree from Vanderbilt University
                                   in 1985. He is currently a Professor and
                                   Chair of the Electrical Engineering
                                   Department, FIU. He is also a registered
                                   professional engineer in the State of
                                   Florida.     He     has       been      involved    in
                                   theoretical works on control theory and on
parallel simulation algorithms development for real-time applications in the
past several years. In the same periods, he has also participated in several
industry    supported   projects   on     real-time        data     processing        and
microprocessor-based control system designs. Currently, his research
interests are in the security related issues and performance improvement of
computer networks.

                                                                                                                              ISSN 1947-5500
                                                            (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                      Vol. 8, No. 1, 2010

                          Multistage Hybrid Arabic/Indian
                              Numeral OCR System
Yasser M. Alginaih, Ph.D., P.Eng. IEEE Member                         Abdul Ahad Siddiqi, Ph.D., Member IEEE & PEC
            Dept. of Computer Science                                              Dept. of Computer Science
                 Taibah University                                                     Taibah University
         Madinah, Kingdom of Saudi Arabia                                       Madinah, Kingdom of Saudi Arabia

Abstract— The use of OCR in postal services is not yet universal, and there
are still many countries that process mail sorting manually. Automated
Arabic/Indian numeral Optical Character Recognition (OCR) systems for
postal services are being used in some countries, but errors still occur
during the mail sorting process, causing a reduction in efficiency. The
need to investigate fast and efficient recognition algorithms/systems is
important so as to correctly read the postal codes from mail addresses and
to eliminate any errors during the mail sorting stage. The objective of
this study is to recognize printed numerical postal codes from mail
addresses. The proposed system is a multistage hybrid system which consists
of three different feature extraction methods, i.e., binary, zoning, and
fuzzy features, and three different classifiers, i.e., Hamming Net,
Euclidean Distance, and Fuzzy Neural Network classifiers. The proposed
system systematically compares the performance of each of these methods and
ensures that the numerals are recognized correctly. Comprehensive results
show a very high recognition rate, outperforming the other known methods
developed in the literature.

   Keywords-component; Hamming Net; Euclidean Distance; Fuzzy Neural
Network; Feature Extraction; Arabic/Indian Numerals

                      I.   INTRODUCTION

   Optical Character Recognition (OCR) is the electronic translation of
images of printed or handwritten text into machine-editable text format;
such images are captured through a scanner or a digital camera. The
research work in OCR encompasses many different areas, such as pattern
recognition, machine vision, artificial intelligence, and digital image
processing. OCR has been used in many areas, e.g., postal services, banks,
libraries, museums (to convert historical scripts into digital formats),
automatic text entry, information retrieval, etc.

   The objective of this work is to develop a numerical OCR system for
postal codes. Automatic Arabic/Indian numeral OCR systems for postal
services have been used in some countries, but there are still problems in
such systems, stemming from the fact that machines are unable to read the
crucial information needed to distribute the mail efficiently.
Historically, most civilizations have had different symbols that convey
numerical values, but the Arabic version is the simplest and most widely
accepted. In most Middle Eastern countries both the Arabic
(0,1,2,3,4,5,6,7,8,9) and Indian (۰,۱,۲,۳,٤,٥,٦,۷,۸,۹) numerals are used.
The objective of this work is to develop a numeral Arabic/Indian OCR system
to recognize postal codes from mail letters processed in the Middle Eastern
countries. A brief history of the development of postal services is quoted
from [1]: “The broad development of mechanization in postal operations was
not applied until the mid-1950s. The translation from mechanization to
automation of the U.S. Postal Services (USPS) started in 1982, when the
first optical character reader was installed in Los Angeles. The
introduction of computers revolutionized the postal industry, and since
then, the pace of change has accelerated dramatically [1].”

   In the 1980s, the first OCRs were confined to reading the Zip Code. In
the 1990s they expanded their capabilities to reading the entire address,
and in 1996, the Remote Computer Reader (RCR) for the USPS could recognize
about 35% of machine-printed and 2% of handwritten letter mail pieces.
Today, modern systems can recognize 93% of machine-printed and about 88% of
handwritten letter mail. Due to this progress in recognition technology,
the most important factor in the efficiency of mail sorting equipment is
the reduction of cost in mail processing. Therefore, a decade of intensive
investment in automated sorting technology resulted in high recognition
rates of machine-printed and handwritten addresses delivered by
state-of-the-art systems [1 – 2].


   According to the postal addressing standards [3], a standardized mail
address is one that is fully spelled out and abbreviated using the postal
services' standard abbreviations. The standard requires that mail addressed
to countries outside of the USA must have the address typed or printed in
Roman capital letters and Arabic numerals. The complete address must
include the name of the addressee, house number with street address or box
number/zip code, city, province, and country. Examples of postal addresses
used in the Middle East are given in Table 1.

    TABLE 1: Examples of postal addresses used in the Middle East

    Address with Arabic numerals       Address with Indian numerals
      Mr. Ibrahim Mohammad             ‫ﺍﻟﺴﻴﺪ ﻣﺤﻤﺪ ﻋﻠﻲ‬
      P.O. Box 56577                   ٥۲۱۰٦ :‫ﺹ. ﺏ‬
      RIYADH 11564                     ۱۲۳٤٥ :‫ﺍﻟﺮﻳﺎﺽ‬
      SAUDI ARABIA                     ‫ﺍﻟﻤﻤﻠﻜﺔ ﺍﻟﻌﺮﺑﻴﺔ ﺍﻟﺴﻌﻮﺩﻳﺔ‬

   Standards are being developed to make it easy to integrate newer
technologies into available components instead of replacing such
components, which is very costly; examples of such standards are the
OCR/Video Coding Systems (VCS) developed by the European Committee for
Standardization. The OCR/VCS enables postal operators to work with
different suppliers on needed replacements or extensions of sub-systems
without incurring significant engineering cost [1] [4].

   Many research articles are available in the field of automation of
postal systems. Several systems have been developed for address reading,
such as in the USA [5], UK [6], Japan [7], Canada [8], etc., but very few
countries in the Middle East use automated mail-processing systems. This is
due to the absence of organized mailing address systems; current processing
is done in post offices, which are limited and use only P.O. boxes. Canada
Post processes 2.8 billion letter mail pieces annually through 61
Multi-line Optical Character Readers (MLOCRs) in 17 letter sorting centers.
The MLOCR (Year 2000) has an error rate of 1.5% for machine-print reading
only, and the MLOCR/RCR (Year 2003) has an error rate of 1.7% for
print/script reading. Most of these low read errors are on handwritten
addresses and on outgoing foreign mail [9].

   The postal automation systems developed so far are capable of
distinguishing the city/country names, post and zip codes on handwritten
and machine-printed standard-style envelopes. In these systems, the
identification of the postal addresses is achieved by implementing an
address recognition strategy that consists of a number of stages, including
pre-processing, address block location, address segmentation, character
recognition, and contextual post-processing. Academic research in this area
has provided many algorithms and techniques, which have been implemented.
Many OCR systems, multi-font and multilingual, are available in the market.
Moreover, most of these systems provide a high recognition rate for printed
characters, between 95% and 100%, depending on the quality of the scanned
images fed into the systems and the application they are used for [9]. The
Kingdom of Saudi Arabia has also initiated its efforts in deploying the
latest technology of automatic mail sorting. It is reported in [10] that
Saudi Post has installed an advanced Postal Automation System, working with
a new GEO-data based postal code system, an Automatic Letter Sorting
Machine, and an OCR for simultaneous reading of Arabic and English
addresses. It comprises components for automatic forwarding, sequencing,
and coding.

   In his in-depth research study, Fujisawa [11] reports on the key
technical developments for Kanji (Chinese character) recognition in Japan.
Palumbo and Srihari [12] described a Hand Written Address Interpretation
(HWAI) system, and reported a throughput rate of 12 letters per second. An
Indian postal automation based on recognition of pin-code and city name,
proposed by Roy et al. in [13], uses Artificial Neural Networks for the
classification of English and Bangla postal zip codes. In their system they
used three classifiers for the recognition. The first classifier deals with
a 16-class problem (because of shape similarity the number is reduced from
20) for simultaneous recognition of Bangla and English numerals. The other
two classifiers are for recognition of Bangla and English numerals
individually. Ming Su et al. [14] developed an OCR system whose goal was to
accomplish automatic mail sorting for the Chinese postal system by the
integration of a mechanized sorting machine, computer vision, and the
development of OCR. El-Emami and Usher [15] tried to recognize postal
address words after segmenting them into letters; a structural analysis
method was used for selecting features of Arabic characters. On the other
hand, U. Pal [16] argues that, under the three-language formula, the
destination address block of a postal document of an Indian state is
generally written in three languages: English, Hindi, and the state
official language. Because of the inter-mixing of these scripts in postal
address writing, it is very difficult to identify the script in which a
pin-code is written. In their work, they proposed a tri-lingual (English,
Hindi and Bangla) 6-digit full pin-code string recognition system, and
obtained 99.01% reliability from their proposed system, whereas error and
rejection rates were 0.83% and 15.27%, respectively. In regard to
recognizing Arabic numerals, Sameh [17] described a technique for the
recognition of optical off-line handwritten Arabic (Indian) numerals using
Hidden Markov Models (HMM). Features that measure the image characteristics
at local, intermediate, and large scales were applied. Gradient,
structural, and concavity features


at the sub-region level are extracted and used as the features for the
Arabic (Indian) numerals. The achieved average recognition rate reported
was 99%.

   Postal services are going to remain an integral part of the
infrastructure of any economy. For example, recent growth in e-commerce has
caused a rise in international and domestic postal parcel traffic. To
sustain the role of mail as one of the most efficient means of business
communication, postal services have to permanently improve their
organizational and technological infrastructure for mail processing and
delivery [4]. Unfortunately, as explained above, the character recognition
process is not perfect, and errors often occur.

   A simplified illustration of how an OCR system is incorporated into
postal services is shown in Figure 1. This figure in no way reflects the
current technology used in available mail processing systems. Typically, an
OCR system is developed for the application of postal services in order to
improve the accuracy of mail sorting by recognizing the scanned Arabic and
Indian numerical postal codes from addresses of mail letters.

                  Figure 1: OCR in Postal Services

   The proposed method combines different feature extraction and
classification algorithms to produce a high recognition rate in this
application. The proposed hybrid system is explained in Section II of this
paper, which describes the different feature extraction, training and
classification techniques; Section III presents the results and
observations; and finally the concluding remarks are stated in Section IV.

            II.    PROPOSED HYBRID OCR SYSTEM

   The significance of this research project is in recognizing and
extracting the most essential information from addresses of mail letters,
i.e., postal zip codes. This system will have a profound effect on sorting
mail and automating the postal services system by reading the postal codes
from letter addresses, which contain Arabic (0, 1, 2, 3…) and Indian
(۰, ۱, ۲, ۳, ٤….) numerals. Therefore, this system can be considered a
bi-numeral recognition system.

   The proposed system includes more than one feature extraction and
classification method. As a result, the hybrid system will help reduce the
misclassification of numerals. The system can be used specifically in the
Middle East and in countries which use Arabic and Indian numerals in their
documents. The proposed design methodology includes a character recognition
system which goes through different stages, starting from preprocessing,
character segmentation, feature extraction and classification. The main
building blocks of a general OCR system are shown in Figure 2 and the
design of the proposed hybrid system is shown in Figure 3.

                   Figure 2: A General OCR System

   Figure 2 represents the stages a general OCR system goes through. The
process here ignores all the steps before the OCR step and assumes the
availability of the mail document as a grey-level bitmap graphic file. The
proposed OCR system in Figure 3 shows the preprocessing, feature
extraction, and classification stages. It also shows a comparison stage
that produces the output recognized numeral. After the preprocessing stage,
features are extracted using the first two feature extraction methods,
namely feature1 and feature2; these two feature vectors are then passed
through classifiers, namely classifier1 and classifier2, respectively. The
output from both classifiers is compared: if there is a match, the output
is accepted and no further processing is required for this numeral;
otherwise the third feature is calculated and passed through classifier3.
The output from classifier3 is then compared with the outputs of
classifier1 and classifier2. If the output of classifier3 matches either
the output of classifier1 or that of classifier2, then the output is
accepted,


otherwise the output is rejected and the postal letter needs to go through
either post-processing or manual sorting.

   In the next subsections, the preprocessing, feature extraction, training
and classification techniques used in this system are explained in detail.

              Figure 3: Proposed Hybrid OCR System

A. Preprocessing

   Postal mail images were assumed to be free of noise, with a skew angle
not exceeding ±2°. The preprocessing tasks performed are: localization of
the address, conversion from grey-scale images to binary images,
localization of the postal code on the image, and character segmentation.
The first step in pre-processing locates the address to be processed, such
as the incoming/outgoing addresses; as long as the address is in the proper
standard format, there is no problem in specifying its location. Following
the localization of the address, thresholding was used to convert the image
into binary: if a pixel value was above the threshold value it became white
(background), otherwise black (foreground) [18]. Here, the average of the
pixels in the image was taken to be the threshold value. At this stage,
most of the noise was eliminated by thresholding and only slight distortion
to the characters was observed, which suggests that pixels were either lost
or added to the characters during the thresholding process. Isolated noise
was removed during the character segmentation process. The zip or postal
code was then located according to the postal services standards. After
locating the postal code, the characters are segmented so that each can be
processed individually for proper recognition. At this point, all numerals
were normalized to a size of 25 x 20, which was decided experimentally
according to a 12-point font size numeral scanned at a resolution of 300
dpi. The normalization step aims to remove the variations of printed styles
and obtain standardized data.

B. Feature Extraction

   The proposed hybrid OCR system, Figure 3, is based on the feature
extraction method of character recognition. Feature extraction can be
considered as finding a set of vectors which effectively represents the
information content of a character. The features were selected in such a
way as to help in discriminating between characters. The proposed system
uses a combination of three different methods of feature extraction,
applied to each normalized numeral in the postal code. These features are:
the 2D array of the pixel values after the conversion of the address image
into binary; the array of black-pixel distribution values from the square
windows obtained by dividing each normalized character into 5x5 equal-size
windows [19]; and finally the maximized fuzzy descriptive features
[20 – 21], obtained using equation (1):

  S_ij = max_{x=1..N1} max_{y=1..N2} ( w[i − x, j − y] f_xy ),
                for i = 1 to N1, j = 1 to N2                        (1)

S_ij gives the maximum fuzzy membership pixel value using the fuzzy
function w[m, n] of equation (2), where f_xy is the (x, y) binary pixel
value of an input pattern (0 ≤ f_xy ≤ 1), and N1 and N2 are the height and
width of the character window.

  w[m, n] = exp( −β² (m² + n²) )                                    (2)

Through exhaustive search, β = 0.3 was found to be the most suitable value
to achieve a higher recognition rate. This maximized membership fuzzy
function, equation (2), was used in the second layer of the Fuzzy Neural
Network presented in [20 – 21], which is used as one of the classifiers of
the proposed system. S_ij gives a 2D fuzzy feature vector whose values are
between 0 and 1 and which has the same size as the normalized image window
of the numeral. It is known from the fuzzy feature vector method that
features which resemble the shape of the character are easily recognized,
due to this characteristic of the descriptive fuzzification function.
Therefore, feature values closer to the boundary of the character result in
a higher fuzzy membership value; similarly, values further from the
boundary of the character result in a lower fuzzy membership value.
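The maximized fuzzy descriptive features of equations (1) and (2) can be
sketched in Python as follows. This is a minimal brute-force sketch, not
the authors' implementation; the function names are our own, and foreground
pixels are assumed to be encoded as 1.

```python
import numpy as np

def fuzzy_weight(m, n, beta=0.3):
    """Descriptive fuzzification function of equation (2):
    w[m, n] = exp(-beta^2 * (m^2 + n^2))."""
    return np.exp(-(beta ** 2) * (m ** 2 + n ** 2))

def fuzzy_features(binary_img, beta=0.3):
    """Maximized fuzzy descriptive features S_ij of equation (1).

    binary_img: N1 x N2 array of 0/1 pixel values f_xy (foreground = 1).
    Returns an N1 x N2 array with values in [0, 1]: entries on the
    character strokes are 1 and decay with distance from the strokes.
    """
    n1, n2 = binary_img.shape
    s = np.zeros((n1, n2))
    for i in range(n1):
        for j in range(n2):
            best = 0.0
            # max over all pixels (x, y); only foreground pixels contribute
            for x in range(n1):
                for y in range(n2):
                    if binary_img[x, y]:
                        best = max(best, fuzzy_weight(i - x, j - y, beta))
            s[i, j] = best
    return s
```

The quadruple loop mirrors equation (1) directly; a production system would
restrict the maximization to a small neighbourhood, since w decays quickly
with distance for β = 0.3.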

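The multistage comparison logic described for Figure 3 can be sketched as
follows. This is an illustrative sketch only: the stage functions are
placeholders standing in for the binary/Hamming Net, zoning/Euclidean
Distance, and fuzzy/Fuzzy Neural Network stages of the proposed system.

```python
def multistage_recognize(numeral_img, stages):
    """Comparison logic of the proposed hybrid system (Figure 3).

    stages: list of three (extract_features, classify) pairs.
    Returns the recognized numeral, or None to signal rejection
    (post-processing or manual sorting).
    """
    (f1, c1), (f2, c2), (f3, c3) = stages
    out1 = c1(f1(numeral_img))
    out2 = c2(f2(numeral_img))
    if out1 == out2:            # first two classifiers agree: accept
        return out1
    out3 = c3(f3(numeral_img))  # otherwise compute the third feature
    if out3 in (out1, out2):    # accept if classifier3 matches either
        return out3
    return None                 # reject the numeral
```

Because the third feature is computed only on disagreement, most numerals
pass through just two of the three classifiers.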

                                                                               Arial, Times New Roman, Lucida Console, and New
C. Training                                                                    Courier. Each typeset contained 20 numerals for both
   A bitmap image file containing the sets of the numbers                      Arabic and Indian with 5 different font sizes (12, 14, 16,
from 0 – 9 Arabic and from ۰ – ۹ Indian was used in the                        18, 20), providing us with a total of 100 numerals for each
training process to calculate the prototype features. The                      typeset and 400 numerals for the complete training set.
training sets from which the feature prototypes were                           Figure 4 shows a full set for one typeset with all the font
calculated contained four different typesets, these are:                       sizes, note the figure is not to scale.

                             Figure 4: The training sets used to extract the prototype features. (Figure not to scale)

   The prototype numerals were then normalized to a size                       The prototype feature file for binary features contained 80
of 25 x 20. The three different features explained above                       feature vectors, each having a vector size of 25x20
were calculated from the normalized characters, and then                       features. Figure 6 shows an example of a 32-feature
stored in a separate file as prototypes to be compared with                    vector for a normalized numeral. As explained in the
the features extracted from the images under test. Figure                      feature extraction section, each of these features
5(a) shows a normalized image for the Arabic numeral (1)                       represents the black pixel distribution in a window size
and Figure 5(b) shows a normalized image for the Indian                        5x5. The features from font size 12 for all sets of Arabic
numeral (٤). From the image above, 1 represents the                            and Indian numerals were used as the prototype features
foreground and 0 represents the background of the                              to be passed to the Euclidean Distance Classifier. The file
numeral. Here, only the features for the normalized                            contained 80 vectors, each containing 32 features. Figure
characters with font size 12 were used as the prototype                        6 shows an example of a 32-feature vector for an Arabic
features to be passed to the Hamming Net classifier since                      numeral.
size 12 is considered as a standard size.

                                             (a)                                               (b)
                               Figure 5: Normalized Images showing Arabic Numeral 1 and Indian Numeral 4.

                                             Figure 6: Zoning features for a normalized numeral.

                                                                                                               ISSN 1947-5500
                                                            (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                      Vol. 8, No. 1, 2010

Figure 7 shows the fuzzy features for the Arabic numeral 1. The highlighted area resembles the shape of the numeral, where the fuzzy feature value equals 1. Figure 7 also shows that the closer a pixel is to the boundary of the numeral, the higher its fuzzy feature value, and the further it is from the boundary, the lower its fuzzy feature value.

                                Figure 7: Fuzzy features for the normalized numeral 1 (Arabic) – Size 25 x 20

   The prototype features were calculated from the normalized characters. For each font, the prototypes of the five font sizes for each numeral, in both Arabic and Indian, were averaged by adding them and then dividing the sum by 5. This resulted in 20 prototype feature vectors for each typeset, 10 for Arabic numerals and 10 for Indian numerals, giving a total of 80 prototype feature vectors, each containing 25 x 20 features, as shown in Figure 7. Many Arabic/Indian numeral sets for the 4 typesets were scanned at different resolutions and used during the testing process. Figure 8 shows some examples of the numeral sets used for testing.
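The averaging step can be sketched as follows. The container shapes and key names are illustrative assumptions, not the authors' code; the sketch only shows how 4 typesets x 20 numerals x 5 font sizes collapse into the 80 prototype vectors described above.

```python
import numpy as np

def build_prototype_bank(features_by_typeset):
    """Average the five font-size variants of each numeral's feature vector.

    features_by_typeset[typeset][numeral] holds a list of five feature
    vectors (one per font size); with 4 typesets and 20 numerals
    (10 Arabic + 10 Indian) this yields the 80 prototype vectors."""
    bank = {}
    for typeset, numerals in features_by_typeset.items():
        for numeral, size_variants in numerals.items():
            v = np.asarray(size_variants, dtype=float)         # shape: (5, 500)
            bank[(typeset, numeral)] = v.sum(axis=0) / len(v)  # add them, divide by 5
    return bank

# Dummy data: 4 typesets x 20 numerals x 5 sizes of all-ones 25*20 = 500-dim vectors.
dummy = {t: {d: [np.ones(500)] * 5 for d in range(20)} for t in range(4)}
bank = build_prototype_bank(dummy)
```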

                                Figure 8: The results of testing some complete Arabic and Indian numeral sets

D. Classification

   A multistage OCR system with three feature extraction and three classification algorithms is employed to maintain accuracy in the recognition of the postal codes. The first classifier used is the Euclidean distance, which provides the ordinary distance between two points. To recognize a particular input numeral feature vector, the system compares this feature vector with the database of feature vectors of normalized numerals using the Euclidean distance nearest-neighbor classifier [22]. If the feature vector of the input is q and that of a prototype is p, then the Euclidean distance between the two is defined as:

   d = \sqrt{(p_0 - q_0)^2 + (p_1 - q_1)^2 + \cdots + (p_{N-1} - q_{N-1})^2}
     = \sqrt{\sum_{i=0}^{N-1} (p_i - q_i)^2}                                  (3)

where p = [p_0 \; p_1 \; \cdots \; p_{N-1}]^T, q = [q_0 \; q_1 \; \cdots \; q_{N-1}]^T, and N is the size of the vector containing the features. The match between the two vectors is obtained by minimizing d.

   The second classifier is the Hamming Net classifier [23 – 24], shown in Figure 9 below.
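A minimal sketch of this nearest-neighbor rule, assuming the prototypes are stacked row-wise in a NumPy array (the variable names are ours, not the authors'):

```python
import numpy as np

def euclidean_nn(q, prototypes, labels):
    """Return the label of the prototype p minimizing equation (3),
    d = sqrt(sum_i (p_i - q_i)^2), over the whole prototype database."""
    P = np.asarray(prototypes, dtype=float)   # shape: (num_prototypes, N)
    q = np.asarray(q, dtype=float)            # shape: (N,)
    d = np.sqrt(((P - q) ** 2).sum(axis=1))   # one Euclidean distance per prototype
    return labels[int(np.argmin(d))]          # best match minimizes d

# Toy example with three 4-dimensional prototypes:
protos = [[0, 0, 0, 0], [1, 1, 1, 1], [1, 0, 1, 0]]
match = euclidean_nn([0.9, 1, 1, 1], protos, ["zero", "one", "alt"])
```

In the actual system, `prototypes` would be the 80 averaged feature vectors and `labels` the corresponding numeral classes.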


In Figure 9, x is the input vector, y is the input to the Maxnet, O is the output of the Maxnet, c is the encoded class prototype vector, and M is the number of classes. The recurrent Maxnet computation, with mutual inhibition weight –0.2, is:

   o = w_M y = \begin{bmatrix} 1 & -0.2 & \cdots & -0.2 \\ -0.2 & 1 & \cdots & -0.2 \\ \vdots & & \ddots & \vdots \\ -0.2 & -0.2 & \cdots & 1 \end{bmatrix} net^k = w_M \, net^k        (8)

where net^k = [net_1^k \; \cdots \; net_M^k]^T, O = f(net^k) = [f(net_1^k) \; \cdots \; f(net_M^k)]^T, and

   f(net_j) = \begin{cases} 0 & \text{when } net_j < 0 \\ net_j & \text{when } net_j \ge 0 \end{cases}
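Putting the Maxnet recurrence of equation (8) together with the Hamming net initialization of equations (4) – (7) (given with the algorithm below), the whole classifier can be sketched as follows. This is a minimal reading of the equations, assuming bipolar (+1/–1) inputs and class exemplars; it is not the authors' implementation.

```python
import numpy as np

def hamming_maxnet(x, exemplars, eps=0.2, max_iter=1000):
    """Hamming net (eqs. 4-6) followed by Maxnet competition (eqs. 7-8).

    x and the class exemplars c_j are bipolar (+1/-1) vectors; net_j counts
    how many components of x agree with exemplar j."""
    C = np.asarray(exemplars, dtype=float)     # M x n matrix of prototypes c_j
    M, n = C.shape
    W = C / 2.0                                # eq. (4): w_ji = c_ji / 2
    b = n / 2.0                                # eq. (5): b_j = n / 2
    y = b + W @ np.asarray(x, dtype=float)     # eqs. (6)-(7): y_j = net_j
    for _ in range(max_iter):                  # eq. (8): recurrent lateral inhibition
        # y_j <- f(y_j - eps * sum_{k != j} y_k), with f(.) clipping negatives to 0
        y = np.maximum(0.0, (1 + eps) * y - eps * y.sum())
        if np.count_nonzero(y) <= 1:           # a single winner remains
            break
    return int(np.argmax(y))                   # index of the best-match class

# Toy example: three 4-bit bipolar exemplars; x agrees most with exemplar 0.
C = [[1, 1, 1, 1], [-1, -1, -1, -1], [1, -1, 1, -1]]
winner = hamming_maxnet([1, 1, 1, -1], C)
```

The inhibition constant `eps=0.2` matches the off-diagonal –0.2 entries of the Maxnet weight matrix in equation (8).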

      Figure 9: Hamming net with Maxnet as the second layer

   The algorithm designed for the minimum Hamming distance classifier, adopted from [23], is as follows:

Step 1: Initialize the weight matrix w_{ji} and the biases b_j:

        w_{ji} = c_{ji} / 2                              (4)
        b_j = n / 2                                      (5)
        i = 1, 2, \ldots, n; \quad j = 1, 2, \ldots, M

Step 2: For each input vector x, do Steps 3 to 5.

Step 3: Compute net_j, j = 1, 2, \ldots, M:

        net_j = b_j + \sum_{i=1}^{n} x_i w_{ji}          (6)

Step 4: Initialize the activation y_j for the Maxnet, the second layer of the network:

        y_j = net_j                                      (7)

Step 5: The Maxnet compares the outputs net_j and enforces the largest one as the best match prototype, while suppressing the rest to zero.

Step 6: Recurrent processing of the Maxnet, as given in equation (8).

   The third classifier used in this work is the Fuzzy Neural Network (FNN) developed by Kwan and Cai [20]. It uses the fuzzy descriptive features explained in the feature extraction section. Figure 10 shows the structure of the network, which is a four-layer FNN. The first layer is the input layer; it accepts patterns into the network and consists of the 2D pixels of the input numeral. The second layer is a 2D layer of MAX fuzzy neurons whose purpose is to fuzzify the input patterns through the weighted function w[m, n] of equation (2). The third layer produces the learned patterns. The fourth layer is the output layer, which performs defuzzification and provides non-fuzzy outputs; it chooses the maximum similarity as the activation threshold for all the fuzzy neurons in the fourth layer (refer to [20 – 21] for details on the FNN). After passing through the different stages of the classifier, the character is identified and the corresponding class is assigned. In the classification phase, feature vectors of an unknown character are computed and matched with the stored prototypes; matching is done by calculating a distance (dissimilarity) measure between the character and the stored prototypes. In the post-processing step, recognized postal codes are compared against valid postal codes stored in a database.

   The proposed system, shown in Figure 3, suggests that if the Hamming Net Classifier and the Euclidean Distance Classifier do not provide a match, then the fuzzy features are calculated and passed through the FNN classifier. The FNN classifier differs from a traditional neural network in that the function of each fuzzy neuron is identified and its semantics defined. The function of such networks is the modeling of inference rules for classification. The outputs of an FNN provide a measurement of the realization of a rule, i.e. the membership of an expected class. Typically, an FNN is represented as a special four-layer feed-forward neural network, in which the first layer corresponds to the input variables, the second layer symbolizes the fuzzy rules, the third layer produces the learned patterns and the fourth


layer represents the output variables. It is trained by means of a data-driven learning method derived from neural network theory. The result of the FNN classifier is therefore compared with the results of the other two classifiers; if a match is found between the FNN's result and either of the previously calculated classifier results, the numeral is accepted, otherwise it is rejected (Figure 3).

                 III. RESULTS AND OBSERVATIONS

   The authors presented the initial results of this research study in [25], where only one font was used and no thorough testing of the system was conducted. The proposed system can handle a small amount of skew, in the range of –2 to +2 degrees. The system supports BMP image formats, with image scan resolutions of 100 – 300 dpi and above. The documents used were of multiple fonts with multiple sizes. The fonts used for testing were Arial, Times New Roman, Lucida Console and Courier New. Font sizes of 10 – 20 points, in normal and bold styles, were incorporated in the system.

   Extensive testing of the proposed OCR system has been done on approximately 200 mail address images of different-quality printed documents with different resolutions, font styles and sizes. Figure 11 shows an example of a processed mail address.

   The proposed hybrid system produced successful results in recognizing Arabic and Indian numerals from postal letters: a 100% recognition rate with no misclassification of numerals and a rejection rate of less than 1%. When the rejected numerals are counted as misclassified, the average recognition rate over all images, which varied in resolution, typeset and brightness, was 99.41%. This shows the effectiveness of the system in providing a high recognition rate using 4 different fonts and suggests that more fonts could be applied, if desired.

            Figure 10: Four-Layer Feed-forward FNN.

                                Figure 11: A processed envelope containing a postal code written in Indian numerals.
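The accept/reject logic of the hybrid pipeline described in the classification section can be sketched as follows. The classifier callables are placeholders for the three classifiers above, and this control flow is our reading of Figure 3, not the authors' code:

```python
def hybrid_classify(numeral, euclidean, hamming, fnn):
    """Accept a numeral only when two independent classifiers agree.

    euclidean/hamming consume the binary or zoning features; fnn consumes
    the fuzzy features and is consulted only when the first two disagree."""
    e = euclidean(numeral)
    h = hamming(numeral)
    if e == h:                      # the two first-stage classifiers agree
        return e
    f = fnn(numeral)                # fall back to the fuzzy neural network
    if f in (e, h):                 # FNN confirms one of the earlier results
        return f
    return None                     # no agreement: reject the numeral

# Toy check with stub classifiers that return fixed labels:
accept = hybrid_classify("img", lambda _: 7, lambda _: 3, lambda _: 7)   # -> 7
reject = hybrid_classify("img", lambda _: 7, lambda _: 3, lambda _: 5)   # -> None
```

Rejected numerals would then be flagged for the post-processing check against the database of valid postal codes.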


   Future work will use more fonts and will incorporate a post-processing step to check the availability of the postal codes, so as to ensure the character recognition of Middle Eastern countries' addresses for mail sorting and proper distribution of mail according to postal zip codes and cities. Tables 2 – 5 show the recognition rates for the proposed hybrid method and for the three methods used separately. The proposed hybrid method outperformed the other three methods used separately, as shown in Table 2. The recognition rates in Table 2 do not include any of the rejected numerals. It can also be observed that the higher the resolution, the better the recognition rate.

TABLE 2: Recognition rate for all methods using images with different resolutions

   Resolution               100%     200%     300%     400%
   No. of Characters        5460     4340     5690     2710
   Hamming                  99.08%   99.39%   98.80%   98.89%
   Euclidean Distance       99.36%   99.08%   98.76%   99.88%
   Fuzzy Neural Network     98.13%   99.31%   99.43%   100%
   Proposed Hybrid Method   100%     100%     100%     100%

   Table 3 shows the total number of misclassified numerals at different resolutions.

TABLE 3: Number of misclassified characters using images with different resolutions

   Resolution               100%   200%   300%   400%
   No. of Characters        5460   4340   5690   2710
   Hamming                  72     70     68     30
   Euclidean Distance       50     40     70     3
   Fuzzy Neural Network     146    30     32     0
   Proposed Hybrid Method   0      0      0      0

   Table 4 shows the number of rejected characters when using the proposed hybrid method. As shown, there were no misclassified or rejected numerals at the 400% resolution, due to the fact that larger numerals provide good quality numerals when normalized.

TABLE 4: Number of rejected characters using the proposed hybrid method

   Resolution                   100%   200%   300%   400%
   No. of Rejected Characters   40     30     38     0

   Table 5 presents the recognition rate for the hybrid method when considering the rejected numerals as misclassified.

TABLE 5: Recognition rate including rejected characters for the proposed hybrid method

   Resolution         100%     200%     300%     400%
   Recognition Rate   99.27%   99.31%   99.33%   100%

                        IV. CONCLUSION

   In this work, a hybrid numeral OCR system for Arabic/Indian postal zip codes was successfully developed and thoroughly tested. The system used three different feature extraction methods and three different classifier techniques in order to guarantee the accuracy of any numeral processed through it. Over 200 letter images were used, where the postal code was localized and then recognized through the proposed system. Four different font styles with sizes ranging from 10 to 20 points were used in testing the system, and the recognition accuracy was 99.41% when considering the rejected numerals as unrecognized.

                        ACKNOWLEDGMENT

   The authors would like to acknowledge the financial support of the Deanship of Scientific Research at Taibah University, KSA, under research reference number 429/230, academic year 2008/2009, to carry out the research to design and develop the postal OCR system for the recognition of address postal codes in Middle Eastern countries.

                        REFERENCES

[1] John Buck, "The postal service revolution – A look at the past and where are we headed," Mailing Systems Technology magazine, Dec. 2007.
[2] Simon Jennings, "Speed Read," OCR Systems, March 2008, pp. 44-50.
[3] United States Postal Services, "Postal Addressing Standards", viewed May 28, 2009.
[4] John Buck, "International mail-sorting automation in the low-volume environment," The Journal of Communication Distribution, May/June 2009.
[5] United States Postal Services, viewed 23rd of June, 2009.
[6] A.C. Downton, R.W.S. Tregidgo, C.G. Leedham, and Hendrawan, "Recognition of handwritten British postal addresses," From Pixels to Features III: Frontiers in Handwriting Recognition, pp. 129-143, 1992.
[7] Y. Tokunaga, "History and current state of postal mechanization in Japan," Pattern Recognition Letters, vol. 14, no. 4, pp. 277-280, April 1993.
[8] Canada Post, viewed 13th of July, 2009.
[9] Canada Post, In Quest of More Advanced Recognition Technology, 28th of October 2004, viewed March 2009.
[10] Udo Miletzki, Product Manager Reading Coding, Siemens AG, and Mohammed H. Al Darwish, "Significant technological advances in


     Saudi Post, the first Arabic address reader for delivery point sorting," Saudi Post Corporation World Mail Review, Nov. 2008.
[11] Hiromichi Fujisawa, "A view on the past and future of character and document recognition," International Conference on Document Analysis and Recognition, Brazil, Sept. 23-26, 2009.
[12] P.W. Palumbo and S.N. Srihari, "Postal address reading in real time," Intl. J. of Imaging Science and Technology, 1996.
[13] K. Roy, S. Vajda, U. Pal, B.B. Chaudhuri, and A. Belaid, "A system for Indian postal automation," Proc. 2005 Eighth Intl. Conf. on Document Analysis and Recognition (ICDAR'05).
[14] Yih-Ming Su and Jhing-Fa Wang, "Recognition of handwritten Chinese postal addresses using Neural Networks," Proc. International Conference on Image Processing and Character Recognition, Kaohsiung, Taiwan, 1996, pp. 213-219.
[15] El-Emami and M. Usher, "On-line recognition of handwritten Arabic characters," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 704-710, 1990.
[16] U. Pal, R.K. Roy, and F. Kimura, "Indian multi-script full pin-code string recognition for postal automation," 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 2009, pp. 460-465.
[17] Sameh M. Awaidah and Sabri A. Mahmoud, "A multiple feature/resolution scheme to Arabic (Indian) numerals recognition using hidden Markov models," Signal Processing, vol. 89, no. 6, June 2009.
[18] R.C. Gonzalez and R.E. Woods, Digital Image Processing, 3rd Edition, Prentice Hall, Upper Saddle River, New Jersey, 2008.
[19] Y.M. Alginahi, "Thresholding and character recognition for security documents with watermarked background," Conference on Document Image Computing, Techniques and Applications, Canberra, Australia, December 2008.
[20] H.K. Kwan and Y. Cai, "A Fuzzy Neural Network and its applications to pattern recognition," IEEE Trans. on Fuzzy Systems, vol. 2, no. 3, August 1994, pp. 185-193.
[21] P.M. Patil, U.V. Kulkarni and T.R. Sontakke, "Performance evaluation of Fuzzy Neural Network with various aggregation operators," Proc. 9th International Conference on Neural Information Processing, vol. 4, Nov. 2002, pp. 1744-1748.
[22] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis, Wiley, New York, NY, 1973.
[23] S-T. Bow, Pattern Recognition and Image Processing, 2nd Edition, Marcel Dekker, Inc., New York, Basel, 2002.
[24] Artificial Neural Networks Technology, viewed Jan. 2009.
[25] Yasser M. Alginahi and Abdul Ahad Siddiqi, "A Proposed Hybrid OCR System for Arabic and Indian Numerical Postal Codes," The 2009 International Conference on Computer Technology & Development (ICCTD), Kota Kinabalu, Malaysia, November 2009, pp. 400-405.

   Yasser M. Alginahi became a member of IEEE in 2000. He earned a Ph.D. in electrical engineering from the University of Windsor, Ontario, Canada, and a Master of Science in electrical engineering and a Bachelor of Science in biomedical engineering from Wright State University, Ohio, U.S.A. Currently, he is an Assistant Professor, Dept. of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah, KSA. His current research interests are document analysis, pattern recognition (OCR), crowd management, ergonomics and wireless sensor networks. He is a licensed Professional Engineer and a member of Professional Engineers Ontario, Canada (PEO). He has over a dozen research publications and technical reports to his credit.

   Dr. Abdul Ahad Siddiqi received a PhD and an MSc in Artificial Intelligence in 1997 and 1992, respectively, from the University of Essex, U.K. He also holds a bachelor degree in Computer Systems Engineering from NED University of Engineering and Technology, Pakistan. He is a Member of IEEE and the Pakistan Engineering Council. Presently he is an Associate Professor at the College of Computer Science and Engineering at Taibah University, Madinah, KSA. He worked as Dean of Karachi Institute of Information Technology, Pakistan (affiliated with the University of Huddersfield, U.K.) between 2003 and 2005. He has over 18 research publications to his credit. He has received research grants from various funding agencies, notably from Pakistan Telecom and the Deanship of Research at Taibah University, for research in the areas of Intelligent Information Systems, Information Technology, and applications of Genetic Algorithms.

                                                                                                                ISSN 1947-5500
                                                               (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                  Vol. 8, No. 1, 2010

        Attribute Weighting with Adaptive NBTree for
        Reducing False Positives in Intrusion Detection

       Dewan Md. Farid and Jerome Darmont                                            Mohammad Zahidur Rahman
    ERIC Laboratory, University Lumière Lyon 2                                 Department of Computer Science and Engineering
       Bat L - 5 av. Pierre Mendes France                                                Jahangirnagar University
          69676 BRON Cedex, France                                                      Dhaka - 1342, Bangladesh

Abstract—In this paper, we introduce new learning algorithms for reducing false positives in intrusion detection. They are based on decision-tree-based attribute weighting with an adaptive naïve Bayesian tree, which not only reduces false positives (FP) to an acceptable level, but also scales up the detection rates (DR) for different types of network intrusions. Due to the tremendous growth of network-based services, intrusion detection has emerged as an important technique for network security. Recently, data mining algorithms have been applied to network-based traffic data and host-based program behaviors to detect intrusions or misuse patterns, but current intrusion detection algorithms still suffer from issues such as unbalanced detection rates, large numbers of false positives, and redundant attributes that increase the complexity of the detection model and degrade detection accuracy. The purpose of this study is to identify important input attributes for building an intrusion detection system (IDS) that is computationally efficient and effective. Experimental results on the KDD99 benchmark network intrusion detection dataset indicate that the proposed approach can significantly reduce the number and percentage of false positives and balance the detection rates for different types of network intrusions.

    Keywords-attribute weighting; detection rates; false positives; intrusion detection system; naïve Bayesian tree

                       I.    INTRODUCTION

    With the popularization of network-based services, intrusion detection systems (IDS) have become important tools for ensuring network security by detecting violations of information security policy. IDS collect information from a variety of network sources using intrusion detection sensors, and analyze the information for signs of intrusions that attempt to compromise the confidentiality and integrity of networks [1]-[3]. Network-based intrusion detection systems (NIDS) monitor and analyze network traffic to detect intrusions from internal and external intruders [4]-[9]. Internal intruders are inside users of the network with some authority who try to gain extra abilities to take action without legitimate authorization. External intruders are outside users without any authorized access to the network that they attack. IDS notify the network security administrator or an automated intrusion prevention system (IPS) about network attacks when an intruder tries to break into the network. Since the amount of audit data that an IDS needs to examine is very large even for a small network, several data mining algorithms, such as decision trees, naïve Bayesian classifiers, neural networks, support vector machines, and fuzzy classification [10]-[20], have been widely used by the IDS community for detecting known and unknown intrusions. Data mining based intrusion detection algorithms aim to solve the problems of analyzing huge volumes of audit data and optimizing detection rules [21]. But there are still some drawbacks in currently available commercial IDS, such as low detection accuracy, a large number of false positives, unbalanced detection rates for different types of intrusions, long response times, and redundant input attributes.

    A conventional intrusion detection database is complex, dynamic, and composed of many different attributes. The problem is that not all attributes in an intrusion detection database may be needed to build an efficient and effective IDS. In fact, the use of redundant attributes may interfere with the correct completion of the mining task, because the information they add is already contained in other attributes. Using all attributes may simply increase the overall complexity of the detection model, increase computational time, and decrease the detection accuracy of the intrusion detection algorithms. It has been shown that effective attribute selection improves the detection rates for different types of network intrusions. In this paper, we present new learning algorithms for network intrusion detection using decision-tree-based attribute weighting with an adaptive naïve Bayesian tree. In a naïve Bayesian tree (NBTree), nodes contain and split as regular decision trees, but the leaves contain naïve Bayesian classifiers. The proposed approach estimates the degree of attribute dependency by constructing a decision tree, and considers the depth at which attributes are tested in the tree. The experimental results show that the proposed approach not only improves balanced detection for different types of network intrusions, but also significantly reduces the number and percentage of false positives in intrusion detection.

    The rest of this paper is organized as follows. In Section II, we outline the intrusion detection models, the architecture of data mining based IDS, and related work. In Section III, the basic concepts of feature selection and the naïve Bayesian tree are introduced. In Section IV, we introduce the proposed algorithms. In Section V, we apply the proposed algorithms to the area of intrusion detection using the KDD99 benchmark

network intrusion detection dataset, and compare the results to other related algorithms. Finally, Section VI contains the conclusions and future work.

A. Misuse Vs. Anomaly Vs. Hybrid Detection Model

    Intrusion detection techniques are broadly classified into three categories: misuse, anomaly, and hybrid detection models. Misuse or signature-based IDS detect intrusions based on known intrusions or attacks stored in a database, performing pattern matching of incoming packets and/or command sequences against the signatures of known attacks. Known attacks can be detected reliably with a low false positive rate using misuse detection techniques, and such a system begins protecting the computer/network immediately upon installation. But the major drawbacks of misuse-based detection are that it requires frequent signature updates to keep the signature database up-to-date, and that it cannot detect previously unknown attacks. Misuse detection systems use various techniques including rule-based expert systems, model-based reasoning systems, state transition analysis, genetic algorithms, fuzzy logic, and keystroke monitoring [22]-[25].

    Anomaly-based IDS detect deviations from normal behavior. They first create a normal profile of system, network, or program activity; any activity that deviates from the normal profile is then treated as a possible intrusion. Various data mining algorithms have been used for anomaly detection, including statistical analysis, sequence analysis, neural networks, artificial intelligence, machine learning, and artificial immune systems [26]-[33]. Anomaly-based IDS have the ability to detect new or previously unknown attacks, as well as insider attacks, but their major drawback is a large number of false positives. A false positive occurs when an IDS reports as an intrusion an event that is in fact legitimate network/system activity.

    A hybrid or compound detection system detects intrusions by combining both misuse and anomaly detection techniques. A hybrid IDS makes decisions using a "hybrid model" that is based on both the normal behavior of the system and the intrusive behavior of the intruders. Table I compares the characteristics of the misuse, anomaly, and hybrid detection models.

      TABLE I. COMPARISON OF INTRUSION DETECTION MODELS

  Characteristics          Misuse                     Anomaly        Hybrid
  Detection Accuracy       High (for known attacks)   Low            High
  Detecting New Attacks    No                         Yes            Yes
  False Positives          Low                        Very high      High
  False Negatives          High                       Low            Low
  Timely Notifications     Fast                       Slow           Rather fast
  Update Usage Patterns    Frequent                   Not frequent   Not frequent

B. Architecture of Data Mining Based IDS

    An IDS monitors network traffic in a computer network like a network sniffer and collects network logs. The collected network logs are then analyzed for rule violations using data mining algorithms. When any rule violation is detected, the IDS alerts the network security administrator or an automated intrusion prevention system (IPS). The generic architectural model of data mining based IDS is shown in Fig. 1.

          Figure 1. Organization of a generalized data mining based IDS

    •   Audit data collection: IDS collect audit data, which are analyzed by the data mining algorithms to detect suspicious activities or intrusions. The source of the data can be host/network activity logs, command-based logs, and application-based logs.

    •   Audit data storage: IDS store the audit data for future reference. The volume of audit data is extremely large, and adaptive intrusion detection currently aims to solve the problems of analyzing huge volumes of audit data and optimizing detection rules.

    •   Processing component: The processing block is the heart of an IDS. It is where the data mining algorithms are applied to detect suspicious activities. Algorithms for the analysis and detection of intrusions have traditionally been classified into two categories: misuse (or signature) detection and anomaly detection.

    •   Reference data: The reference data stores information about known attacks or profiles of normal behavior.

    •   Processing data: The processing element must frequently store intermediate results such as

information about partially fulfilled intrusion signatures.

    •   Alert: This is the output of the IDS that notifies the network security officer or automated intrusion prevention system (IPS).

    •   System security officer or intrusion prevention system (IPS): carries out the prescriptions controlled by the IDS.

C. Related Work

    The concept of intrusion detection began with Anderson's seminal paper in 1980 [34], which introduced a threat classification model and developed a security monitoring surveillance system based on detecting anomalies in user behavior. In 1986, Denning proposed several models for commercial IDS development based on statistics, Markov chains, time series, etc. [35], [36]. In 2001, Lindqvist et al. proposed a rule-based expert system called eXpert-BSM for detecting misuse of a host machine by analyzing activities inside the host in the form of audit trails [37]; it generates detailed reports and recommendations for system administrators, and produces few false positives. Rules are conditional statements derived by employing domain expert knowledge. In 2005, Fan et al. proposed a method that generates artificial anomalies in the training dataset of an IDS to handle both misuse and anomaly detection [38]. This method injects artificial anomaly data into the training data to help a baseline classifier distinguish between normal and anomalous data. In 2006, Bouzida et al. [39] introduced a supplementary condition to the baseline decision tree (DT) for anomaly intrusion detection: instead of assigning a default class (normally based on the probability distribution) to a test instance that is not covered by the tree, the instance is assigned to a new class. Instances with the new class are then examined for unknown attack analysis. In 2009, Wu and Yen [21] applied the DT and support vector machine (SVM) algorithms to build two classifiers for comparison, employing a sampling method with several different normal data ratios. More specifically, the KDD99 dataset is split into several different proportions based on the normal class label for both the training set and the testing set. The overall evaluation of a classifier is based on the average value of the results. It is reported that, in general, DT is superior to the SVM classifier. Similarly, Peddabachigari et al. [40] applied DT and SVM to intrusion detection and showed that the decision tree is better than SVM in terms of overall accuracy; in particular, DT is much better at detecting user-to-root (U2R) and remote-to-local (R2L) network attacks.

    The naïve Bayesian (NB) classifier produces surprisingly good classification accuracy in comparison with other classifiers on the KDD99 benchmark intrusion detection dataset. In 2001, Barbara et al. [41] proposed a method based on a technique called Pseudo-Bayes estimators to enhance the ability of the ADAM intrusion detection system [42] to detect new attacks and reduce false positives; it estimates the prior and posterior probabilities for new attacks using information derived from normal instances and known attacks, without requiring prior knowledge about new attacks. This study constructs a naïve Bayes classifier to classify a given instance into a normal instance, a known attack, or a new attack. In 2004, Amor et al. [43] conducted an experimental study comparing the performance of the NB classifier and DT on the KDD99 dataset. This analysis reported that DT performs better in classifying normal, denial of service (DoS), and R2L attacks, whereas the NB classifier is superior in classifying Probe and U2R attacks. With respect to running time, the authors pointed out that the NB classifier is 7 times faster than DT. Another naïve Bayes method, for detecting signatures of specific attacks, was proposed by Panda and Patra in 2007 [44]. From experimental results on the KDD99 dataset, the authors conclude that the NB classifier outperforms the back-propagation neural network classifier in terms of detection rates and false positives, although it is also reported that the NB classifier produces relatively many false positives. In later work in 2009, the same authors, Panda and Patra [45], compared the NB classifier with 5 other similar classifiers, i.e., JRip, Ridor, NNge, Decision Table, and Hybrid Decision Table, and the experimental results show that the NB classifier is better than the other classifiers.

          III.   FEATURE SELECTION AND ADAPTIVE NB TREE

A. Feature Selection

    Feature selection is indispensable for high-performance intrusion detection using data mining algorithms, because irrelevant and redundant features may lead to a complex intrusion detection model as well as poor detection accuracy. Feature selection is the process of finding a subset of the total original features. Its purpose is to remove irrelevant input features from the dataset in order to improve classification accuracy. Feature selection is particularly useful in application domains that have a large number of input dimensions, like intrusion detection. Many data mining methods have been used for selecting important features from a training dataset, such as information-gain based, gain-ratio based, principal component analysis (PCA), genetic search, and classifier ensemble methods [46]-[53]. In 2009, Yang et al. [54] introduced a wrapper-based feature selection algorithm that finds the most important features in the training dataset using a random mutation hill climbing method, and then employs a linear support vector machine (SVM) to evaluate the selected feature subsets. Chen et al. [55] proposed a neural-tree based algorithm to identify important input features for classification, based on an evolutionary algorithm in which a feature that contributes more to the objective function is considered an important feature.

    In this paper, to select the important input attributes from the training dataset, we construct a decision tree by applying the ID3 algorithm to the training dataset. The ID3 algorithm constructs a decision tree using information theory [56], choosing as splitting attributes those with maximum information gain. Information gain is the amount of information associated with an attribute value, related to its probability of occurrence. Entropy quantifies information and measures the amount of randomness in a dataset. When all data in a set belong to a single class, there is no uncertainty, and the entropy is zero. The objective of the ID3 algorithm is to iteratively partition the given dataset into

sub-datasets, where all the instances in each final subset belong to the same class. The value of entropy is between 0 and 1, and it reaches its maximum when the probabilities are all the same. Given probabilities p1, p2, ..., ps, where Σ_{i=1..s} pi = 1:

        Entropy: H(p1, p2, ..., ps) = Σ_{i=1..s} pi log(1/pi)                (1)

    Given a dataset D, H(D) measures the disorder of the original dataset. When D is split into s new sub-datasets S = {D1, D2, ..., Ds}, we can again look at the entropy of those sub-datasets. A subset is completely ordered if all instances in it belong to the same class. The ID3 algorithm calculates the gain by equation (2):

        Gain(D, S) = H(D) - Σ_{i=1..s} p(Di) H(Di)                           (2)

    After constructing the decision tree from the training dataset, we weight the attributes of the training dataset by the minimum depth at which each attribute is tested in the decision tree. The depth of the root node of the decision tree is 1. The weight for an attribute is set to 1/d, where d is the minimum depth at which the attribute is tested in the tree. The weights of attributes that do not appear in the decision tree are set to zero.

B. Naïve Bayesian Tree

    The naïve Bayesian tree (NBTree) is a hybrid learning approach combining decision trees and naïve Bayesian classifiers. In an NBTree, nodes contain and split as regular decision trees, but the leaves are replaced by naïve Bayesian classifiers, so that the advantages of both decision trees and naïve Bayes can be utilized simultaneously [57]. Depending on the precise nature of the probability model, the NB classifier can be trained very efficiently in a supervised setting. In many practical applications, parameter estimation for naïve Bayesian models uses the method of maximum likelihood. Suppose the training dataset D consists of predictive attributes {A1, A2, ..., An}, where each attribute Ai = {Ai1, Ai2, ..., Aik} contains attribute values, and a set of classes C = {C1, C2, ..., Cn}. The objective is to classify an unseen example whose class value is unknown but whose values for attributes A1 through Ak are known. The aim of decision tree learning is to construct a tree model {A1, A2, ..., An} → C. Correspondingly, by Bayes' theorem, if attribute Ai is discrete or continuous we have:

        P(Cj | Aij) = P(Aij | Cj) P(Cj) / P(Aij)                             (3)

where P(Cj | Aij) denotes the posterior probability. The aim of Bayesian classification is to choose the class that maximizes this posterior probability. Since P(Aij) is a constant independent of C:

        C = arg max P(Cj | Aij) = arg max P(Aij | Cj) P(Cj)                  (4)

    The adaptive naïve Bayesian tree splits the dataset by applying an entropy-based algorithm and then uses standard naïve Bayesian classifiers at the leaf nodes to handle attributes. It applies a strategy to construct a decision tree and replaces the leaf nodes with naïve Bayesian classifiers.

                 IV.    PROPOSED LEARNING ALGORITHM

A. Proposed Attribute Weighting Algorithm

    We are given training data D = {A1, A2, ..., An} of attributes, where each attribute Ai = {Ai1, Ai2, ..., Aik} contains attribute values, and a set of classes C = {C1, C2, ..., Cn}, where each class Cj = {Cj1, Cj2, ..., Cjk} has some values. Each example in the training data carries a weight, w = {w1, w2, ..., wn}. Initially, all example weights are set to the equal value wi = 1/n, where n is the total number of training examples. The algorithm estimates the prior probability P(Cj) for each class by summing the weights of the examples of each class in the training data. For each attribute Ai, the number of occurrences of each attribute value Aij is counted by summing the weights, to determine P(Aij). Similarly, the conditional probability P(Aij | Cj) is estimated by summing the weights of the examples of class Cj in which each attribute value occurs; the conditional probabilities P(Aij | Cj) are estimated for all attribute values. The algorithm then uses the prior and conditional probabilities to update the weights, by multiplying the probabilities of the different attribute values of each example. Suppose training example ei has independent attribute values {Ai1, Ai2, ..., Aip}. Knowing the prior probabilities P(Cj) and the conditional probabilities P(Aik | Cj) for each class Cj and attribute Aik, we estimate P(ei | Cj) by

        P(ei | Cj) = P(Cj) Π P(Aij | Cj)                                     (5)

    To update the weight of training example ei, we estimate the likelihood of ei for each class. The probability that ei belongs to a class is the product of the conditional probabilities of its attribute values. The posterior probability P(Cj | ei) is then found for each class, the weight of the example is updated with the highest posterior probability found for that example, and the class value of the example is updated according to that highest posterior probability. Next, the algorithm calculates the information gain using the updated weights and builds a tree. After the tree construction, the algorithm initializes a weight for each attribute in the training data D: if an attribute is not tested in the tree, its weight is set to 0; otherwise, the minimum depth d at which the attribute is tested is calculated and the weight of the attribute is set to 1/d. Finally, the algorithm removes all attributes with zero weight from the training data D. The main procedure of the proposed algorithm is described as follows.

  Algorithm 1: Attribute Weighting
  Input: Training dataset, D
  Output: Decision tree, T
    1.  Initialize all the weights for each example in D, wi = 1/n, where n is the total number of examples.

    2.   Calculate the prior probabilities P(Cj) for each class
         Cj in D:
                  P(Cj) = (Σ_{Ci} wi) / n
    3.   Calculate the conditional probabilities P(Aij | Cj) for
         each attribute value in D:
                  P(Aij | Cj) = P(Aij) / (Σ_{Ci} wi)
    4.   Calculate the posterior probabilities for each example
         in D:
                  P(ei | Cj) = P(Cj) Π P(Aij | Cj)
    5.   Update the weights of examples in D with the Maximum
         Likelihood (ML) of posterior probability P(Cj | ei):
                  wi = PML(Cj | ei)
    6.   Change the class value of examples associated with
         maximum posterior probability: Cj = Ci → PML(Cj | ei).
    7.   Find the splitting attribute with the highest information
         gain using the updated weights wi in D:

         Information Gain =
           − Σ_{j=1..k} [ (Σ_{i∈Cj} wi) / (Σ_{i=1..n} wi) ] log [ (Σ_{i∈Cj} wi) / (Σ_{i=1..n} wi) ]
           − Σ_i [ (Σ_{i∈Ci} wi) / (Σ_{i=1..n} wi) ] ( − Σ_{j=1..k} [ (Σ_{i∈Cij} wi) / (Σ_{i∈Ci} wi) ] log [ (Σ_{i∈Cij} wi) / (Σ_{i∈Ci} wi) ] )        (5)

         That is, the weighted class entropy of D minus the
         weighted average of the class entropies of the partitions
         Ci induced by the candidate split, where Cij denotes the
         examples in partition Ci belonging to class Cj.
    8.   T = Create the root node and label it with the splitting
         attribute.
    9.   For each branch of T, D = the dataset created by
         applying the splitting predicate to D; continue steps 1
         to 8 until each final subset belongs to the same class or
         a leaf node is created.
    10.  When the decision tree construction is completed, for
         each attribute in the training data D: if the attribute is
         not tested in the tree, then the weight of the attribute is
         initialized to 0; else, let d be the minimum depth at
         which the attribute is tested in the tree, and the weight
         of the attribute is initialized to 1/√d.
    11.  Remove all the attributes with zero weight from the
         training data D.

B. Proposed Adaptive NBTree Algorithm
   Given training data D, where each attribute Ai and each
example ei has a weight value, estimate the prior probability
P(Cj) and the conditional probability P(Aij | Cj) from the given
training dataset using the weights of the examples. Then
classify all the examples in the training dataset using these
prior and conditional probabilities, incorporating the attribute
weights into the naïve Bayesian formula:

                  P(ei | Cj) = P(Cj) Π_{i=1..m} P(Aij | Cj)^Wi            (6)

   Where Wi is the weight of attribute Ai. If any example of the
training dataset is misclassified, then for each attribute Ai,
evaluate the utility, u(Ai), of a split on attribute Ai. Let j =
argmax_i(u_i), i.e., the attribute with the highest utility. If u_j is
not significantly better than the utility of the current node,
create a NB classifier for the current node. Otherwise, partition
the training data D according to the test on attribute Ai. If Ai is
continuous, a threshold split is used; if Ai is discrete, a multi-
way split is made for all possible values. For each child, call
the algorithm recursively on the portion of D that matches the
test leading to the child. The main procedure of the algorithm is
described as follows.

  Algorithm 2: Adaptive NBTree
  Input: Training dataset D of labeled examples.
  Output: A hybrid decision tree with naïve Bayesian
  classifiers at the leaves.
  Procedure:
    1.   Calculate the prior probabilities P(Cj) for each class
         Cj in D:
                  P(Cj) = (Σ_{Ci} wi) / n
    2.   Calculate the conditional probabilities P(Aij | Cj) for
         each attribute value in D:
                  P(Aij | Cj) = P(Aij) / (Σ_{Ci} wi)
    3.   Classify each example in D with maximum posterior
         probability:
                  P(ei | Cj) = P(Cj) Π_{i=1..m} P(Aij | Cj)^Wi
    4.   If any example in D is misclassified, then for each
         attribute Ai, evaluate the utility, u(Ai), of a split on
         attribute Ai.
    5.   Let j = argmax_i(u_i), i.e., the attribute with the
         highest utility.
    6.   If u_j is not significantly better than the utility of the
         current node, create a naïve Bayesian classifier for
         the current node and return.
    7.   Partition the training data D according to the test on
         attribute Ai. If Ai is continuous, a threshold split is
         used; if Ai is discrete, a multi-way split is made for all
         possible values.
    8.   For each child, call the algorithm recursively on the
         portion of D that matches the test leading to the child.

           V.    EXPERIMENTAL RESULTS AND ANALYSIS

A. Dataset
    Experiments have been carried out on the KDD99 cup
benchmark network intrusion detection dataset, a predictive
model capable of distinguishing between intrusions and normal
connections [58]. In 1998, in the DARPA intrusion detection
evaluation program, a simulated environment was set up by the
MIT Lincoln Lab to acquire raw TCP/IP dump data for a
local-area network (LAN), in order to compare the performance
of various intrusion detection methods. It was operated like a
real environment, but was blasted with multiple intrusion
attacks, and received much attention in the research community
of adaptive intrusion detection. The KDD99 dataset contest uses
a version of the DARPA98 dataset. In the KDD99 dataset each
example represents attribute values of a class in the network data flow,
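The weighting loop of Algorithm 1 (steps 1 to 5) can be sketched in Python. This is a minimal illustration under simplifying assumptions, not the authors' implementation: attributes are discrete, no smoothing is applied, and the maximum posterior is normalized over the classes before being used as the new example weight. All function names and the toy data are ours.

```python
from collections import defaultdict

def weighted_priors(y, w):
    """Step 2: P(Cj) = weight of examples in class Cj over the total weight."""
    total = sum(w)
    acc = defaultdict(float)
    for yi, wi in zip(y, w):
        acc[yi] += wi
    return {c: s / total for c, s in acc.items()}

def weighted_conditionals(X, y, w):
    """Step 3: P(Aij|Cj) = weighted count of value Aij in class Cj / class weight."""
    class_w = defaultdict(float)
    val_w = defaultdict(float)      # (attribute index, value, class) -> weight
    for xi, yi, wi in zip(X, y, w):
        class_w[yi] += wi
        for a, v in enumerate(xi):
            val_w[(a, v, yi)] += wi
    return lambda a, v, c: val_w[(a, v, c)] / class_w[c]

def posterior_scores(xi, priors, cond):
    """Step 4: unnormalized P(ei|Cj) = P(Cj) * prod_i P(Aij|Cj)."""
    scores = {}
    for c, p in priors.items():
        s = p
        for a, v in enumerate(xi):
            s *= cond(a, v, c)
        scores[c] = s
    return scores

def update_weights(X, y, w):
    """Step 5: replace each example's weight by its maximum posterior
    probability, normalized over the classes."""
    priors = weighted_priors(y, w)
    cond = weighted_conditionals(X, y, w)
    new_w = []
    for xi in X:
        scores = posterior_scores(xi, priors, cond)
        z = sum(scores.values()) or 1.0     # guard against all-zero scores
        new_w.append(max(scores.values()) / z)
    return new_w

# Toy run: two discrete attributes, two classes.
X = [("tcp", "http"), ("tcp", "http"), ("udp", "dns"), ("icmp", "dns")]
y = ["normal", "normal", "attack", "attack"]
w = [1.0 / len(X)] * len(X)                 # step 1: wi = 1/n
w = update_weights(X, y, w)
```

On this cleanly separable toy data every example is classified with full (normalized) posterior confidence, so all updated weights converge to 1; on harder data the weights of ambiguous examples drop below 1 and shrink their influence on the subsequent information-gain computation.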

and each class is labeled either normal or attack. Examples in
the KDD99 dataset are represented with 41 attributes and are
also labeled as belonging to one of five classes as follows: (1)
Normal traffic; (2) DoS (denial of service); (3) Probe,
surveillance and probing; (4) R2L, unauthorized access from a
remote machine; (5) U2R, unauthorized access to local super-
user privileges by a local unprivileged user. In the KDD99
dataset these four attack classes are divided into 22 different
attack classes, tabulated in Table II.

                TABLE II. ATTACKS IN KDD99 DATASET
  4 Main Attack Classes              22 Attack Classes
  Denial of Service (DoS)   back, land, neptune, pod, smurf, teardrop
  Remote to User (R2L)      ftp_write, guess_passwd, imap, multihop, phf,
                            spy, warezclient, warezmaster
  User to Root (U2R)        buffer_overflow, perl, loadmodule, rootkit
  Probing                   ipsweep, nmap, portsweep, satan

    The input attributes in the KDD99 dataset are either discrete
or continuous values and are divided into three groups. The first
group of attributes is the basic features of a network connection,
which include the duration, protocol type, service, number of
bytes from source IP addresses or from destination IP addresses,
and some flags in TCP connections. The second group of
attributes in KDD99 is composed of the content features of
network connections and the third group is composed of the
statistical features that are computed either by a time window or
a window of a certain kind of connections. Table III shows the
number of examples in the 10% training data and 10% testing
data of the KDD99 dataset. There are some new attack examples
in the testing data that are not present in the training data.

  TABLE III. NUMBER OF EXAMPLES IN TRAINING AND TESTING KDD99
                            DATA
          Attack Types    Training Examples     Testing Examples
             Normal             97277                 60592
        Denial of Service       391458               237594
         Remote to User          1126                  8606
          User to Root            52                    70
            Probing              4107                  4166
        Total Examples          494020               311028

B. Performance Measures
    In order to evaluate the performance of the proposed learning
algorithm, we performed 5-class classification using the KDD99
network intrusion detection benchmark dataset and considered
two major indicators of performance: detection rate (DR) and
false positives (FP). DR is defined as the number of intrusion
instances detected by the system divided by the total number of
intrusion instances present in the dataset:

          DR = (Total_detected_attacks / Total_attacks) * 100          (7)

FP is defined as the number of normal instances misclassified as
attacks divided by the total number of normal instances:

          FP = (Total_misclassified_process / Total_normal_process) * 100          (8)

    All experiments were performed using an Intel Core 2 Duo
2.0 GHz processor (2 MB cache, 800 MHz FSB) with 1 GB of
RAM.

C. Experiment and Analysis of the Proposed Algorithm
    Firstly, we use the proposed Algorithm 1 to perform attribute
selection from the training data of the KDD99 dataset, and then
we use the proposed Algorithm 2 for classifier construction. The
performance of our proposed algorithm using 12 attributes of the
KDD99 dataset is listed in Table IV.

TABLE IV. PERFORMANCE OF PROPOSED ALGORITHM ON KDD99 DATASET
        Classes          Detection Rates (%)    False Positives (%)
        Normal                   100                   0.04
         Probe                  99.93                  0.37
          DoS                    100                   0.03
          U2R                   99.38                  0.11
          R2L                   99.53                  6.75

    Table V and Table VI depict the performance of the naïve
Bayesian (NB) classifier and the C4.5 algorithm using the
original 41 attributes of the KDD99 dataset.

   TABLE V. PERFORMANCE OF NB CLASSIFIER ON KDD99 DATASET
        Classes          Detection Rates (%)    False Positives (%)
        Normal                  99.27                  0.08
         Probe                  99.11                  0.45
          DoS                   99.68                  0.05
          U2R                   64.00                  0.14
          R2L                   99.11                  8.12

 TABLE VI. PERFORMANCE OF C4.5 ALGORITHM USING KDD99 DATASET
        Classes          Detection Rates (%)    False Positives (%)
        Normal                  98.73                  0.10
         Probe                  97.85                  0.55
          DoS                   97.51                  0.07
          U2R                   49.21                  0.14
          R2L                   91.65                  11.03

    Table VII and Table VIII depict the performance of the NB
classifier and C4.5 using the reduced set of 12 attributes.

  TABLE VII. PERFORMANCE OF NB CLASSIFIER USING KDD99 DATASET
        Classes          Detection Rates (%)    False Positives (%)
        Normal                  99.65                  0.06
         Probe                  99.35                  0.49
          DoS                   99.71                  0.04
          U2R                   64.84                  0.12
          R2L                   99.15                  7.85

TABLE VIII. PERFORMANCE OF C4.5 ALGORITHM USING KDD99 DATASET
        Classes          Detection Rates (%)    False Positives (%)
        Normal                  98.81                  0.08
         Probe                  98.22                  0.51
          DoS                   97.63                  0.05
          U2R                   56.11                  0.12
          R2L                   91.79                  8.34

    We also compare the intrusion detection performance among
Support Vector Machines (SVM), Neural Network (NN),
Genetic Algorithm (GA), and the proposed algorithm on the
KDD99 dataset, tabulated in Table IX [59], [60].

        TABLE IX. COMPARISON OF SEVERAL ALGORITHMS
                  SVM       NN        GA        Proposed Algorithm
      Normal      99.4      99.6     99.3             99.93
       Probe      89.2      92.7     98.46            99.84
        DoS       94.7      97.5     99.57            99.91
        U2R       71.4       48      99.22            99.47
        R2L       87.2       98      98.54            99.63
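The two measures in equations (7) and (8) can be computed directly from true and predicted labels. The sketch below uses our own helper names, not code from the paper, and adopts one common convention for this dataset: an attack instance counts as detected when it is flagged as any attack class, even if the predicted attack type is wrong.

```python
def detection_rate(y_true, y_pred):
    """Eq. (7): attacks flagged as (any) attack, over all attack instances, x100."""
    attacks = [(t, p) for t, p in zip(y_true, y_pred) if t != "normal"]
    detected = sum(1 for _, p in attacks if p != "normal")
    return 100.0 * detected / len(attacks)

def false_positive_rate(y_true, y_pred):
    """Eq. (8): normals misclassified as attacks, over all normal instances, x100."""
    normals = [(t, p) for t, p in zip(y_true, y_pred) if t == "normal"]
    misclassified = sum(1 for _, p in normals if p != "normal")
    return 100.0 * misclassified / len(normals)

# Toy run with the paper's five class labels.
y_true = ["normal", "normal", "dos", "probe", "r2l"]
y_pred = ["normal", "dos", "dos", "normal", "r2l"]
dr = detection_rate(y_true, y_pred)        # 2 of the 3 attack instances flagged
fp = false_positive_rate(y_true, y_pred)   # 1 of the 2 normal instances flagged
```

Per-class figures such as those in Tables IV to IX follow by restricting `y_true` to one attack class at a time before applying the same formulas.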

             VI.    CONCLUSIONS AND FUTURE WORKS

    This paper presents a hybrid approach to intrusion detection
based on decision-tree-based attribute weighting with naïve
Bayesian tree, which is suitable for analyzing large numbers of
network logs. The main purpose of this paper is to improve the
performance of the naïve Bayesian classifier for network
intrusion detection systems (NIDS). The experimental results
show that the proposed approach can achieve high accuracy in
both detection rates and false positives, as well as balanced
detection performance on all four types of network intrusions in
the KDD99 dataset. Future work will focus on applying the
domain knowledge of security to improve the detection rates for
current attacks in real-time computer networks, and on
ensembles with other mining algorithms for improving the
detection rates in intrusion detection.

                         ACKNOWLEDGMENT
    Support for this research was received from the ERIC
Laboratory, University Lumière Lyon 2, France, and the
Department of Computer Science and Engineering,
Jahangirnagar University, Bangladesh.

                              REFERENCES
[1]  Xuan Dau Hoang, Jiankun Hu, and Peter Bertok, “A program-based
     anomaly intrusion detection scheme using multiple detection engines
     and fuzzy inference,” Journal of Network and Computer Applications,
     Vol. 32, Issue 6, November 2009, pp. 1219-1228.
[2]  P. Garcia-Teodoro, J. Diaz-Verdejo, G. Macia-Fernandez, and E.
     Vazquez, “Anomaly-based network intrusion detection: Techniques,
     systems and challenges,” Computers & Security, Vol. 28, 2009, pp. 18-28.
[3]  Animesh Patcha, and Jung-Min Park, “An overview of anomaly detection
     techniques: Existing solutions and latest technological trends,”
     Computer Networks, Vol. 51, Issue 12, 22 August 2007, pp. 3448-3470.
[4]  Lih-Chyau Wuu, Chi-Hsiang Hung, and Sout-Fong Chen, “Building
     intrusion pattern miner for Snort network intrusion detection system,”
     Journal of Systems and Software, Vol. 80, Issue 10, October 2007, pp.
     1699-1715.
[5]  Chia-Mei Chen, Ya-Lin Chen, and Hsiao-Chung Lin, “An efficient
     network intrusion detection,” Computer Communications, Vol. 33, Issue
     4, 1 March 2010, pp. 477-484.
[6]  M. Ali Aydin, A. Halim Zaim, and K. Gokhan Ceylan, “A hybrid
     intrusion detection system for computer network security,” Computers &
     Electrical Engineering, Vol. 35, Issue 3, May 2009, pp. 517-526.
[7]  Franciszek Seredynski, and Pascal Bouvry, “Anomaly detection in
     TCP/IP networks using immune systems paradigm,” Computer
     Communications, Vol. 30, Issue 4, 26 February 2007, pp. 740-749.
[8]  James C. Foster Jr., Matt Jonkman, Raffael Marty, and Eric Seagren,
     “Intrusion detection systems,” Snort Intrusion Detection and Prevention
     Toolkit, 2006, pp. 1-30.
[9]  Ben Rexworthy, “Intrusion detection systems – an outmoded network
     protection model,” Network Security, Vol. 2009, Issue 6, June 2009, pp.
     17-19.
[10] Wei Wang, Xiaohong Guan, and Xiangliang Zhang, “Processing of
     massive audit data streams for real-time anomaly intrusion detection,”
     Computer Communications, Vol. 31, Issue 1, 15 January 2008, pp. 58-72.
[11] Han-Ching Wu, and Shou-Hsuan Stephen Huang, “Neural network-
     based detection of stepping-stone intrusion,” Expert Systems with
     Applications, Vol. 37, Issue 2, March 2010, pp. 1431-1437.
[12] Xiaojun Tong, Zhu Wang, and Haining Yu, “A research using hybrid
     RBF/Elman neural networks for intrusion detection system secure
     model,” Computer Physics Communications, Vol. 180, Issue 10,
     October 2009, pp. 1795-1801.
[13] Chih-Fong Tsai, and Chia-Ying Lin, “A triangle area based nearest
     neighbors approach to intrusion detection,” Pattern Recognition, Vol.
     43, Issue 1, January 2010, pp. 222-229.
[14] Kamran Shafi, and Hussein A. Abbass, “An adaptive genetic-based
     signature learning system for intrusion detection,” Expert Systems with
     Applications, Vol. 36, Issue 10, December 2009, pp. 12036-12043.
[15] Zorana Bankovic, Dusan Stepanovic, Slobodan Bojanic, and Octavio
     Nieto-Taladriz, “Improving network security using genetic algorithm
     approach,” Computers & Electrical Engineering, Vol. 33, Issues 5-6,
     2007, pp. 438-541.
[16] Yang Li, and Li Guo, “An active learning based TCM-KNN algorithm
     for supervised network intrusion detection,” Computers & Security, Vol.
     26, Issues 7-8, December 2007, pp. 459-467.
[17] Wun-Hwa Chen, Sheng-Hsun Hsu, and Hwang-Pin Shen, “Application
     of SVM and ANN for intrusion detection,” Computers & Operations
     Research, Vol. 32, Issue 10, October 2005, pp. 2617-2634.
[18] Ming-Yang Su, Gwo-Jong Yu, and Chun-Yuen Lin, “A real-time
     network intrusion detection system for large-scale attacks based on an
     incremental mining approach,” Computers & Security, Vol. 28, Issue 5,
     July 2009, pp. 301-309.
[19] Zeng Jinquan, Liu Xiaojie, Li Tao, Liu Caiming, Peng Lingxi, and Sun
     Feixian, “A self-adaptive negative selection algorithm used for anomaly
     detection,” Progress in Natural Science, Vol. 19, Issue 2, 10 February
     2009, pp. 261-266.
[20] Zonghua Zhang, and Hong Shen, “Application of online-training SVMs
     for real-time intrusion detection with different considerations,”
     Computer Communications, Vol. 28, Issue 12, 18 July 2005, pp. 1428-1442.
[21] Su-Yun Wu, and Ester Yen, “Data mining-based intrusion detectors,”
     Expert Systems with Applications, Vol. 36, Issue 3, Part 1, April 2009,
     pp. 5605-5612.
[22] S. R. Snapp, and S. E. Smaha, “Signature analysis model definition and
     formalism,” In Proc. of the 4th Workshop on Computer Security Incident
     Handling, Denver, CO, 1992.
[23] P. A. Porras, and A. Valdes, “Live traffic analysis of TCP/IP gateways,”
     In Proc. of the Network and Distributed System Security Symposium,
     San Diego, CA: Internet Society, 11-13 March 1998.
[24] T. D. Garvey, and T. F. Lunt, “Model based intrusion detection,” In
     Proc. of the 14th National Computer Security Conference, 1991, pp.
     372-385.
[25] F. Carrettoni, S. Castano, G. Martella, and P. Samarati, “RETISS: A real
     time security system for threat detection using fuzzy logic,” In Proc. of
     the 25th IEEE International Carnahan Conference on Security
     Technology, Taipei, Taiwan ROC, 1991.
[26] T. F. Lunt, A. Tamaru, F. Gilham, R. Jagannathan, P. G. Neumann, H. S.
     Javitz, A. Valdes, and T. D. Garvey, “A real-time intrusion detection
     expert system (IDES),” Technical Report, Computer Science
     Laboratory, Menlo Park, CA: SRI International.
[27] S. A. Hofmeyr, S. Forrest, and A. Somayaji, “Intrusion detection using
     sequences of system calls,” Journal of Computer Security, Vol. 6, 1998,
     pp. 151-180.
[28] S. A. Hofmeyr, and S. Forrest, “Immunity by design: An artificial
     immune system,” In Proc. of the Genetic and Evolutionary Computation
     Conference (GECCO 99), Vol. 2, San Mateo, CA: Morgan Kaufmann,
     1999, pp. 1289-1296.
[29] J. M. Bonifacio Jr., A. M. Cansian, A. C. P. L. F. Carvalho, and E. S.
     Moreira, “Neural networks applied in intrusion detection systems,” In
     Proc. of the International Conference on Computational Intelligence
     and Multimedia Applications, Gold Coast, Australia, 1997, pp. 276-280.
[30] H. Debar, M. Becker, and D. Siboni, “A neural network component for
     an intrusion detection system,” In Proc. of the IEEE Symposium on
     Research in Security and Privacy, Oakland, CA: IEEE Computer
     Society Press, 1992, pp. 240-250.
[31] W. Lee, S. J. Stolfo, and P. K. Chan, “Learning patterns from Unix
     process execution traces for intrusion detection,” AAAI Workshop: AI

     Approaches to Fraud Detection and Risk Management, Menlo Park, CA:
     AAAI Press, 1999, pp. 50-56.
[32] W. Lee, S. J. Stolfo, and K. W. Mok, “Mining audit data to build
     intrusion detection models,” In Proc. of the 4th International Conference
     on Knowledge Discovery and Data Mining (KDD-98), Menlo Park, CA:
     AAAI Press, 1998, pp. 66-72.
[33] S. Forrest, S. A. Hofmeyr, A. Somayaji, and T. A. Longstaff, “A sense
     of self for Unix processes,” In Proc. of the 1996 IEEE Symposium on
     Security and Privacy, Oakland, CA: IEEE Computer Society Press,
     1996, pp. 120-128.
[34] James P. Anderson, “Computer security threat monitoring and
     surveillance,” Technical Report 98-17, James P. Anderson Co., Fort
     Washington, Pennsylvania, USA, April 1980.
[35] Dorothy E. Denning, “An intrusion detection model,” IEEE Transactions
     on Software Engineering, SE-13(2), 1987, pp. 222-232.
[36] Dorothy E. Denning, and P. G. Neumann, “Requirement and model for
     IDES - A real-time intrusion detection system,” Computer Science
     Laboratory, SRI International, Menlo Park, CA 94025-3493, Technical
     Report # 83F83-01-00, 1985.
[37] U. Lindqvist, and P. A. Porras, “eXpert-BSM: A host based intrusion
     detection solution for Sun Solaris,” In Proc. of the 17th Annual Computer
     Security Applications Conference, New Orleans, USA, 2001, pp. 240-
[38] W. Fan, W. Lee, M. Miller, S. J. Stolfo, and P. K. Chan, “Using artificial
     anomalies to detect unknown and known network intrusions,”
     Knowledge and Information Systems, 2005, pp. 507-527.
[39] Y. Bouzida, and F. Cuppens, “Detecting known and novel network
     intrusions,” Security and Privacy in Dynamic Environments, 2006, pp.
     258-270.
[40] S. Peddabachigari, A. Abraham, and J. Thomas, “Intrusion detection
     systems using decision trees and support vector machines,” International
     Journal of Applied Science and Computations, 2004.
[41] D. Barbara, N. Wu, and Sushil Jajodia, “Detecting novel network
     intrusions using Bayes estimators,” In Proc. of the 1st SIAM Conference
     on Data Mining, April 2001.
[42] D. Barbara, J. Couto, S. Jajodia, and N. Wu, “ADAM: A testbed for
     exploring the use of data mining in intrusion detection,” Special Interest
     Group on Management of Data (SIGMOD), Vol. 30 (4), 2001.
[43] N. B. Amor, S. Benferhat, and Z. Elouedi, “Naïve Bayes vs. decision
     trees in intrusion detection systems,” In Proc. of the 2004 ACM
     Symposium on Applied Computing, New York, 2004, pp. 420-424.
[44] M. Panda, and M. R. Patra, “Network intrusion detection using naïve
     Bayes,” International Journal of Computer Science and Network
     Security (IJCSNS), Vol. 7, No. 12, December 2007, pp. 258-263.
[45] M. Panda, and M. R. Patra, “Semi-naïve Bayesian method for network
     intrusion detection system,” In Proc. of the 16th International Conference
     on Neural Information Processing, December 2009.
     Recognition (ICPR 2002), Quebec: IEEE Computer Society, 2002, pp.
     568-571.
[52] S. Mukkamala, and A. H. Sung, “Identifying key features for intrusion
     detection using neural networks,” In Proc. of the ICCC International
     Conference on Computer Communications, 2002.
[53] W. Lee, and S. J. Stolfo, “A framework for constructing features and
     models for intrusion detection systems,” ACM Transactions on
     Information and System Security, 3(4), 2000, pp. 227-261.
[54] Y. Li, J. L. Wang, Z. H. Tian, T. B. Lu, and C. Young, “Building
     lightweight intrusion detection system using wrapper-based feature
     selection mechanisms,” Computers & Security, Vol. 28, Issue 6,
     September 2009, pp. 466-475.
[55] Y. Chen, A. Abraham, and B. Yang, “Hybrid flexible neural-tree-based
     intrusion detection systems,” International Journal of Intelligent
     Systems, 22, pp. 337-352.
[56] J. R. Quinlan, “Induction of decision trees,” Machine Learning, Vol. 1,
     1986, pp. 81-106.
[57] R. Kohavi, “Scaling up the accuracy of naïve Bayes classifiers: A
     decision-tree hybrid,” In Proc. of the 2nd International Conference on
     Knowledge Discovery and Data Mining, Menlo Park, CA: AAAI
     Press/MIT Press, 1996, pp. 202-207.
[58] The KDD Archive. KDD99 cup dataset, 1999.
[59] S. Mukkamala, A. H. Sung, and A. Abraham, “Intrusion detection using
     an ensemble of intelligent paradigms,” Journal of Network and
     Computer Applications, 2005, 28(2), pp. 167-182.
[60] S. Chebrolu, A. Abraham, and J. P. Thomas, “Feature deduction and
     ensemble design of intrusion detection systems,” Computers & Security,
     2005, 24(4), pp. 295-307.

                          AUTHORS PROFILE

Dewan Md. Farid was born in Dhaka, Bangladesh in 1979. He is currently a
research fellow at ERIC Laboratory, University Lumière Lyon 2 - France. He
obtained B.Sc. Engineering in Computer Science and Engineering from Asian
University of Bangladesh in 2003 and Master of Science in Computer Science
and Engineering from United International University, Bangladesh in 2004.
He is pursuing a Ph.D. in the Department of Computer Science and
Engineering, Jahangirnagar University, Bangladesh. He is a faculty member in
the Department of Computer Science and Engineering, United International
University, Bangladesh. He is a member of IEEE and the IEEE Computer
Society. He has published 10 international research papers, including two
journal papers, in the fields of data mining, machine learning, and intrusion
detection.

Jérôme Darmont received his Ph.D. in computer science from the University
of Clermont-Ferrand II, France in 1999. He joined the University of Lyon 2,
France in 1999 as an associate professor, and became a full professor in 2008.
[46]   P.V.W. Radtke, R. Sabourin, and T. Wong, “Intelligent feature                     He was head of the Decision Support Databases research group within the
       extraction for ensemble of classifiers,” In Proc. of 8th International            ERIC laboratory from 2000 to 2008, and has been director of the Computer
       Conference on Document Analysis and Recognition (ICDAR 2005),                     Science and Statistics Department of the School of Economics and
       Seoul: IEEE Computer Society, 2005, pp. 866-870.                                  Management since 2003. His current research interests mainly relate to
[47]   R. Rifkin, A. Klautau, “In defense of one-vs-all classification,” Journal         handling so-called complex data in data warehouses (XML warehousing,
       of Machine Learning Research, 5, 2004, pp. 143-151.                               performance optimization, auto-administration, benchmarking...), but also
                                                                                         include data quality and security as well as medical or health-related
[48]   S. Chebrolu, A. Abraham, and J.P. Thomas, “Feature deduction and                  applications.
       ensemble design of intrusion detection systems,” Computer & Security,
       24(4), 2004, pp. 295-307.
[49]   A. Tsymbal, S. Puuronen, and D.W. Patterson, “Ensemble feature                    Mohammad Zahidur Rahma is currently a Professor at Department of
       selection with the simple Bayesian classification,” Information Fusion,           Computer Science and Engineering, Jahangirnager University, Banglasesh. He
       4(2), 2003, pp. 87-100.                                                           obtained his B.Sc. Engineering in Electrical and Electronics from Bangladesh
                                                                                         University of Engineering and Technology in 1986 and his M.Sc. Engineering
[50]   A.H. Sung, and S. Mukkamala, “Identifying important features for
                                                                                         in Computer Science and Engineering from the same institute in 1989. He
       intrusion detection using support vector machines and neural networks,”
                                                                                         obtained his Ph.D. degree in Computer Science and Information Technology
       In Proc. of International Symposium on Applications and the Internet
                                                                                         from University of Malaya in 2001. He is a co-author of a book on E-
       (SAINT 2003), 2003, pp. 209-217.
                                                                                         commerce published from Malaysia. His current research includes the
[51]   L.S. Oliveira, R. Sabourin, R.F. Bortolozzi, and C.Y. Suen, “Feature              development of a secure distributed computing environment and e-commerce.
       selection using multi-objective genetic algorithms for handwritten digit
       recognition,” In Proc. of 16th International Conference on Pattern

                                                                                                                           ISSN 1947-5500
                                                         (IJCSIS) International Journal of Computer Science and Information Security,
                                                         Vol. 8, No. 1, April 2010

Improving Overhead Computation and Pre-processing
        Times for Grid Scheduling System

         Asgarali Bouyer, Mohammad Javad Hoseyni                                            Abdul Hanan Abdullah
                Department of Computer Science                                Faculty of Computer Science and Information Systems
            Islamic Azad University, Miyandoab Branch                                Universiti Teknologi Malaysia
                        Miyandoab, Iran                                                         Johor, Malaysia

Abstract— Computational Grids are enormous environments with heterogeneous resources and stable infrastructures, compared with other Internet-based computing systems. However, managing resources in such systems poses special problems. Scheduler systems need to obtain the latest information about participant nodes from information centers for the purpose of reliable job scheduling. In this paper, we focus on online updating of resource information centers with processed and provided data, based on the assumed hierarchical model. A hybrid knowledge extraction method has been used to classify grid nodes based on prediction of jobs' features. An affirmative point of this research is that scheduler systems do not waste extra time getting up-to-date information about grid nodes. The experimental results show the advantages of our approach compared to other conservative methods, especially its ability to predict the behavior of nodes based on comprehensive data tables on each node.

   Keywords— job scheduling; hierarchical model; grid node module; grid resource information center

                      I.    INTRODUCTION
    In computational grid systems, a job or application can be divided into tasks and distributed to grid nodes. These tasks can be executed independently at the same time, in parallel, to minimize the completion time of job execution. Therefore, grid nodes dynamically share their resources for use by other grid applications. To perform job scheduling and resource management at the Grid level, a meta-scheduler is usually used. A resource scheduler is fundamental in any large-scale Grid environment. The task of a Grid resource broker and scheduler is to dynamically identify and characterize the available resources, and to select and allocate the most appropriate resources for a given job. In a broker-based management system, brokers are responsible for selecting the best nodes and ensuring the trustworthiness of the service provider. Resource selection is an important issue in a grid environment, where consumers and service providers are distributed geographically across multiple administrative domains. Choosing a suitable resource for a user job that meets predefined constraints such as deadline, speedup, and cost of execution is a main problem in grids. Each task has conditions that must be considered by schedulers to select the destination nodes based on the placement of tasks or applications. For example, suitable node selection can reduce communication overhead, cost, makespan, and even execution time. Resource discovery is important but not sufficient because of the dynamic variation in the grid; resource prediction is necessary for a grid system to anticipate the coming status of nodes and their workloads. Therefore, to predict a node's status, schedulers need up-to-date information about nodes. Another problem is how to obtain this up-to-date information. In most grid scheduling systems, there are special centers that maintain the latest information about grid nodes' status, periodically updated by their management sections, such as the Metacomputing Directory Service [1] in the Globus toolkit. In the Globus Toolkit, resource and status information is provided via an LDAP-based network directory called the Metacomputing Directory Service (MDS). It has a grid information service (GIS) that is responsible for collecting and predicting resource status information, such as CPU capacities, memory size, network bandwidth, software availability, and the load of a site in a particular period. The GIS can answer queries for resource information or push information to subscribers [2]. In our research, we have used the GIS idea to maintain nodes' information, but a little differently from Globus' GIS, predicting in a local fashion. For this aim, we use a special component on all participant Grid nodes, called the grid node's module (GNM). In Globus, all information processing is done by the MDS, which does not use local processing for this purpose. In contrast, we use a local information center on each node to maintain complete information, or background, about its status in order to extract knowledge precisely for evaluation and scheduling.

   The rest of this paper is organized as follows. In Section II, the problem formulation is described. Related work on earlier research is reviewed in Section III. Our proposed approach is discussed in Section IV. In Section V, the experimental results and evaluations are presented. Finally, the paper is concluded in Section VI.

                II. PROBLEM FORMULATION
   One motivation of Grid computing is to aggregate the power of widely distributed resources, and provide non-trivial


services to users. To achieve this goal, an efficient Grid scheduling system must be considered an essential part of the Grid. Since the grid is a dynamic environment, predicting and detecting available resources, and then using an economic policy in resource scheduling for coming jobs while considering some sensible criteria, is important in the scheduling cycle. In a Grid environment, prediction of resource availability, allocation of proper nodes to desired tasks, and a fair price adapter for participant nodes are the prerequisites for a reasonable scheduling guarantee. Many approaches for grid meta-schedulers are discussed from different points of view, such as static and dynamic policies, objective functions, application models, adaptation, QoS constraints, and strategies dealing with the dynamic behavior of resources; these have some weaknesses (e.g., time complexity, prediction problems, use of out-of-date data, unfairness, unreliability, inflexibility, etc.). Based on current research, a new approach has been proposed as a helpful tool for the meta-scheduler to perform dynamic and intelligent resource scheduling for the grid while considering some important criteria such as dynamism, fairness, response time, and reliability.

     The job scheduling problem is defined as the process of making scheduling decisions for the tasks of a job based on grid resources and services. The grid scheduling problem is formally represented by a set of given tasks and resources. A grid system is composed of a set of nodes N = {N_1, N_2, ..., N_n}, and each node consists of several resources, that is, N_i = {R_1^i, R_2^i, ..., R_r^i}; each resource often appears in all nodes with different characteristics. The set of jobs given in a time period T consists of several jobs with different characteristics, that is, J = {J_1, J_2, ..., J_j}, belonging to c consumers C = {C_1, C_2, ..., C_c}. Each job is divided into several tasks, that is, J_i = {T_1^i, T_2^i, T_3^i, ..., T_t^i}. The main objective in most scheduling systems is to design a scheduling policy for submitted jobs with the goal of maximizing throughput and efficiency while minimizing job completion times. Job scheduling is generally broken down into three steps:
1- Defining a comprehensive and versatile method and dividing the job fairly between grid nodes.
2- Allocating tasks to the computing nodes based on user requirements and grid facilities.
3- Monitoring the running grid tasks on the nodes over time, together with reliability factors.
   With a large number of users attempting to execute jobs concurrently on the computing grid, the parallelism of the applications and their respective computational and storage requirements are all issues that make the resource scheduling problem difficult in these systems.

                       III.   RELATED WORKS
    Condor's Matchmaker [3-5] adopts a centralized mechanism to match advertisements between resource requesters and resource providers. However, these centralized servers can become bottlenecks and points of failure, so the system does not scale well as the number of nodes increases.

    AppLeS (Application Level Scheduling) [6] focuses on developing scheduling agents for individual Grid applications. These agents use application-oriented scheduling and select a set of resources taking into consideration application and resource information. AppLeS is well suited to a Grid environment with its sophisticated NWS [7] mechanism for collecting system information [8]. However, it performs resource discovery and scheduling without considering resource owners' policies, and it does not have powerful resource managers that can negotiate with applications to balance the interests of different applications [8]. EMPEROR [9] provides a framework for implementing scheduling algorithms based on performance criteria. The implementation is based on the Open Grid Services Architecture (OGSA) and makes use of common Globus services for tasks such as monitoring, discovery, and job execution. EMPEROR is focused on resource performance prediction; it is not distributed, nor does it support economic allocation mechanisms.

    Singh et al. [10] proposed an approach to the Grid resource management problem aimed at obtaining guarantees on the allocation of resources to task-graph-structured applications. In that research, resource availabilities are advertised as priced time slots, and the authors presented the design of a resource scheduler that generates and advertises the time slots. Moreover, Singh et al. demonstrated that their proposed framework (incorporating resource reservation) can deliver better performance for applications than the best-effort approach.

    Another work, by Chao et al., is a coordination mechanism based on group selections of self-organizing agents operating in a computational Grid [18]. The authors argued that, due to the scale and dynamicity of computational Grids, the availability of resources, and their varying characteristics, manual management of Grid resources is a complex task, and that automated and adaptive resource management using self-organizing agents is a possible solution to this problem. The authors pointed out that, for Grid resource management, examples in which performance enhancement can be achieved through agent-based coordination include decision making in resource allocation and job scheduling, and policy coordination in virtual organizations.

    Kertész and Kacsuk have argued that there are three possible levels of interaction to achieve interoperability in Grids: the operating system level, the Grid middleware level, and the higher-level services level [11]. The authors described three approaches to address the issue of Grid interoperability, namely: 1) extending current Grid resource management systems; 2) multi-broker utilization; and 3) meta-brokering. In extending current Grid resource management systems, they developed a tool called GTBroker that interacts with Globus resources and performs job submission. Their meta-brokering service is designed to determine which Grid broker is best selected, concealing the differences in utilizing them. In addition to the meta-brokering service, they proposed the Broker Property Description Language (BPDL), which is designed for expressing metadata


about brokers. The authors have also implemented their ideas in the Grid meta-broker architecture, which enables users to access resources of different Grids through their own broker(s).

    Many other considerable approaches, such as hierarchical grid resource management [12], a new prediction-based method for dynamic resource provisioning and scaling of MMOGs in grids [13], and aggregated resource information for resource selection by grid brokers [14], have been offered with considerable ideas that are recommended to researchers as promising methods.

   IV.    GRID NODE'S MODULE FOR OPTIMIZED SCHEDULING
    Most grid scheduling systems consist of two main components: nodes and schedulers. Schedulers can be categorized as local schedulers and meta-schedulers. In some earlier methods [3, 15-18], the meta-scheduler, as the main component, is responsible for job scheduling. However, there are other scheduling methods [19-23] in which local schedulers perform most of the job scheduling steps. These methods have not exploited the grid nodes themselves within the scheduling system; they only map jobs to nodes. In this section, we devolve some steps of the scheduling process to the grid nodes, i.e., to all participant nodes in the grid system.

    Figure 1. A hierarchical architecture for optimized scheduling. (The figure shows a Grid Node Module, with knowledge extraction (RS and CBR), a status announcer and adjustment component, task management, and a sender/receiver over a local DB; a Local-Scheduler module, with a job manager, fault tolerance provider, info collector, and bidding services over an LS-DB; and a Meta-Scheduler module, with a coordinator layer, auction manager, new arrival job queues, and fault tolerance management over an MS-DB.)

   A general architecture of the grid scheduling system is depicted in Fig. 1. Since this architecture uses an auction mechanism by the meta-scheduler and participant local schedulers for job submission and resource allocation, like other methods, we only focus on the Grid Node's Module (GNM) as the significant part of our research. Note that the model described here does not prescribe any implementation details: the protocols, programming languages, operating systems, user interfaces, and other components. The proposed architecture uses a hierarchical model with minimum communication cost and time.

    In this research, the knowledge extraction module is devolved to the Provider Node (PN). In many approaches [24], the needed information is gathered in special places to be managed by Grid Resource Brokers or Meta-Schedulers, which surely takes much time or suffers from the problem of out-of-date information. Here, the proposed module for the provider node saves all required information in a local database and performs knowledge extraction in a local fashion. Finally, the summarized information about each grid node's status is saved in the local scheduler's data tables and is dynamically updated by an online method [25]. An illustration of the GNM is depicted in Fig. 2 with more details.

    Figure 2. The Grid Node's Module (GNM) with more details. (The figure shows a knowledge extraction component and a local DB feeding an announcer section (urgent changes, price adjusting, node status announcing), a task submission path (pre-processing, preparing messages, updating the data file), and an input/output interface.)

A. Knowledge Extraction
    The applied methods for knowledge extraction are Rough Set theory and Case-Based Reasoning: the GNM uses the Case-Based Reasoning (CBR) technique together with Rough Set analysis. These techniques work together to discover knowledge in order to supply a satisfactory recommendation to send to the local scheduler. The exploration is based on previous data, or experiments, in the "node data table", which is helpful for making better decisions. The use of multiple knowledge extraction methods is beneficial because the strengths of the individual techniques can be leveraged. In previous research [18], we proposed a learning method for resource allocation based on a fuzzy decision tree. We observed that this method has strong potential to increase accuracy and reliability if the job length is large. However, in this section, we use a hybrid of CBR and RS to obtain exact knowledge while also considering economic aspects. This section is divided into three sub-sections: the Rough Set Analyzer, the Case-Based Reasoning method, and the calculation of some information for computing priority.
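As an illustration of the CBR step described above, the following sketch predicts a coming task's outcome from its nearest past cases in a node data table. The record layout (task size, CPU load, free memory), the Euclidean distance, and the majority vote are our own assumptions for exposition, not the paper's actual implementation:

```python
# Hedged sketch of case-based reasoning over a node's history of task
# executions: find the k most similar past cases and return the majority
# outcome. Feature names and outcomes are illustrative assumptions.

def predict_outcome(cases, query, k=3):
    """cases: list of (features, outcome) pairs, where features is a numeric
    tuple such as (task_size, cpu_load, free_memory_gb).
    Returns the majority outcome among the k nearest past cases."""
    def dist(a, b):
        # Euclidean distance between two feature tuples
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(cases, key=lambda case: dist(case[0], query))
    top = [outcome for _, outcome in ranked[:k]]
    return max(set(top), key=top.count)

# Toy node data table: small, lightly loaded tasks succeeded; large tasks
# on a heavily loaded node missed their deadlines.
cases = [
    ((10, 0.2, 4.0), "success"),
    ((12, 0.3, 3.5), "success"),
    ((50, 0.9, 0.5), "deadline_miss"),
    ((45, 0.8, 0.7), "deadline_miss"),
]
prediction = predict_outcome(cases, (11, 0.25, 3.8))
```

In the hybrid scheme described above, a rough-set stage would first prune the table to consistent, relevant records (the selection of a "best training set") before this neighbor search runs.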

                                                                                                                                                ISSN 1947-5500
                                                          (IJCSIS) International Journal of Computer Science and Information Security,
                                                          Vol. 8, No. 1, April 2010

    We used Rough Set (RS) theory [26] to generate rules that the GNM analyzes in order to classify the nodes suitable for the CBR method. Rough set analysis provides an effective means of analyzing data by synthesizing or constructing upper and lower approximations of set concepts from the acquired data. It has also proved very useful for analyzing decision problems in which objects are described in a data table by a set of condition attributes and decision attributes. The goal of using rough set theory in this research is to generate useful rules for classifying similar states, which CBR then applies to determine the best state for a node to accept or reject a new task. Our purpose is to carry out resource selection for the desired job, based on the job's conditions, in the scheduling phase. To this end, we use a Rough Set Analyzer (RSA) to generate rules. It takes the node's information data table as input; the output is three matrices (the generated rules are represented in matrix form).

    The RSA uses three important attributes (final status of the task, completion time, and cost price) as decision attributes. These attributes can act as both the condition attributes and the decision attribute of a decision system. The application uses only one of these attributes at a time as the decision attribute, while the other two are treated as condition attributes. For example, if the dependability or speed factor is more important, the second or third attribute, respectively, is chosen as the decision attribute. Other condition attributes are mentioned in the next section. In addition, the RSA needs to discretize the input data for some attributes. Since the RSA takes some analysis time to perform the rough set method, though not a considerable amount, the question arises: when should the RSA execute a rough set analysis? We impose two conditions for running it:

    1) The number of tasks newly added to this node exceeds 1% of the tasks submitted in the past days.
    2) No rough set analysis has been performed in the last 24 hours.

    Case-based Reasoning is a technique that adapts past solutions to new demands, using earlier cases to explain, criticize, and interpret novel situations for a new problem [27]. The basic CBR process is defined as a cycle with the following steps: RETRIEVE the most similar cases; REUSE the cases to solve the problem; REVISE the proposed solution; RETAIN the modified solution as a new case. All of these steps must be completed to obtain satisfactory knowledge. This raises another question: when should Case-based Reasoning be executed? Equivalently, when do the nodes receive the information of a new task (or job)? During online resource selection by the local scheduler, the job information is sent to all nodes. In [28], an optimized version of Case-Based Reasoning was proposed to increase the accuracy of the final results. That method applies the CBR algorithm with a decision tree in order to select suitable samples, and its key success was the improved accuracy criterion. However, because the input data are classified by data mining techniques such as decision trees, selecting the training set takes an amount of time that is not negligible for online processes. Therefore, to reduce this overhead, we use rough set rules to classify and define the training set. This consists of two steps: 1) selecting the rules consistent with the job in order to obtain the desired samples (records) that define the training set, so that the best training set can be selected; and 2) final processing, predicting the situation of the incoming job from its neighboring records (those in the same class).

    After CBR has run, the obtained knowledge about the job (i.e., about executing the job on this node) is sent to the scheduler. In the next sections, we describe how the local scheduler uses this extracted knowledge for resource allocation.

B. Task Management
    Since the capacity of each node's resources changes from moment to moment, a new task must be examined before submission, because the existing capacity may be insufficient to complete it within the specified deadline. In that case, the task is not inserted into the queue, and rejection information is sent to the local scheduler (LS). This check is performed after CBR execution, and its result is sent along with the knowledge extracted by CBR. In contrast, if the existing resources are sufficient for the task, it is submitted to the provider node's queue of grid tasks. All information about the task is also inserted into the related data table as a new record. At this time, the GNM records several important properties, such as CPU load, free memory (RAM), task ID, size of the new task, priority of the new task (we consider only three priorities: Low, Normal, and High), number of all grid tasks in waiting status, the node's Data Transmission Rate (DTR) in the grid (the DTR can occasionally fluctuate sharply), start time of task execution, time spent on the task, completion time, and task status (wait, running, success, or fail). Some of this information (e.g., spent and completion time, task status, and so on) is updated after a task finishes. In our approach, a task has four states: wait, running, fail, and success. After a task is submitted to the queue, it first takes the wait state. When the task starts executing, its state changes to running until it terminates. After finishing successfully, the task's state changes to success. A task may instead enter the fail state due to various software or hardware problems. In the end, if the task executes successfully, the complete result is returned to the LS.

C. Announcer Section
    This is the most important section of the GNM. It is responsible for deciding whether the node is ready to accept a new task or must refuse it. The Announcer Section (AS) analyzes the grid task queue and the node's own status (described above) to determine the node's upcoming status; for example, it may determine that the node cannot accept any new task in the next two hours. This section re-analyzes the node's status after every submission. It evaluates the deadlines and execution times of previously submitted tasks (in the waiting and running states) to determine how many processes will finish in the near future and, assuming those processes finish, when the node will be able to accept new tasks. In addition, some high-priority local processes may join the current processes in the near future (e.g., a virus scan starting automatically, auto-saves or backups by some application, and so forth). Thus, the AS has to consider all possible statuses to reach the best decision. This


process is carried out by a sub-section called the Node Status Announcing (NSA) module.

    The NSA module also computes related information about the node, such as the Success Ratio, Average Completion Time (ACT), average CPU idle percentage, and average free memory (RAM), and sends it along with the other obtained results to the local scheduler. For instance, the Success Ratio and ACT measures are computed as follows:

    Success Ratio = Ns / Na

    ACT_k = (Σ_{i=1}^{n} GTp_i) / n        (1)

where GTp_i is the completion time of task i, n is the number of tasks completed successfully by node k, Ns is the number of successfully completed tasks, and Na is the number of successful plus failed tasks.

    Note that aborted tasks are different from failed tasks. A fail event occurs because of a node problem, such as a software, hardware, deadline, or budget issue, whereas an abort is performed by the scheduler, for instance when the consumer cancels the job, and the executing node has no problem continuing the job. Aborted tasks are therefore considered neutral and are not taken into account when measuring the Success Ratio.

    Sometimes a node encounters unpredictable situations. For example, suppose a node is ready to accept new tasks but its resources are unexpectedly occupied by local tasks (OS processes); the node then cannot accept a new task until it returns to a normal state. In this case the Urgent Change section, a sub-section of the Announcer Section, changes the node's status to non-acceptance and informs the scheduler of this change. Once the node returns to the normal state, this section announces that to the local scheduler as well.

    Another sub-section is the Price Adjusting section. This module is responsible for determining the node's price based on standard factors and the current node status. For example, if the price computed from the standard parameters for one minute is α, this module can adjust it according to the current status, such as the number of currently submitted tasks (in the waiting state) or the ratio of successful to failed tasks in the last day and week. Note that, to respect the interests of both grid owners and grid customers, the price may only be raised or lowered within the following range:

    α·(1 − p) < Offered Price < α·(1 + p),  where α is the standard price and 0 ≤ p ≤ 0.5.

    Finally, this offered price is sent to the local scheduler; the price offered by a provider node is therefore always dynamic.

    To observe the effect of the GNM architecture, we used the GridSim simulator [29]. GridSim supports hierarchical and economy-based grid scheduling systems and serves as the reference simulation environment for much significant research, such as [30-32]. We compare our results with the job scheduling algorithms proposed in [18, 33].

    Four important measures were evaluated in the simulation: dependability (reliability), prediction accuracy, success ratio, and iteration of jobs on other nodes. In GridSim, each simulated interactive component (e.g., a resource or user) is an entity that must inherit from the GridSim class and override its body() method to implement the desired behavior. The Input and Output classes in GridSim handle interaction with other entities; each has its own body() method to handle incoming and outgoing events, respectively. Entities modeled in GridSim include resources, users, information services, and network-based I/O. A resource, called a provider node in our method, is characterized by its number of processors, processing speed (a CPU rate specialized for grid tasks), tolerance of price variation, real data transmission rate per second, RAM capacity, monetary unit, and time zone. Furthermore, the node's price is computed from these characteristics; the tolerance of price variation is a parameter that allows a discount on the node's price for some low-budget jobs. For each resource, CPU speed is specified in MIPS (million instructions per second). Each property is defined in an object of the ResourceCharacteristics class. The flow of information from other entities via the I/O port can be processed by overriding the processOtherEvent() method. We used a uniform allocation method for all nodes [23].

    For our tests, we define three different local schedulers (Table I) and three groups of jobs (Table II); the particular features of each job group are given in Table II. In our previous work [18], the nodes' specifications and performance were collected from a real local grid. In this research, an updated version of that data table is used for the simulated nodes, so we do not describe the nodes' properties again.

    Each group of jobs is submitted at different times to all three local schedulers. Note that the tasks of a job are submitted in parallel to the available nodes under each local scheduler. For example, Job_Group1 is composed of 250 tasks of 45500 million instructions each, and each task has on average a 1200-second deadline for complete execution. Each group of jobs was tested 15 times separately on each local scheduler's nodes.

    Since most published scheduling systems and algorithms have been tested and evaluated under their authors' specific assumptions and parameters, no one can claim that his or her method is the best. Nevertheless, in this research we tested our approach in the GridSim simulator, extending the resource entity to model real node behavior. The following experiments show the effect of using the GNM in the node selection step of the local schedulers. Fig. 3 compares the number of completed tasks for the new approach and the recent work [18].

    The analysis of the results in Fig. 3 shows that, by using rough-set-based case-based reasoning in the grid node module (which need not run online), the schedulers' workload is reduced, and with it the overhead time of node selection. Consequently, as seen in Fig. 3, the overhead time for node selection, start-up, and result gathering, plus the execution time, for the newly proposed approach is less than the


considered deadline for each task of Job_Group3 on local scheduler LS1, whereas under the fuzzy-based scheduling method [18] this total exceeds the task deadline. Therefore, no task of Job_Group3 can finish within the considered deadline on LS1 under the fuzzy-based scheduling method.

    However, the decision accuracy of the new approach was lower than that of the other methods for jobs whose reliability and completion-time factors had nearly equal priority, because of the particular priority computation performed by each method.
                 TABLE I.     THE DESCRIPTION OF THE CONSIDERED LOCAL SCHEDULERS.

    Local scheduler (LS) name | Number of available nodes | Medium node's GMIPS (CPU MIPS allocated for grid tasks) | Current deadline (sec) | Queue depth
    LS1                       | 400                       | 65                                                      | 460                    | 2
    LS2                       | 320                       | 140                                                     | 350                    | 3
    LS3                       | 750                       | 80                                                      | 400                    | 5
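The Success Ratio and ACT measures of Eq. (1) can be sketched in a few lines; the function and variable names below are ours, not the paper's:

```python
# Sketch of the NSA module's measures from Eq. (1). Aborted tasks are treated
# as neutral: they appear in neither Ns nor Na.

def success_ratio(n_success: int, n_failed: int) -> float:
    """Ns / Na, where Na counts successful plus failed tasks only."""
    total = n_success + n_failed
    return n_success / total if total else 0.0

def average_completion_time(completion_times: list[float]) -> float:
    """ACT_k = (sum of GTp_i for i = 1..n) / n over the n successful tasks of node k."""
    return sum(completion_times) / len(completion_times) if completion_times else 0.0

# Example: 8 successes and 2 failures (any aborted tasks are simply not counted).
print(success_ratio(8, 2))                       # 0.8
print(average_completion_time([100, 200, 300]))  # 200.0
```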

                                  TABLE II.     SAMPLE OF JOBS.

    Group of jobs (name) | Number of jobs     | Deadline for each task (sec) | Memory for each task (MB) | Task length (Million Instructions, MI) | Reliability priority | Completion-time priority
    Job_Group1           | 5 jobs (250 tasks) | 1200                         | 1.93                      | 45500                                  | 0.8                  | 0.2
    Job_Group2           | 3 jobs (210 tasks) | 2100                         | 3.4                       | 72000                                  | 0.3                  | 0.7
    Job_Group3           | 5 jobs (100 tasks) | 900                          | 6.25                      | 30000                                  | 0.5                  | 0.5
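As a rough cross-check of Tables I and II, a task's raw execution time is its length in MI divided by the node's allocated MIPS. The sketch below deliberately ignores the scheduling and communication overheads discussed in the text, so it gives only a lower bound:

```python
# Back-of-envelope deadline check combining Tables I and II.

def execution_time_s(task_length_mi: float, node_mips: float) -> float:
    """Raw execution time in seconds: million instructions / MIPS."""
    return task_length_mi / node_mips

def meets_deadline(task_length_mi: float, node_mips: float, deadline_s: float) -> bool:
    """True if the raw execution time alone fits inside the deadline."""
    return execution_time_s(task_length_mi, node_mips) <= deadline_s

# A Job_Group1 task (45500 MI) on a medium LS1 node (65 MIPS), 1200 s deadline:
print(execution_time_s(45500, 65))      # 700.0 seconds
print(meets_deadline(45500, 65, 1200))  # True: feasible if overhead stays under 500 s
```

This illustrates why overhead matters in the results above: the raw execution leaves limited slack, and a costly node-selection step can consume it.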

    [Figure 3: bar chart over LS1, LS2, LS3; series: New approach, Fuzzy based scheduling [18]; completion ratios roughly 0.89 to 0.95.]

    Figure 3. The ratio of completed tasks of jobs on the three local schedulers for the new approach and the earlier work [18], based on the job groups in Table II.

    [Figure 4: bar chart over LS1, LS2, LS3; y-axis: Accuracy of prediction; series: New approach, Fuzzy based scheduling [18], TRF method [33].]

    Figure 4. The comparison of prediction accuracy for the new approach and the other two methods.
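The "accuracy of prediction" measure compared in Fig. 4 can be read as the fraction of node-status predictions that matched the actual task outcomes. The paper does not give its exact formula, so the counting scheme below is our assumption:

```python
# Assumed accuracy measure: share of predicted outcomes that matched reality.

def prediction_accuracy(predicted: list[str], actual: list[str]) -> float:
    """Fraction of positions where the predicted outcome equals the observed one."""
    if len(predicted) != len(actual):
        raise ValueError("prediction and outcome lists must align")
    if not actual:
        return 0.0
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

# Three of four node-status predictions turned out correct:
print(prediction_accuracy(["success", "fail", "success", "success"],
                          ["success", "fail", "fail", "success"]))  # 0.75
```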




    [Figure 5: bar chart of task iterations per job group (Job_Group1 to Job_Group3) on LS1 to LS3; series: New approach, Fuzzy based scheduling [18].]

    Figure 5. The evaluation of task iteration in the new approach and the other method.
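The iteration (re-execution) behaviour evaluated in Fig. 5 amounts to a feasibility test: a task from a faulted node is restarted on a new node only if the remaining deadline still covers the expected execution time plus the scheduler's overhead. The decision rule and the figures below are our illustration, not the paper's algorithm:

```python
# Sketch of the iteration decision: a retry is worthwhile only if it can still
# meet the task's deadline once node-selection overhead is paid again.

def can_iterate(remaining_deadline_s: float,
                expected_exec_s: float,
                scheduling_overhead_s: float) -> bool:
    """True if restarting on a replacement node can still meet the deadline."""
    return expected_exec_s + scheduling_overhead_s <= remaining_deadline_s

# Low-overhead selection (as claimed for the new approach) vs. a costlier one:
print(can_iterate(800, 700, 50))   # True: the retry fits inside the deadline
print(can_iterate(800, 700, 150))  # False: the retry would miss the deadline
```

This matches the discussion below: a method with lower selection overhead leaves more deadline slack, and so can iterate failed tasks more often.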

    The evaluation in Fig. 4 shows that the fuzzy-based scheduling method has better prediction accuracy than the new approach and the TRF method [33], because it makes online decisions with up-to-date knowledge, at the cost of spending more time on knowledge extraction. The TRF method, however, is acceptable when the scheduler wants to select a small number of nodes from a large pool of available nodes; moreover, the overhead time of TRF is lower than that of both other methods and close to that of our new approach.

    By iteration, this paper means replacing a faulted node with a new node and restarting the job there. Since the overhead time of the fuzzy-based method is high, a task without sufficient remaining deadline cannot be iterated on a new node and therefore fails. According to the results in Fig. 5, the new approach achieves more task iterations than the fuzzy-based scheduling method, and so, measured by the number of iterations, it performs somewhat better than the fuzzy-based method.

                           VI.   CONCLUSION
    Given the weaknesses of earlier grid resource discovery approaches, such as using out-of-date information for job scheduling, long elapsed times for task submission, and considerable communication overhead and cost, this paper uses an optimal strategy to improve resource discovery

                           REFERENCES
    1.  The Globus Project, http://www.glob
    2.  Ferreira, L., et al., Introduction to Grid Computing with Globus. 2002.
    3.  Frey, J., et al., Condor-G: A Computation Management Agent for Multi-Institutional Grids. In 10th IEEE Symposium on High Performance Distributed Computing (HPDC10). 2001. San Francisco, CA.
    4.  Project, C., http://ww /condor/condorG/. [cited.
    5.  Douglas, T., T. Todd, and L. Miron, Distributed computing in practice: the Condor experience: Research Articles. Concurr. Comput.: Pract. Exper., 2005. 17(2-4): p. 323-356.
    6.  Berman, F., et al., Adaptive computing on the grid using AppLeS. IEEE Transactions on Parallel and Distributed Systems, April 2003. 14(4): p. 369-382.
    7.  Wolski, R., N. Spring, and J. Hayes, The Network Weather Service: A Distributed Resource Performance Forecasting Service for Metacomputing. Future Generation Computing Systems, Metacomputing Special Issue, October 1999. 15(5-6): p. 757-768.
    8.  Vadhiyar, S.S. and J.J. Dongarra, A metascheduler for the Grid. In 11th IEEE International Symposium on High Performance Distributed Computing (HPDC-11). 2002.
    9.  Adzigogov, L., J. Soldatos, and L. Polymenakos, EMPEROR: An OGSA grid meta-scheduler based on dynamic resource predictions. Journal of Grid Computing, 2005. 3(1-2): p. 19-37.
    10. Singh, G., C. Kesselman, and E. Deelman, An End-to-End Framework for Provisioning-Based Resource and Application Management. Systems
   iciency. In this approach, all information ab
effi              s                                          s
                                               bout grid nodes is                         nal,
                                                                                     Journ IEEE, 2009. 3     3(1): p. 25-48.
   t                             o
not necessarily aggregated on grid resou       urce informatiion                 11. Kerteesz, A. and P. Kacsuk, Grid Interoperability Solutions in Grid
  nters. We used a local data table on each grid node to
cen                d             a                                                   Reso                      t.              l,                 ):
                                                                                         ource Management Systems Journal IEEE, 2009. 3(1) p. 131-141.
mai                rmation about grid node’s s
    intain all infor                           status. Moreovver,                12. Edua                    recursive architect
                                                                                         ardo, H., et al., A r                                   cal
                                                                                                                               ture for hierarchic grid resource
grid resource infoormation center maintain a su
                                 rs            ummary of up-to-                      mana agement. Future G  Gener. Comput. Sy 2009. 25(4): p. 401-405.
date information that continua                 d             s
                                 ally is updated if there was a                  13. Radu P. and N. Vlad Prediction-based real-time resour provisioning
                                                                                         u,                  d,                 d                 rce
signnificance changing in node’s resources or pperformance. TThe                     for mmassively multipl                    es.
                                                                                                              layer online game Future Gener. Comput. Syst.,
                                                                                     2009 25(7): p. 785-79   93.
exp                ults           hat
   perimental resu explain th our appro        oach reduces t the
                                                                                 14. Ivan, R., et al., Grid broker selection str
                                                                                                                               rategies using aggrregated resource
ove               nd              e
   erhead time an improves the resource disc   covery efficien
                                                             ncy                     infor
                                                                                         rmation. Future Ge                    st.
                                                                                                              ener. Comput. Sys 26(1): p. 72-86.
in a grid system.
                                                                                 15. Casaanova, H., et al., Adaptive Schedul   ling for Task Far rming with Grid
                                                                                     Midddleware. Internati   ional Journal of Supercomputer A  Applications and
                                                                                     Highh-Performance Com     mputing, 1999.
                                                                                                                    ISSN 1947-5500
                                                                      (IJCSIS) International Journal of Computer Science and Information Security,
                                                                      Vol. 8, No. 1, April 2010

16. Chard, K. and K. Bubendorfer, A Distributed Economic Meta-scheduler
    for the Grid, in Cluster Computing and the Grid, 2008. CCGRID '08. 8th
    IEEE International Symposium on May 2008. p. 542-547
17. Bouyer, A., E. Mohebi, and A.H. Abdullah, Using Self-Announcer
    Approach for Resource Availability Detection in Grid Environment, in
    Fourth International Multi-Conference on Computing in the Global
    Information Technology - ICCGI2009. 23-29 Aug. 2009 IEEE Computer
    Society: Cannes, La Bocca , France. p. 151-156.
18. Bouyer, A., et al., A new Approach for Selecting Best Resources Nodes by
    Using Fuzzy Decision Tree in Grid Resource Broker. International Journal
    of Grid and Distributed Computing, 2008. 1(1): p. 49-62.
19. A. Bouyer, M.K., M. Jalali. An online and predictive Method for Grid
    Scheduling based on Data Mining and Rough Set. in ICCSA2009: 9th
    International Conference on Computational Science and Its Applications.
    June 2009. South Korea: Springer-Verlag Berlin Heidelberg.
20. Bouyer, A. and S.M. Mousavi, A Predictive Approach for Selecting
    Suitable Computing Nodes in Grid Environment by Using Data Mining
    Technique, in Advances in Computational Science and Engineering. 2009,
    Springer-Verlag Berlin Heidelberg. p. 190-205.
21. Gorde, N.B. and S.K. Aggarwal, A Fault Tolerance Scheme for Hierarchical
    Dynamic Schedulers in Grids. in International Conference on Parallel
    Processing - Workshops, 2008. ICPP-W '08. Sept. 2008.
22. Li, C. and L. Li, Utility-based scheduling for grid computing under
    constraints of energy budget and deadline. Computer Standards &
    Interfaces, 2009. 31(6): p. 1131-1142.
23. Li, Z.-j., C.-t. Cheng, and F.-x. Huang, Utility-driven solution for optimal
    resource allocation in computational grid. Computer Languages, Systems
    & Structures, 2009. 35(4): p. 406-421.
24. Young Choon, L. and Y.Z. Albert, Practical Scheduling of Bag-of-Tasks
    Applications on Grids with Dynamic Resilience. IEEE Trans. Comput.,
    2007. 56(6): p. 815-825.
25. Bouyer, A., E. Mohebi, and A.A. Hanan, Using Self-Announcer Approach
    for Resource Availability Detection in Grid Environment, in Proceedings
    of the 2009 Fourth International Multi-Conference on Computing in the
    Global Information Technology. 2009, IEEE Computer Society.
26. Pawlak, Z., J. Grzymala-Busse, R. Slowinski, and W. Ziarko, Rough Sets.
    Communications of the ACM, 1995: p. 89-95.
27. Kolodner, J., Case-Based Reasoning. 1993: Morgan Kaufmann.
28. Bouyer, A., B. Arasteh, and A. Movaghar, A new Hybrid Model using
    Case-Based Reasoning and Decision Tree Methods for improving
    Speedup and Accuracy, in 4th Iadis International Conference Applied
    Computing 2007. 2007: Spain. p. 787-789.
29. Buyya, R. and M. Murshed, GridSim: a toolkit for modeling and
    simulation of grid resource management and scheduling. Concurrency
    and Computation: Practice and Experience, 2002. 14: p. 1175-1220.
30. Foster, I., C. Kesselman, and S. Tuecke, The anatomy of the grid:
    enabling scalable virtual organizations. International Journal of
    Supercomputer Applications, 2001. 15: p. 200-222.
31. Li, H. and R. Buyya, Model-based simulation and performance
    evaluation of grid scheduling strategies. Future Generation Computer
    Systems, 2009. 25(4): p. 460-465.
32. Leal, K., E. Huedo, and I.M. Llorente, A decentralized model for
    scheduling independent tasks in Federated Grids. Future Generation
    Computer Systems, 2009. 25(8): p. 840-852.
33. Brent, R. and J.L. Michael, Resource Availability Prediction for Improved
    Grid Scheduling, in Proceedings of the 2008 Fourth IEEE International
    Conference on eScience. 2008, IEEE Computer Society.
     The New Embedded System Design Methodology
       For Improving Design Process Performance
                  Maman Abdurohman                                                              Sarwono Sutikno
                   Informatics Faculty                                                           STEI Faculty
             Telecom Institute of Technology                                             Bandung Institute of Technology
                   Bandung, Indonesia                                                         Bandung, Indonesia
                        Kuspriyanto                                                              Arif Sasongko
                     STEI Faculty                                                                 STEI Faculty
             Bandung Institute of Technology                                             Bandung Institute of Technology
                  Bandung, Indonesia                                                           Bandung, Indonesia
Abstract—Time-to-market pressure and the productivity gap force vendors and researchers to improve embedded system design methodology. The currently used design method, Register Transfer Level (RTL) modeling, is no longer adequate to meet the needs of embedded system design. A new methodology is needed to address the shortcomings of RTL. In this paper, a new methodology for the hardware embedded system modeling process is designed to improve design process performance using Transaction Level Modeling (TLM). TLM is a design concept at a higher abstraction level than the RTL model. The parameters measured include design process time and design accuracy. The RTL models were implemented using the Avalon and Wishbone buses, both System-on-Chip buses. The performance improvement is measured by comparing the TLM and RTL model processes. The experimental results show that the performance improvements for Avalon RTL using the new design methodology are 1.03 for 3-tiers, 1.47 for 4-tiers and 1.69 for 5-tiers. The performance improvements for Wishbone RTL are 1.12 for 3-tiers, 1.17 for 4-tiers and 1.34 for 5-tiers. These results show the trend of design process improvement.

   Keywords: Design Methodology, Transaction Level Modeling (TLM), Register Transfer Level (RTL), System on Chip.

                       I. INTRODUCTION
    Design is an important step in the whole embedded system design process. The embedded system development process begins with the hardware and software specification. The growing consumer demand for tools with more functionality has led to an increase in the complexity of the final implementation of such designs. The ability of the semiconductor industry to reduce the minimum feature size of chips has supported these demands. Moore's Law, by which the number of devices per chip roughly doubles every eighteen to twenty-four months, is still accurate. However, even though current IC technology follows the growing consumer demands, the effort needed to model, simulate, and validate such designs is adversely affected. This is because current modeling methods and frameworks, the hardware and software co-design environments, do not fit the rising demands.

    Fortunately, the electronic design automation industry has prepared to face this problem by providing engineers with support for these challenges. The introduction of register transfer level (RTL) as a higher abstraction layer over gate-level design was a revolutionary step in facing these challenges. The RTL abstraction layer is accepted as the abstraction layer for describing hardware designs. EDA vendors are pushing the abstraction layer higher to address the shortcomings of RTL. The definition of ESL is "a level above RTL including both hardware and software design", as suggested by the International Technology Roadmap for Semiconductors (ITRS).

    ESL design and verification methodology consists of a broad spectrum of environments for describing formal and functional specifications. Many terms are used to describe the ESL layer, such as hardware and software co-design models, architectural models, RTL and software models, and cell-level models. This prescription includes the modeling, simulation, validation and verification of system-level designs. Models at the higher layer are descriptions above the RTL abstraction layer depicting the system behavior. There are a number of ways in which the abstraction layer may be raised above RTL. For example, SystemC presents a transaction-level model for the modeling and simulation of embedded software systems.

Time-to-Market Pressure
    Growing consumer demand for various complex applications pressures vendors to design and implement embedded systems in a short time frame. Being late to enter the market means cost or lost opportunity.

    This condition is called time-to-market pressure for embedded system vendors. It demands a shorter design process.
                   Figure 1. Time-to-Market and revenues [5] (revenues in $ versus time in months)

Embedded system design
    The design flow of an embedded system begins with the design specification, which defines the system constraints, both cost and processing time. System functionality is defined in a behavioral description, and hardware/software partitioning is done to optimize the design result while still fitting the requirements. Hardware and software integration is done after the detailed hardware/software design. Register transfer level design is carried out by means of a hardware programming language such as Verilog, VHDL or Esterel. Verification and testing are done to ensure the embedded system design fits the specification [1].

                   Figure 2. Embedded system design flow [1] (phases: 1. product specification; 2. HW/SW partitioning; 3. detailed HW/SW design; 4. HW/SW integration; 5. acceptance testing; 6. maintenance and upgrade; the partitioning and detailed design phases need 2-6 months)

    The embedded design process is not as simple as the concept. A considerable amount of iteration and optimization occurs within phases and between phases.

Moore's Law and Productivity gap
    Moore's Law says that silicon capacity has been steadily doubling every 18-24 months. This allows companies to build more complex systems on a single silicon chip. However, the designers' ability to develop such systems in a reasonable amount of time has not kept pace with the increase in complexity. This is referred to as the productivity gap, as described by the ITRS (International Technology Roadmap for Semiconductors).

                   Figure 3. Moore's law [9]

    Increasing the complexity and functionality of electronic systems increases the possible design choices and the alternatives to explore for optimization purposes. Therefore, design space exploration is vital when constructing a system in order to choose the optimal alternative with respect to performance, cost, etc. Reducing the time needed to develop these system-level models for optimization purposes can accelerate design while keeping acceptable performance. A possible way to reduce this time is to raise the abstraction layer of the design.

Register Transfer Level design
    One of the past design revolutions in hardware design was the introduction of the RTL design layer as the entry point of the design flow. At RT level, registers and a data-flow description of the transfers between them replace the gate-level instantiation of independent flip-flops and logical operators. Hardware description languages such as VHDL, Verilog and Esterel are used for writing models at this RT level. The translation to gate level is called synthesis. Examples of components at this level are adders, multiplexers, decoders, and memories.
    The complexity of hardware design, combined with the lack of a revolutionary design approach similar to the RTL introduction, has induced very slow simulations and caused the productivity gap. The peak problem for systems-on-chip is the need for software development: co-simulating embedded software with the RTL model is possible, but too slow to allow its effective development. Designers are forced to wait for the final chip to begin writing the software of the system. This results in wasted time in the development cycle and increased time-to-market. Approaches that are efficient in terms of speed still require the RTL model to be available, are very costly, and provide limited debugging capabilities. Another approach to the problem is to raise the abstraction level: by creating models with fewer details before the RTL one, it should be possible to achieve better simulation speeds at the cost of less accuracy.

                    II.   RELATED WORK

A. Ptolemy
    Ptolemy is a project developed at the University of California, Berkeley [13]. The latest Ptolemy release is Ptolemy II 7.0.1, launched on 4 April 2008. Ptolemy is a framework for the simulation, prototyping, and synthesis of software dedicated solely to digital signal processing (DSP).
    The basic concept of Ptolemy is the use of a pre-defined computation model that regulates inter-component interactions. The main problem addressed by Ptolemy is the use of a mix of various computation models. Some of the model domains that have been implemented are: CT (continuous-time modeling), DDF (dynamic dataflow), DE (discrete-event modeling), FSM (finite state machines and modal models), PN (process networks with asynchronous message passing), Rendezvous (process networks with synchronous message passing), SDF (synchronous dataflow), SR (synchronous reactive), and Wireless.
    Ptolemy II comprises supporting packages such as graphs, which provides graph-theory manipulations; math, which provides mathematical matrices, vectors and signal processing; plot, which provides visual data display; and data, which provides the type system, data wrapping and expression parsing. The Ptolemy II package comprises the following parts:
    Ptolemy II C Code Generation: its main function is to generate code for the SDF, FSM and HDF models; the entire model can be converted into C code.
    Ptalon: an actor-oriented design representation for the most common design strategy in embedded system design. A system is frequently modeled as a block diagram, where a block represents a subsystem and inter-block arrows represent signals.
    Backtracking: this facility serves to save previous system state values, a function that is most critical in distributed computations.
    Continuous domain: a remake of the Continuous Time domain with more meticulous semantics.

B. COSYMA (CO-SYnthesis for eMbedded Architectures)
    Cosyma was developed at Braunschweig University. Cosyma performs an operation-separation process on the lowest blocks to improve program execution speed.
    This speed improvement is achieved by adding co-processor hardware that performs part of the functions traditionally run by software. The Cosyma flow takes as input the Cx program [8], an extension of the C program to enhance parallel data processing. Its final output is a hardware block and the primitives of hardware/software communication.

C. VULCAN
    Vulcan is designed to cut the cost of ASIC program time. The cost reduction is achieved by separating parts of the design into software.
    The initial specification is written in HardwareC, a hardware description language (HDL) that can be synthesized through the OLYMPUS synthesis system. Specifications are mapped into an intermediate representation, the Control-Data Flow Graph (CDFG). It is at this level that Vulcan separates hardware from software.
    The separation of hardware from software is achieved through a heuristic graph-partitioning algorithm that works in polynomial time. This separation algorithm considers different partitions of the CDFG between hardware and software, to minimize hardware cost while maintaining the predetermined deadlines.

          TABLE 1. COMPARISON OF MODELING FRAMEWORKS

  Name    | Specification | Modeling   | HW/SW Part    | SW design | HW design
  Ptolemy | C++           | FSM        | GCLP          | C         | VHDL
  Cosyma  | C*            | Syntax DAG | Sim annealing | C         | HardwareC
  Vulcan  | Hercules      | Vulcan     | Greedy        | DLX C     | HEBE
  Stellar | Nova          | Nebula     | Magellan      | GCC       | Asserta

D. STELLAR
    STELLAR is a system-level co-synthesis environment for transformative applications. A transformative application is one that executes its processes every time it receives a trigger, such as a JPEG encoder. As an input specification, STELLAR provides a C++ library named NOVA. The input specifications take the form of application specifications, the architecture, and performance yardsticks. The output format of NOVA is executable. STELLAR supports software estimation through profiling and uses the ASSERTA synthesis tool for estimating hardware.
    STELLAR takes the input specification and definitions in the NEBULA intermediate format. Its design environment provides two tools, MAGELLAN and ULYSSES, for co-synthesizing and evaluating the HW/SW system. MAGELLAN optimizes latency through retiming, and ULYSSES optimizes application throughput. Its output comprises the hardware specification, the software, and the interface. The output specification can be translated into SystemC code, and its functionality verified through simulation.
    Table 1 shows the comparison between all the embedded system design frameworks.

          III.    THE NEW DESIGN METHODOLOGY

A. Transaction Level Modeling (TLM)
   Transaction-level models fill the gap between purely functional descriptions and RTL models. They are created after hardware/software partitioning, that is, after it has been
decided, for each processing step, whether it will be performed by a dedicated hardware block or by software. The main application of TLM is to serve as a virtual chip (or virtual platform) on which the embedded software can be run.
    The main idea of TLM is to abstract away communication on the buses by so-called transactions: instead of modeling all the bus wires and their state changes, only the logical operations (reading, writing, etc.) carried out by the buses are considered in the model. Contrary to RTL, where everything is synchronized on one or more clocks (a synchronous description), TLM models do not use clocks. They are asynchronous by nature, with synchronization occurring during the communication between components. These abstractions allow simulations multiple orders of magnitude faster than RTL.

               Algorithm Model
               UnTimed Functional Model
               Timed Functional Model
               Bus Cycle Accurate Model
               Cycle Accurate Model
               Register Transfer Level Model

                    Figure 4. TLM Model Stack

    The other advantage of TLM models is that they require far less modeling effort than RTL or Cycle Accurate models. This modeling effort is further reduced when C/C++ functional code already exists for the processing done by the hardware block to be modeled. For instance, one can reuse the reference code of a video decoder or of a digital signal processing chain to produce a TL model. Unlike a Cycle Accurate model, which is no longer the reference once the RTL is created, the TLM is thereby an executable "golden model" for the hardware. Various definitions of TLM exist; some of them even rely on clocks for synchronization, which looks more like the Cycle Accurate level. A transaction is an atomic data exchange between an initiator and a target. The initiator takes the initiative to perform the transaction, whereas the target is considered always able to receive it (or at least to indicate to the initiator that it is busy). This corresponds to classical concepts in bus protocols. The initiator issues transactions through an initiator port, and a target receives them through a target port. Some components have only initiator ports, some have only target ports, and some contain both initiator and target ports.
    The information exchanged via a transaction depends on the bus protocol. However, some of it is common to all protocols:
•    The type of transaction, which determines the direction of the data exchange; it is generally read or write.
•    The address, an integer determining the target component and the register or internal memory address within that component.
•    The data that is sent or received.
•    Some additional meta-data, including a return status (error, success, etc.), the duration of the transaction, and bus attributes (priority, etc.).
    The most basic functionality shared by all buses, or more generally by interconnection networks, is to route transactions to their destination depending on their address. The destination is determined by the global memory address map, which associates a memory range with each target port.

                   Figure 5. TLM process model

    In order for the embedded software to execute correctly, the address map and the offset of each register must be the same as in the final chip (register accuracy). Additionally, the data produced and exchanged by the components must also be the same (data accuracy). Finally, the interrupts have to correspond logically to the final ones. One can view these requirements as a contract between the embedded software and the hardware. This contract guarantees that if the embedded software runs flawlessly on the virtual platform, then it will run in the same way on the final chip.

B. New Design Flow of Hardware Embedded System Design
    In this paper, a new design flow for modeling hardware embedded systems is designed using the transaction level modeling (TLM) method for early verification. The verification process is done as the first step, before the detailed design. Transaction level modeling is one of the new trends in embedded system design, following the development of register transfer level modeling.
    The research scope is particularly the hardware part of the embedded system, after the hardware/software separation process has been performed. There are three stages in the detailed design:
1.   Hardware part definition: definition of the hardware embedded system that will be implemented.
2.   TLM modeling: model construction with the transaction modeling approach, together with early verification. The model refinement process can be generated by performing tuple correction: M, S, A, PM.
3.   RTL modeling: RTL model construction is the final process of all hardware design of the embedded system.
                                                                              process of all hardware designs of embedded system. In
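The transaction fields and address-map routing described above can be sketched in a few lines of plain C++. This is only an illustration of the concept, not the SystemC/TLM API; all type and field names here are our own.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// A transaction carries the fields common to most bus protocols:
// type (direction), address, data, and some meta-data (status).
enum class TxType { Read, Write };
enum class TxStatus { Ok, Error };

struct Transaction {
    TxType type;
    uint32_t address;             // selects the target and an offset inside it
    uint32_t data;                // payload sent (write) or received (read)
    TxStatus status = TxStatus::Ok;
};

// The global memory address map: one range per target port.
struct MapEntry {
    uint32_t base, size;
    std::string target;           // name of the target component
};

// Route a transaction to its destination by address, as a bus or
// interconnection network would; an empty string means "no target".
std::string route(const std::vector<MapEntry>& map, Transaction& tx) {
    for (const MapEntry& e : map)
        if (tx.address >= e.base && tx.address < e.base + e.size)
            return e.target;
    tx.status = TxStatus::Error;  // address not covered by the map
    return "";
}
```

For example, with a map {0x0000-0x00FF -> "ram", 0x1000-0x100F -> "uart"}, a write to address 0x1004 is routed to "uart", while an access outside both ranges comes back with an error status.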
In this process, the transformation from the TLM model into the RTL model is conducted.

                Figure 6. The new design methodology

C. Procedure and Modeling Block Diagram
    The basic modeling procedure is designed as a standard process for hardware modeling. The modeling steps of the new design methodology are:
1.   Define: the 4-tuple input (M, S, A, PM).
2.   A module with ports and methods is made for each master.
3.   A module with ports and methods is made for each slave.
4.   A bus arbiter is made with the algorithm in A.
5.   Every method in the masters and slaves is defined in PM.
6.   Early verification of system requirement compliance.
7.   If the system requirements are not satisfied, perform tuple refinement starting from step 1.
8.   Addition of RTL ports and processes.
9.   Removal of TLM ports and processes.
10.  RTL arbiter implementation.
11.  Port mapping.
    Stages 1 to 6 are the initial stage of transaction level model creation, for the purpose of early verification of the hardware model. The output of this first process is a TLM model that fulfills the design requirements.
1) Block Diagram
    The block diagram shows the inputs and outputs of the system. The inputs of the transaction level modeling block diagram include:
•   Master: the number of master components that actively perform the read() and write() processes as standard operations.
•   Slave: the number of slave components, considered passive components that wait for transactions from the masters.
•   Arbiter: the bus management system, namely the algorithm managing mutual access between one slave and one or more masters.
•   PM: the processes taking place in the masters and slaves, such as the read() and write() processes.
•   Tiers: the total number of main components in a system, including the arbiter.
•   Specification: the explanation of the system requirements that must be met by the designed system.
    The output of the design system block is a TLM model. The model formulation process is conducted systematically.

                   Figure 7. Block diagram

2) Defining Master and Slave
•   The master and slave component definitions consist of three parts: name, ports, and functions/methods. Example of a master:
                        Name : MicroPro
              Port :
               int mydata, input1, input2, input3;
               int cnt = 0;
               unsigned int addr = m_start_address;
              Function/Method :
              Do_add();
              Memory_Read();
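Following the three-part definition above (name, ports, functions/methods), the example master could be sketched as a plain C++ class. This is a minimal sketch under our own assumptions: in the actual flow this would be a SystemC module, and the internal behavior of Do_add() and Memory_Read() is illustrative, not taken from the paper.

```cpp
#include <cstdint>

// Sketch of the example master "MicroPro": name, ports, and methods,
// mirroring the three-part definition (name, port, function/method).
class MicroPro {
public:
    // Ports / state visible on the module boundary in the sketch.
    int mydata = 0, input1 = 0, input2 = 0, input3 = 0;
    int cnt = 0;
    unsigned int addr;

    explicit MicroPro(unsigned int m_start_address) : addr(m_start_address) {}

    // Do_add(): a PM-style processing method -- here, illustratively,
    // it accumulates the three inputs into mydata.
    void Do_add() {
        mydata = input1 + input2 + input3;
        ++cnt;               // count processed operations
    }

    // Memory_Read(): in a full model this would issue a read transaction
    // on the bus toward the slave memory at `addr`; here it just returns
    // the last computed value as a stand-in and advances the address.
    int Memory_Read() {
        ++addr;
        return mydata;
    }
};
```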
•    Arbiter: the bus management algorithm, such as round robin.
•    PM: a process in a master or slave. PM is the more detailed definition of each function/method, in the form of pseudocode.
    In transaction level modeling, the data transfer and control processes from master to slave are conducted by accessing a bus controlled by an arbiter. Each master can issue a request for bus access to send data to, or read data from, a slave. There are several possible conditions a master may encounter: the bus condition is OK if the bus is not being operated by another master; WAIT if the bus is being used by another master; or ERROR if the targeted slave is not present in the slave list.
3) TLM – RTL Transformation
    After the early verification process is finished and the given specification is met, the last stage is the transformation from TLM into RTL. The purpose of the transformation is to generate a detailed model that is available for synthesis. The phases of the TLM-to-RTL transformation can be divided into several general stages:
•    Port addition and deletion: some ports of the TLM model must be deleted because, by its basic principles, they are not needed in the RTL model, such as the request port. Meanwhile, new ports must be added to the RTL model to perform the detailed processes, as is the nature of RTL modeling.
•    Process addition and deletion: besides port addition and deletion, it is also necessary to add and delete processes. An example of a process that must be deleted from the TLM model is one that sends a request, while a process that must be added to the RTL model is one that accesses a multiplexer.
•    Total master and slave determination: the total numbers of masters and slaves are used to build the RTL bus pattern. They influence the number and types of multiplexers: the multiplexer for 4 masters uses mux4, while 2 masters use mux2.
•    Arbiter determination (according to the given protocol): the arbiter is the management algorithm for slave access when the access comes from one or more masters. An example of an algorithm used is round robin, as in the Avalon bus.
•    Port mapping: the last stage of the transformation is connecting all the ports of all the available components, along with the additional components such as the multiplexers, in detail, pin-per-pin.
4) Examples of TLM-RTL Transformation
    The following are examples of the transformation from TLM to RTL using an RTL bus with Wishbone.
Bus target: Wishbone
1.   Port addition and deletion:
     •   sc_in_clk clock; (added)
     •   sc_port<sc_signal_out_if<bool> > grantM1; (deleted)
2.   Process addition and deletion:
     •   sel_mux_master1 (added)
     •   grantM1process (deleted)
3.   Total master and slave determination:
     •   Determining the number of lines added and removed across the whole system.
4.   Arbiter determination:
     •   The Wishbone protocol arbiter is round robin.
     •   Every master sends a request for slave access. If several masters request access to the same slave, the arbiter grants access to one master and sends a waiting signal to the other masters.
5.   Port mapping of all modules, master and slave:
     •   Mapping of the master ports to all the multiplexers.
     •   Mapping of the multiplexer ports to the slaves and the masters.

D. Criteria and Measurement
    There are two criteria used for measuring the new system experiment:
1.   Design performance improvement (Te)
    The performance improvement is characterized by the decrease in the time required to design an embedded system. The design period using the new method is compared with the RTL design period. The time difference needed to design the same system with the two methods determines the success of the new design: the new design method is considered successful if its design period is shorter than the previous one. The general formula of the design performance improvement (Te) is Te = TRTL/TTLM, where TRTL is the design process time for modeling the RTL model and TTLM is the design process time for modeling the TLM model.
2.   Accuracy level (α)
    A difference between the design models can be introduced for the purpose of improving performance. This is acceptable only if both models produce the same output for the same input. The closer the results of the two systems are for the same input, the more accurate the system. Accuracy level: α = P_TLM(Input) − P_RTL(Input) ≈ 0.

            IV.   EXPERIMENT AND RESULTS ANALYSIS

A. Avalon and Wishbone Bus for System on Chip (SoC)
    The Avalon and Wishbone buses are buses for SoC, designed for chip-based applications. An SoC is a compact system.
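The round-robin arbitration and the bus conditions (OK, WAIT, ERROR) described above can be sketched as follows. This is our own simplified model of the mechanism, not the actual Avalon or Wishbone arbiter implementation.

```cpp
#include <vector>

enum class BusStatus { OK, WAIT, ERROR };

// Round-robin arbiter: among the masters currently requesting the bus,
// grant the one that follows the previously granted master.
class RoundRobinArbiter {
public:
    RoundRobinArbiter(int num_masters, int num_slaves)
        : n_(num_masters), slaves_(num_slaves), last_(num_masters - 1) {}

    // One arbitration round: request[i] is true if master i wants the
    // bus; returns the index of the granted master, or -1 if none.
    int arbitrate(const std::vector<bool>& request) {
        for (int i = 1; i <= n_; ++i) {
            int cand = (last_ + i) % n_;   // next master after the last grant
            if (request[cand]) { last_ = cand; return cand; }
        }
        return -1;
    }

    // Condition seen by `master` addressing `slave`: OK if granted,
    // WAIT if another master holds the bus, ERROR if the slave is absent.
    BusStatus status(int granted, int master, int slave) const {
        if (slave < 0 || slave >= slaves_) return BusStatus::ERROR;
        return (granted == master) ? BusStatus::OK : BusStatus::WAIT;
    }

private:
    int n_, slaves_, last_;
};
```

With 3 masters and masters 0 and 1 both requesting, the arbiter grants master 0 first (master 1 sees WAIT), then master 1 on the next round; a request toward a slave outside the slave list yields ERROR.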
It comprises three kinds of components: the master components, the slave components, and the bus system.
    The Avalon and Wishbone buses are used in the implementation stage at the RTL level. There are 5 main components in the Avalon bus, each with its own function, as follows:
1.   Master: active components which have the initiative to perform data access, either read() or write().
2.   Slave: passive components waiting for data access from the masters.
3.   Logic Request: components managing the access requests from a master for a slave. Each component has one logic request component.
4.   Logic Arbitrator: component managing the access to one slave according to the requests of one or more masters. Each slave has one Logic Arbitrator to manage the slave access.
5.   Multiplexer: component for managing the access to a slave according to the request of the Logic Arbitrator. There are 5 multiplexers for each slave: mux address, mux BE_n, mux write, mux writedata and mux read. There is one mux for the data returned to the master: mux master.

                  Figure 8. Avalon bus architecture

    There are 5 main components in the Wishbone bus: master, slave, decoder, round robin arbiter and multiplexor. The function of each component is the same as on the Avalon bus.

B. Testing Scenario
    Testing is performed to measure the performance increase of the design using the new transaction level modeling (TLM) design flow compared to register transfer level (RTL) modeling. Several testing scenarios are conducted in the testing process, involving several master, slave and arbiter components.
    In each testing scenario, the testing model is generated in transaction level modeling and in RTL. Both models are compared based on the number of lines required to implement them. Testing starts from a simple system consisting of one master and one slave. Then the component complexity is increased periodically by continuously adding to the number of tiers.

                  Figure 9. Wishbone bus architecture

    Tiers is a general term describing the embedded system components which communicate with each other; for example, 2-tiers means there are two communicating components, 3-tiers means there are three, and so on.

                  Figure 10. Multi master-slave system

C. TLM-RTL Model Testing
a. Line of Code of the TLM-RTL Models
    The testing involves two, three, four and five components, namely masters, slaves and an arbiter. The master components actively generate and send data to the slaves, while the slaves serve as receivers.
    Based on the experiment results, the number of lines needed to model a system using TLM is smaller than using RTL.
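The per-slave multiplexing performed by the Avalon-style mux components above can be illustrated with a single simplified mux in C++. This sketch collapses the five per-slave multiplexers (address, BE_n, write, writedata, read) into one signal bundle for brevity; the structure and names are our own assumptions.

```cpp
#include <vector>

// Signals a master drives toward a slave; each field stands for one of
// the per-slave multiplexers (mux address, mux write, mux writedata).
struct MasterSignals {
    unsigned address = 0;
    bool write = false;
    unsigned writedata = 0;
};

// Per-slave multiplexer: forwards the signals of the master selected by
// the Logic Arbitrator's grant; with no grant it drives idle defaults.
MasterSignals mux_to_slave(const std::vector<MasterSignals>& masters,
                           int granted) {
    if (granted < 0 || granted >= static_cast<int>(masters.size()))
        return MasterSignals{};   // no grant: idle/default values
    return masters[granted];
}
```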
Such a condition occurs because the master, slave, and bus definitions in RTL are more detailed than in TLM.

                 Figure 11. Line of Code comparison

    The level of detail of RTL modeling influences several parts of the program, including:
•   The port definition of each master and slave component.
•   The initial definition of the top-level system, including port addition, instantiation, port mapping and destruction.
•   The definition of a new component called the multiplexor. On the Avalon bus, for each slave addition, 6 new multiplexors must be added accordingly.
    In TLM, the component definition process and the port mapping are simpler than in RTL, so they do not require as many instructions as RTL. The only new port mapping when a master is added is a mapping to the bus and the clock.

              Figure 12. Design process time comparison

b. Measurement based on design time (man-days)
    Design time refers to the time needed by a programmer for code generation and reporting. The standard used is 8 lines of code per man-day. The design time can be derived directly from the total modeling code of each TLM and RTL model. Figure 12 shows the comparison of the time needed to design the four testing scenarios.
    Measurement of the Design Process Performance (Te)
    Performance is one of the important parameters for measuring the success of a new method. In this dissertation, the design process performance is measured as the ratio between the design process time needed using the RTL model and that using the TLM model.
    The measurement of the performance improvement of the design process can be conducted by using the following equation:
                         Te = TRTL / TTLM
    Based on the experiment results shown in Figures 11 and 12, the performance improvement graph shown in Figure 13 is obtained.
    Figure 13 indicates that the design process performance increases as the number of components on the embedded system increases, except for the 2-tiers case study, whose performance is higher than that of 3-tiers.

             Figure 13. Design performance improvement

    One of the advantages of TLM modeling is that transactions occur among components. The more components the system has, the higher the increase of the transaction level on the bus. This increase in transactions between components is very well suited to TLM modeling.
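The design-time and Te calculations above reduce to a couple of formulas, sketched here in C++. The 8 lines-of-code-per-man-day standard is taken from the text; the line counts in the usage example are hypothetical, not the paper's measured values.

```cpp
// Convert lines of code to design time (man-days) using the paper's
// standard of 8 lines of code per man-day.
double man_days(int lines_of_code) {
    return lines_of_code / 8.0;
}

// Design performance improvement: Te = TRTL / TTLM.
double performance_improvement(int rtl_loc, int tlm_loc) {
    return man_days(rtl_loc) / man_days(tlm_loc);
}
```

For instance, a hypothetical 160-line RTL model takes 20 man-days; if the equivalent TLM model needs 80 lines (10 man-days), Te = 2.0.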
    In RTL modeling, on the contrary, the growing number of components and transactions in a system makes the design more difficult. As the totals of components and transactions in the model grow larger, the design process gets slower. Based on the conditions mentioned above, the design process with TLM modeling performs better and better when there are many components interacting with each other. Those are the advantages of TLM modeling, which takes place at the transaction level.

                          V. CONCLUSION

    Based on the testing shown in the previous chapter, several important conclusions can be drawn:
1.  The new embedded system design flow can be used to increase the design process performance. This means that with this method the design process will be shorter than with RTL modeling: the performance improvements compared to the RTL Avalon bus are 1.03, 1.47 and 1.69 for 3, 4 and 5 tiers respectively, and the performance improvements compared to the RTL Wishbone bus are 1.12, 1.17 and 1.34 for 3, 4 and 5 tiers respectively.
2.  TLM modeling is better applied to a complex system, in the condition that more than two components interact and an arbitration process occurs.
    The contributions of this paper are the new embedded system design flow using the transaction level modeling approach and a standard procedure for designing the hardware RTL model.
    In the effort of constructing an integrated framework ranging from the specification to the construction of a model that is ready to be synthesized, the following topics shall be addressed in the next research:
1.   Automation of the whole standard procedure.
2.   Design of the system software and application parts.
3.   Integration between the software and hardware processes.
    By adding these three parts of the process, a new framework for embedded system design will be generated. The initial framework includes the systematic steps of embedded system construction. The next step is the automation of the whole process.

    Maman Abdurohman thanks the Faculty of Informatics of IT Telkom and the Faculty of STEI Electro, Bandung Institute of Technology, for their financial support and research resources, which allowed this research to be completed.

                          REFERENCES

[1]  Berger, Arnold S. "Embedded System Design: An Introduction to Processes, Tools, and Techniques". CMP Books. 2002.
[2]  Chatha, Karamvir Singh. "System-Level Cosynthesis of Transformative Applications for Heterogeneous Hardware-Software Architecture". Dissertation at University of Cincinnati. 2001.
[3]  Cornet, Jerome. "Separation of Functional and Non-Functional Aspects in Transactional Level Models of Systems-on-Chip". Dissertation at Institut Polytechnique de Grenoble. 2008.
[4]  Cummings, Clifford. "SystemVerilog's priority & unique – A Solution to Verilog's full_case & parallel_case Evil Twins". SNUG. Israel. 2005.
[5]  Frank Vahid and Tony Givargis. "Embedded System Design: A Unified Hardware/Software Introduction". John Wiley & Sons, Inc., New York. 2002.
[6]  Genovese, Matt. "A Quick-Start Guide for Learning SystemC". The University of Texas. Austin. 2004.
[7]  Gordon E. Moore. "Cramming more components onto integrated circuits". Electronics, 38(8):114-117, 19 April 1965.
[8]  Leung, Julie. Kern, Keith. Dawson, Jeremy. "Genetic Algorithms and Evolution Strategies".
[9]  Mathaikutty, D. A. "Metamodeling Driven IP Reuse for System-on-chip Integration and Microprocessor Design". Dissertation at Virginia Polytechnic Institute and State University. 2007.
[10] Mooney III, Vincent John. "Hardware/Software Co-design of Run-time Systems". Dissertation at Stanford University. 1998.
[11] Palnitkar, Samir. "Verilog® HDL: A Guide to Digital Design and Synthesis, Second Edition". Sun Microsystems, Inc. California. 2003.
[12] Patel, Hiren D. "Ingredients for Successful System Level Automation & Design Methodology". Dissertation at Virginia Polytechnic Institute and State University. 2007.
[13] _____. "Ptolemy II Project". UC Berkeley. 2008.
[14] _____. "Describing Synthesizable RTL in SystemC™". Synopsys, Inc., Version 1.2. November 2002.
[15] _____. "Avalon Bus Specification: Reference Manual". Altera.

                        AUTHORS PROFILE

    Maman Abdurohman is a PhD student at the STEI faculty of Bandung Institute of Technology. He is working at the Informatics faculty of Telkom Institute of Technology, Bandung. His primary areas of interest include embedded system design and microcontrollers. Maman holds a master's degree from Bandung Institute of Technology.
    Kuspriyanto is a Professor at the STEI faculty of Bandung Institute of Technology. He is a senior lecturer in the computer engineering laboratory. His major areas of interest include digital systems and electronic design. He is the Head of the Laboratory of Computer Engineering.
    Sarwono Sutikno is an Associate Professor at the STEI faculty of Bandung Institute of Technology. His major areas of interest include cryptography and embedded system design. He is a director at PPATK on electronic transaction control.
    Arif Sasongko is a lecturer at the STEI faculty of Bandung Institute of Technology. His major areas of interest include WiMAX design and embedded system design. His current project is designing a high-speed data link WiMAX system.

                                                                                                           ISSN 1947-5500
                                                           (IJCSIS) International Journal of Computer Science and Information Security,
                                                           Vol. 8, No. 1, April 2010

          Semi-Trusted Mixer Based Privacy Preserving
     Distributed Data Mining for Resource Constrained Devices

                    Md. Golam Kaosar                                                    Xun Yi, Associate Professor
            School of Engineering and Science                                          School of Engineering and Science
                   Victoria University                                                        Victoria University
                  Melbourne, Australia                                                       Melbourne, Australia

Abstract— In this paper a homomorphic privacy preserving association rule
mining algorithm is proposed which can be deployed in resource constrained
devices (RCD). Privacy preserving exchange of the counts of itemsets among
distributed mining sites is a vital part of the association rule mining
process. Existing cryptography based privacy preserving solutions consume a
great deal of computation due to the complex mathematical equations
involved. Therefore privacy solutions involving less computation are
extremely necessary for deploying mining applications in RCD. In this
algorithm, a semi-trusted mixer is used to unify the counts of itemsets
encrypted by all mining sites without revealing the individual values. The
proposed algorithm is built on a well known communication-efficient
association rule mining algorithm named count distribution (CD). Security
proofs along with performance analysis and comparison show the
acceptability and effectiveness of the proposed algorithm. The efficient
and straightforward privacy model and the satisfactory performance of the
protocol make it one of the initiatives in deploying data mining
applications in RCD.

   Keywords- Resource Constrained Devices (RCD), semi-trusted mixer,
association rule mining, stream cipher, privacy, data mining.

                       I.    INTRODUCTION

     Data mining, sometimes known as data or knowledge discovery, is the
process of analyzing data from different points of view and deducing useful
information which can be applied in various applications including
advertisement, bioinformatics, database marketing, fraud detection,
e-commerce, health care, security, sports, telecommunication, web, weather
forecasting, financial forecasting, etc. Association rule mining is one of
the data mining techniques which helps discover underlying correlations
among different data items in a certain database. It can deduce hidden and
unpredictable knowledge which may be of high interestingness to the
database owners or miners.

     Rapid development of information technology, increasing use of
advanced devices and development of algorithms have amplified the necessity
of privacy preservation in all kinds of transactions. It is even more
important in the case of data mining, since sharing of information is a
primary requirement for the accomplishment of the data mining process. As a
matter of fact, the more the privacy preservation requirement is increased,
the less accuracy the mining process can achieve. Therefore a trade-off
between privacy and accuracy is determined for a particular application.

     In this paper we denote by Resource Constrained Device (RCD) any kind
of device having limited capability of transmission, computation, storage,
battery or any other features. Examples include but are not limited to
mobile phones, Personal Digital Assistants (PDAs), sensor devices, smart
cards, Radio Frequency Identification (RFID) devices etc. We also interpret
a lightweight algorithm as a simple algorithm which requires less
computation, low communication overhead and less memory, and can be
deployed in a RCD. Integration of communication devices of various
architectures leads to a global heterogeneous network which comprises
trusted, semi-trusted, untrustworthy, authorized, unauthorized, suspicious,
intruder and hacker types of terminals/devices supported by little or no
dedicated and authorized infrastructure. Sharing data for data mining
purposes in such a resource constrained ad-hoc environment is a big
challenge in itself. Preservation of privacy intensifies the problem by
another fold. Therefore privacy preserving data mining in RCD envisions
facilitating the mining capability to all these tiny devices, which may
have a major impact on the market of the near future.

     Data mining capability of RCD would also flourish in the future era of
ubiquitous computing. The owner of the device would perform mining
operations on the fly. Small sensor devices would be able to optimize or
extend their operations based on the dynamic circumstances instead of
waiting for a time consuming decision from the server. Scattered agents of a


security department can take instant decisions about a crime or a criminal
while on duty. To comprehend the necessity of lightweight privacy
preserving data mining, let us consider another circumstance: there are
many scattered sensor devices located in a geographical location, belonging
to different authorities, which are serving different purposes with some
common records about the environment. Now if it is required to mine data
among those sensor devices to accomplish a common interest of the
authorities in real time, then preserving privacy would be the first issue
that must be ensured. Another motivation behind developing our proposed
system could be healthcare awareness. Let us assume some community members
or some university students want to know the extent of the spread of some
infectious diseases such as swine flu, bird flu, AIDS etc. Each individual
is very concerned about privacy since the matter is very sensitive. They
are equipped with a mobile phone or similar smart device and want to know
the mining result on the fly. In such circumstances, a distributed
lightweight privacy preserving data mining technique would provide a
perfect solution. In addition, relevant people can be warned or prescribed
based on all available health information, including previously generated
knowledge about a particular infectious disease.

     There is not much research work done on lightweight privacy preserving
data mining, but there is plenty of research on privacy preserving data
mining in general. Essentially two main approaches are adopted for privacy
preserving data mining solutions. The first one is randomization, which is
basically used for centralized data. In this approach data is perturbed
using a randomization function and submitted for mining. The randomization
function is chosen such that the aggregated properties of the data can be
recognized on the miner side. In [1, 2, 3] authors have proposed such
approaches. One of the major drawbacks of the randomization approach is
that if the precision of the data mining result is increased, the privacy
is not fully preserved [4].

     The second one is the cryptographic approach, in which the data is
encrypted before it is shared. The miner cannot decrypt individual inputs
separately; rather it needs to decrypt the unified encrypted data together.
Therefore the miner cannot associate particular information with a
particular party. An example of such an approach is Secure Multiparty
Computation (SMC) proposed by Yao [5]. Another cryptography based privacy
preservation technique is proposed by M. Kantarcioglu and C. Clifton [6],
which involves an enormous amount of mathematical computation and
communication between data sites. This is too heavy to be implemented in a
RCD. Among other privacy preserving data mining works, [7] and [8] also
require vast and complex mathematical equations to be solved. There are
some research works on privacy issues for RCD separately too. Authors in
[21] propose a technique to hide the location information of a particular
device for location based applications. A middleware, LocServ, is designed
which lies in between the location-based application and the location
tracking technology. A group signature based privacy scheme for vehicles is
proposed in [22], which addresses the issue of preserving privacy in
exchanging secret information such as a vehicle's speed, location etc.

     Some research approaches address the issue of hiding sensitive
information in a data repository. In [23] and [24] the authors propose
techniques to hide sensitive association rules before the data is disclosed
to the public. A hardware enhanced association rule mining technique is
proposed in [25]. Data needs to be fed into the hardware before the hash
based association rule mining process starts. This approach may not be
realistic for RCD because it requires special purpose hardware and does not
handle the privacy issue. A homomorphic encryption technique, Paillier
encryption, is used by X. Yi and Y. Zhang [9] to preserve privacy, where
the authors propose a privacy preserving distributed association rule
mining using a semi-trusted mixer. This algorithm involves a lot of
computation due to the use of complex mathematical equations and big prime
numbers as keys in the Paillier encryption.

     A heterogeneous mobile device based data collection architecture is
proposed by P.P. Jayaraman [10]. Sensor devices are scattered in the
environment to collect various data, whereas regular mobile phones can work
as bearers of the data. A detailed architecture of the model is available
in [10]. The authors did not consider the privacy issue during the
transmission of data. If the mobile devices in the environment are intended
to be utilized as data bearers, then privacy should be one of the major
concerns; the model would be difficult to implement in real life unless
privacy is preserved. A lightweight privacy preserving algorithm similar to
the one in this paper could provide privacy preservation as well as a data
mining solution for these kinds of models.

     The main focus of the CD [18] algorithm is to reduce communication
overhead at the cost of redundant parallel computation in each data site.
In addition, this algorithm does not transmit the large itemsets in the
association rule mining process. Rather it communicates only the counts of
the itemsets, which lets it reduce communication overhead dramatically.
These features make it feasible to be deployed in RCD. On the other hand,
the semi-trusted mixer based privacy solution provided by Yi and Zhang in
[9] requires a lot of computation and the management of big encryption
keys. In this paper a more efficient semi-trusted mixer and homomorphic
encryption based privacy algorithm is proposed which adopts the rule mining
technique of CD to make the solution deployable in RCD.

     The remainder of the paper is organized as follows: Section 2
describes some necessary background information. Section 3 describes the
proposed solution, which consists of a privacy preserving algorithm and an
association rule mining algorithm for RCD. Section 4 contains the security
analysis and section 5 discusses the proofs and performance comparison.
Finally the conclusion is presented in section 6.

                       II.    BACKGROUND

     Privacy: According to The American Heritage Dictionary, privacy means
"The quality or condition of being secluded from the presence or view of
others". In data mining, if the


owner of the data requires the miner to preserve privacy, then the miner
gets no authority to use the data unless an acceptable and trustworthy
privacy preservation technique is ensured. Different points of view define
privacy in different ways. For simplicity we consider a definition which is
most relevant to this work. According to J. Vaidya [1], a privacy
preserving data mining technique must ensure two conditions: 'any
information disclosed cannot be traced back to an individual' and 'any
information disclosed does not constitute an intrusion'. A more technical
definition of privacy can be found in [11]. This paper also provides a
technical definition in the security analysis in section 4.

     Association Rule Mining: Let us consider that in a distributed data
mining environment a collective database DB is subdivided into DB1, DB2,
… , DBN over wireless data sites S1, S2, … , SN respectively.
I = {i1, i2, … , im} is the set of items, where each transaction T⊆I. The
typical form of an association rule is X⇒Y, where X⊆I, Y⊆I and X∩Y=φ. The
support s of X⇒Y is the probability that a transaction in DB contains both
X and Y. On the other hand, the confidence c of X⇒Y is the probability that
a transaction containing X will contain Y too. Usually it is the interest
of the data vendor to find all association rules having support and
confidence greater than or equal to a minimum threshold value. For
instance, for an association rule AB⇒C, the support is the fraction of
transactions in DB that contain {A, B, C}, and the confidence is
support({A, B, C}) / support({A, B}).

     More detail on the association rule mining process is available in
[12, 20].

     The association rule mining process consists of two major parts. The
first is to find frequent large itemsets, which have support and confidence
values more than a threshold number of times. The second part is to
construct association rules from those large itemsets. Due to the
simplicity and straightforward nature of the second part, most association
rule mining papers do not address it. The Apriori algorithm is one of the
leading algorithms, which efficiently determines all frequent large
itemsets along with their support counts from a database. This algorithm
was proposed by Agrawal in [14] and is discussed here in brief:

    Let Lk be the set of frequent k-itemsets. The Apriori algorithm finds
Lk from Lk-1 in two stages: joining and pruning.

     Joining: Generates a set of k-itemsets Ck, known as candidate
itemsets, by joining Lk-1 with other possible items in the database.

     Pruning: A (k−1)-itemset which is not frequent cannot be a subset of a
frequent k-itemset. Therefore any candidate containing an infrequent
(k−1)-subset should be removed.

     Stream Cipher: A stream cipher is a symmetric key cipher where
plaintext bits are combined with a pseudorandom cipher bit stream,
typically by an XOR operation. In a stream cipher a seed is used as a key
to generate a continuous stream of bits. This idea can be used for
generating random keys by encrypting a constant with the secret key/seed.
Therefore multiple randomly generated keys can be shared among multiple
entities simply by sharing a seed. In our proposed algorithm we need some
randomly generated keys, which can be generated by the Output Feedback Mode
(OFB) of the Data Encryption Standard (DES); details are available in [13].

     Homomorphic Encryption: Homomorphic encryption is a special form of
encryption with which one can perform a specific algebraic operation on the
plaintext by performing the same or a different operation on the
ciphertext. A detailed definition can be found in [13]. Let x1 and x2 be
two plaintexts, and let E and D denote the encryption and decryption
functions respectively. Let us consider two ciphertexts y1 and y2 such that
y1=Ek(x1) and y2=Ek(x2), where k is the encryption key. The encryption is
considered homomorphic if the following condition holds:
y1+y2=Ek(x1+x2).

                   III.    PROPOSED SOLUTION

     In this paper we propose a privacy preserving secret computation
protocol which is based on a homomorphic encryption technique for
distributed data sites. In this section first the privacy preserving
frequency mining algorithm is discussed and then the modified CD algorithm
is discussed, which ensures privacy in the association rule mining process.

    A. Privacy Preserving Frequency Mining
     In our proposed approach, there is a number of participating
semi-honest devices or data sites (>2) which are connected to each other
using heterogeneous media. There is a semi-trusted mixer which receives
encrypted count values from the sites through its private channel. It is
assumed that the semi-trusted mixer never colludes with any of the data
sites. In practice it could be assumed that it is authorized by a
government or semi-government agent. Data sites communicate with the mixer
through the private channel and the mixer communicates with all sites
through a public channel. Necessary keys are distributed to the sites by
the corresponding authorities or owners of the sites. It is also assumed
that the private channel is protected by a standard secret key
cryptosystem, such as DES [15] or AES [16]. Fig.1 describes the proposed
model in brief.
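The joining and pruning stages of Apriori described in Section II can be sketched as follows. This is a minimal illustration in Python; the function and variable names are ours, not from [14]:

```python
from itertools import combinations

def apriori_gen(L_prev, k):
    """Generate candidate k-itemsets C_k from the frequent
    (k-1)-itemsets L_prev, using the join and prune stages."""
    L_prev = set(L_prev)
    # Joining: union pairs of frequent (k-1)-itemsets whose
    # union has exactly k items, yielding candidate k-itemsets.
    candidates = set()
    for a in L_prev:
        for b in L_prev:
            union = a | b
            if len(union) == k:
                candidates.add(union)
    # Pruning: a candidate with any infrequent (k-1)-subset
    # cannot itself be frequent, so it is removed.
    return {
        c for c in candidates
        if all(frozenset(s) in L_prev for s in combinations(c, k - 1))
    }

# Example: all 2-itemsets over {A, B, C} are frequent,
# so the only surviving candidate 3-itemset is {A, B, C}.
L2 = {frozenset("AB"), frozenset("AC"), frozenset("BC")}
C3 = apriori_gen(L2, 3)
print(C3)  # the single candidate {A, B, C}
```

If {B, C} were missing from L2, pruning would discard {A, B, C} as well, which is what makes Apriori's candidate set small in practice.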



[Figure 1 shows N sites S1 … SN, each sending its message Mi to the mixer
over a private channel, and the mixer broadcasting ε′ back to all sites
over the broadcast channel.]

  Fig.1: Privacy preserving communication and computation
           process between data sites and the mixer

     In our proposed model we also assume that no site would collude with
the mixer to violate the privacy of others, since this would reveal its own
privacy too. In this model privacy is preserved if (1) a coalition of N−2
sites cannot ascertain the private value of any site and (2) the mixer can
learn nothing about the distributed database.
     Let us consider that there are N resource constrained sites S1, S2 …
SN which want to share the summation of a specific secret value of their
own without disclosing the value itself. The secret values are c1, c2 … cN
respectively. cij denotes the value belonging to site i for the jth
iteration (in the case of association rule mining it would be the jth
itemset).
     Secret parameters: Let us assume ρ is a large prime number such that
ρ > c1 + c2 + … + cN. The stream cipher seed is µ. These ρ and µ are shared
by all the sites using any key agreement algorithm similar to the one
proposed in [17]. In fact there will be no effect if ρ is disclosed to the
mixer. There are two more parameters, r and n, which are generated from a
stream cipher in which the seed µ is used as the key and any constant
(possibly ρ) as the plaintext. In each repetition the values of r and n
will be different due to the characteristics of the stream cipher.
Parameter r is chosen such that gcd(r, ρ) = 1, so that r is invertible
modulo ρ. If it is assumed that the lengths of r and n are l bits, then the
total number of bits in each chunk of the stream will be l + N·l = l(1+N).
The first l bits are the value of r; the second l bits are ni, a random
number allocated to the ith site for communication purposes. In every
iteration the value of ni will be different (similar to the value of a
nonce used in various cryptosystems). Thus for the jth site, nj is
allocated from bit l+j·l to l+(j+1)·l. The following figure (Fig.2)
describes the allocation of the values of r and n from the stream cipher.
The length of l should be chosen such that the following constraint is
held: … .

   Fig.2: Random key generation from stream cipher for each
                        iteration.

     It has already been mentioned that each data site communicates with
the mixer through a private channel and the mixer communicates with all
sites through a public channel. The communication stages of the algorithm
are depicted in the flow diagram of fig.3.

[Figure 3 shows each site Si sending its masked count αi to the mixer and
the mixer broadcasting ε′ back to all participating sites.]

              Fig.3: Flow diagram of the algorithm

Step 1: (Encryption)
     1.1 Each site Si computes rj following the above mentioned
         constraints.
     1.2 It encodes its count: αi = (r·cij + ni) mod ρ, with r and ni taken
         from the stream chunk for the current iteration j.
     1.3 Then Si sends αi to the mixer over the mixer's private channel.
Step 2: (Mixing)
     2.1 The mixer receives αi on its private channel (for all i = 1 to N).
     2.2 It adds them all together: ε′ = (α1 + α2 + … + αN) mod ρ.
     2.3 It broadcasts ε′ back to all participating sites.
Step 3: (Decryption)
     3.1 Each participating site Si receives ε′.
     3.2 Si has already computed n = n1 + n2 + … + nN. It gets the sum of
         the current iteration j by computing T = r⁻¹(ε′ − n) mod ρ.
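The three steps above can be sketched in Python. The encoding αi = (r·ci + ni) mod ρ is our reconstruction of the garbled formulas (it reproduces the numbers in the worked example of section III.B) and may differ from the authors' exact form; the parameter values are taken from that example:

```python
# Sketch of the three protocol steps. The encoding
# alpha_i = (r*c_i + n_i) mod rho is a reconstruction, not
# necessarily the authors' exact formula.

RHO = 91            # shared modulus (example value from section III.B)
R = 23              # shared per-iteration key from the stream cipher
N_I = [17, 11, 10]  # per-site nonces n_i, also from the stream cipher

def encrypt(count, site):
    """Step 1: each site S_i masks its count before sending."""
    return (R * count + N_I[site]) % RHO

def mix(alphas):
    """Step 2: the semi-trusted mixer just adds all ciphertexts;
    it never sees an individual count in the clear."""
    return sum(alphas) % RHO

def decrypt(eps):
    """Step 3: each site removes the masks to recover the sum."""
    r_inv = pow(R, -1, RHO)          # modular inverse of r
    return (r_inv * (eps - sum(N_I))) % RHO

counts = [5, 7, 6]
eps = mix(encrypt(c, i) for i, c in enumerate(counts))
print(decrypt(eps))  # 18 == 5 + 7 + 6
```

Note that decryption only needs ε′, r⁻¹ and the sum of the nonces, so every site recovers the same total without learning any other site's count.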


     Thus T = c1 + c2 + … + cN, the sum of the counts, is shared among all
sites without revealing any individual count. An example of the algorithm
is provided in the following section for further clarification.
    B. Example
     For simplicity let us consider three sites S1, S2 and S3 which want to
share the sum of their count values {5, 7 and 6} without revealing their
own values among themselves. The other shared and generated secret
parameters are: ρ=91, r=23, n1=17, n2=11, n3=10 and r⁻¹=4 (mod 91). To
minimize complexity, the values of r and ni are not calculated from the
stream cipher; rather their values are chosen spontaneously. Also let us
assume the value of r⁻¹ is the same for each site instead of different.
Communication between the sites and the mixer is performed using the
private channel, which is not depicted in this example either.
     Exchange of count values: Each site transmits its αi to the mixer
using the private channel.
     The mixer computes ε′ = (α1 + α2 + α3) mod ρ.
     ε′ is received by all sites. The sites calculate the sum of counts
T = r⁻¹(ε′ − (n1 + n2 + n3)) mod ρ = 18.
     Thus T is equal to the intended sum of {5, 7 and 6}.
    C. Association Rule Mining
     Among many association rule mining algorithms we choose the one which
focuses on reducing communication cost: Parallel Mining of Association
Rules [18]. In that paper the authors have proposed three algorithms for
the accomplishment of three different objectives. Count Distribution (CD)
is one of them, which aims to reduce the communication cost at the cost of
parallel redundant computation in each data site. In this subsection we
integrate our proposed privacy preserving communication technique with the
CD algorithm, making it suitable for RCD in terms of computation and
communication.
     Since frequent large itemset computation is considered the major task
in association rule mining algorithms, we focus our effort on the
accomplishment of the same task, as is the case in many other papers. The
following are the notations, major stages and actions performed in each
data site in every cycle:
     (1) Candidate generation: each site Si generates the candidate set Ck
from the large itemsets Lk-1 of the previous iteration using the Apriori
algorithm (discussed in section 2).
     (2) Count computation: Si passes over all the transactions in DBi to
compute the count of every itemset in Ck.
     (3) Transmission of counts: the counts of Ck are sent to the mixer
using the privacy preserving communication technique discussed in
subsection 3.1. Communication between the data sites and the mixer is
performed through the private channel. The value of j in the algorithm
(subsection 3.1) maps to the itemset's sequence number.
     (4) Mixer functions: the mixer adds all the encrypted counts received
from all the sites and broadcasts the result back to all sites.
     (5) Result decryption: each data site decrypts the result received
from the mixer, as stated in section 3.1, to get the sum of the counts.
     (6) Termination: since all sites perform identical operations, all of
them terminate at the same iteration and end up with the generation of the
large itemsets.
                   IV.    SECURITY ANALYSIS
     In this section we demonstrate that our proposed protocol preserves
privacy during the transmission of the counts of itemsets in the
association rule mining process. On the basis of the privacy requirements
and security definitions provided in [9, 19], the following formulation can
be addressed.
     Let us assume N ≥ 3, since privacy preservation is impossible for
fewer than three parties. VIEW(Si, N) denotes the view of the party Si
where the total number of participants is N. Similarly VIEW(M, N) denotes
the view of the mixer. Therefore, by definition, VIEW(M,0), VIEW(Si,0),
VIEW(Si,1) and VIEW(Si,2) all equal Φ. If X and Y are two random variables,
then X≈polyY means that the probability of distinguishing X and Y is at
most 1/Q(l) for all polynomials Q(l) [9]. The N parties want to find the
sum of their counts of an itemset, c1, c2 … cN. Privacy is preserved if the
following conditions are satisfied [9]:
     (a) The two random variables AN,j and BN,j are polynomially
indistinguishable (AN,j≈polyBN,j) for 1 ≤ j ≤ N and 0 ≤ R < ρ.
     (b) The two random variables CN,j and DN,j are polynomially
indistinguishable (CN,j≈polyDN,j) for n ≥ 3, 1 ≤ j ≤ n−2 and 0 ≤ R < ρ.
    Let, Si: Data site (site) of index i. N: Number of sites.                        Since    all   users have identical             values   of
DBi: Database (collection of transactions) in Si. Lk: Set of
frequent k-itemset. Ck: Set of candidate k-itemset and                                                  ,                                     …
         .                                                                                                , they are the same.

   (1) Candidate set generation: Each site Si generates a                           Theorem 1: The proposed protocol preserves privacy
complete candidate set Ck from Lk-1 which is computed in the                based on the above mentioned privacy definition.

                                                                                                       ISSN 1947-5500
                                                                   (IJCSIS) International Journal of Computer Science and Information Security,
                                                                   Vol. 8, No. 1, April 2010
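The count exchange described above, in which the example counts {5, 7, 6} are masked by the sites, added by the mixer, and recovered by every site as T = 18, can be illustrated in a few lines. This is a deliberately simplified sketch, not the paper's full scheme: it keeps only the modular-masking idea (a shared secret prime ρ chosen larger than the sum of all counts, plus a random multiple n·ρ per site) and omits the stream-cipher layer and the mixer's public-key channel; all names are illustrative.

```python
import random

def mask_count(count, rho):
    # Each site adds a fresh random multiple of the shared prime rho
    # (the role of the random n_j in step 1.2); the raw count is hidden.
    n = random.randint(1, 10**6)
    return count + n * rho

def mixer_sum(masked_values):
    # The mixer only adds the masked values; without rho it cannot
    # reduce them to the underlying counts.
    return sum(masked_values)

def recover_sum(total, rho):
    # Any site holding rho reduces mod rho; this is exact because
    # the true sum of counts is smaller than rho.
    return total % rho

rho = 101                   # shared secret prime > sum of all counts
counts = [5, 7, 6]          # the example counts from the text
masked = [mask_count(c, rho) for c in counts]
T = recover_sum(mixer_sum(masked), rho)
print(T)  # 18, the intended sum
```

Because the masks are multiples of ρ, they vanish under the final reduction, so the mixer can aggregate without learning any individual count as long as ρ stays secret from it.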

    Proof: (a) When N=1, then j=1 and AN,1 = (α, c1). Since the mixer does not know the secret parameters (ρ, µ), it cannot decrypt α. Therefore AN,1 and BN,1 are polynomially indistinguishable, and the same argument holds when N>1 and 1≤j≤N.
    (b) When n=3, j=1. Given c1 and the aggregate it receives, party S1 cannot be certain about c2. Therefore CN,1≈polyDN,1. When N>3 and 1≤j≤n-2, the same reasoning applies to the partial sums of the remaining parties' counts.
    Therefore privacy is preserved for the proposed protocol.

    Theorem 2: The protocol does not reveal the support count of one participant to either the mixer or the other participants.
    Proof: In step 1.2 of the algorithm, each site Si encrypts its secret value using private keys which are known only to the sites. Before the ciphertext is transmitted over the private channel of the mixer, it is further encrypted using the public key of the mixer in step 1.3. No one has the private key of the mixer except the mixer itself; therefore no eavesdropper can get access to the ciphertext. The mixer, on the other hand, can only decrypt the outer encryption of the double encrypted ciphertext; it cannot decrypt or read the secret value of Si. The mixer only adds all the ciphertexts together and broadcasts the result to all sites in step 2. The sum is then known to all parties; they all can decrypt it, and it is a summation of their secret values. Therefore no one can reveal any secret value, or relate it to any site.

    Theorem 3: The protocol is secure against the mixer and any other individual participant or outsider.
    Proof: Unlike regular security protocols, our proposed protocol has neither a single straightforward sender nor a single receiver. Rather, it involves encryption of different contents with different keys by multiple senders, a mixer and multiple receivers together in one complete communication. The senders send in the first step and receive in the third step. Moreover, each transaction in this protocol consists of multiple communication attempts, which makes the protocol different from, and more secure than, other protocols. Let us study the vulnerability in the following cases.
    Replay attack: If an eavesdropper captures all the communications between all sites and the mixer, he cannot learn anything significant about the secret value of an individual party, because in every communication the value of nj is chosen randomly in step 1.2 of the algorithm, which raises the degree of unpredictability of the data in the channel.
    Brute force attack: Again, due to the frequent and random change of the value of nj in each communication, a brute force attack is unrealistic.

                        V. PERFORMANCE ANALYSIS

    Yi and Zhang's [9] privacy preserving association rule mining algorithm uses a semi-trusted mixer, which is similar to our proposed model. We compare the performance of the proposed protocol with the Yi-Zhang protocol. To measure and compare the performance of these two protocols, let us assume the following parameters:
    H = average number of items in the large k-itemset.
    L = size of each entry in the list of large itemsets, storing index and count, in bytes.
    N = number of data sites.
    K = average size of each item in bytes (the number of characters, for example).
    φ = encryption ratio in step 1.2 of the proposed algorithm.
    |αi| = size of αi in step 1.2 of the proposed algorithm.
    |ε′| = size of ε′ in step 2.3 of the proposed algorithm.
    Proposed algorithm: The communication payload in each iteration is
    N*|αi|*H + |ε′|*H*N = N*φ*L*H + φ*H*N = φHN(1+L).
    In case of φ=1, the communication overhead = HN(1+L).
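The double-encryption layering argued in Theorem 2, an inner encryption under the sites' shared secret wrapped in an outer encryption for the mixer, can be sketched as below. This is an illustrative stand-in, not the paper's actual cipher: both layers here are toy keystream XORs derived with SHA-256, with `mixer_key` standing in for the mixer's public/private key pair, and all names are assumptions.

```python
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from a seed (toy stream cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, seed: bytes) -> bytes:
    # XOR with the derived keystream; applying it twice undoes it.
    ks = keystream(seed, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

mu = b"shared-site-secret"   # inner key, known only to the sites (the seed mu)
mixer_key = b"mixer-key"     # outer key, stand-in for the mixer's key pair

secret_count = (42).to_bytes(4, "big")
inner = xor_layer(secret_count, mu)     # step 1.2: site encrypts its count
outer = xor_layer(inner, mixer_key)     # step 1.3: wrapped for the mixer

at_mixer = xor_layer(outer, mixer_key)  # mixer strips only the outer layer
recovered = xor_layer(at_mixer, mu)     # only a holder of mu reaches the count
print(int.from_bytes(recovered, "big"))  # 42
```

After stripping the outer layer the mixer is left with `inner`, still ciphertext under µ, which is exactly the separation the proof relies on: the mixer can aggregate but never read an individual value.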

    Yi-Zhang algorithm [9]: Let us assume the same encryption ratio (that is, the same φ and β′) in both levels of encryption. The communication payload in each iteration is:
    N*(ciphertext sent by each site) + data broadcast by the mixer = N*φ*H*K + N*φ*H*K = 2φNHK.
    If φ=1, the communication overhead = 2NHK.
    For further comparison, let us assume L=2 (two bytes to store the two values, index and count) and K=3 (on average). The communication payloads of the proposed algorithm and of the Yi-Zhang algorithm are then 3NH and 6NH bytes respectively; the proposed algorithm thus generates only half the communication payload of the Yi-Zhang protocol.

        Table 1: Performance comparison between the Yi-Zhang and the proposed protocol.

    Measure                                    Proposed Protocol    Yi-Zhang Protocol
    Communication overhead (each iteration)          3NH                  6NH
    Number of exponential operations                  0                    4
    Key size (bits)                                  80                  1024
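A quick check of the two payload formulas with the sample values used in the text (L = 2, K = 3) reproduces the 3NH and 6NH figures of Table 1; the values chosen for N and H below are illustrative.

```python
def proposed_payload(N, H, L, phi=1):
    # N*|alpha_i|*H + |eps'|*H*N = phi*H*N*(1 + L) bytes per iteration
    return phi * H * N * (1 + L)

def yi_zhang_payload(N, H, K, phi=1):
    # N ciphertexts to the mixer plus the mixer's broadcast: 2*phi*N*H*K bytes
    return 2 * phi * N * H * K

N, H = 10, 4  # example number of sites and average itemset size
print(proposed_payload(N, H, L=2))  # 120 bytes, i.e. 3NH
print(yi_zhang_payload(N, H, K=3))  # 240 bytes, i.e. 6NH
```

With φ = 1 the ratio is exactly 2, independent of N and H, which is the halving claimed in the comparison.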
    Let us now compare the number of instructions necessary to encrypt and decrypt a message m. We compare only the homomorphic encryption involved in the Yi-Zhang and the proposed protocols. The basic encryption and decryption equations of the Yi-Zhang protocol are:
    Encryption: c = g^m * r^N mod N^2, and
    Decryption: m is recovered from c^λ mod N^2,
where m is the message, c the ciphertext, N = pq (p and q are large prime numbers), g the public key, and r a random number.
    Therefore the numbers of operations involved in encryption and decryption are:
    Exponential operations: 1+1+1+1 = 4
    Basic operations: 13
    In the case of the proposed protocol, the basic encryption and decryption equations are as stated in section 3.1, where r and n are random numbers and ρ is a prime number greater than the sum of the counts of items. For the sake of counting operations, we treat nρ and r as the same.
    Therefore the numbers of basic instructions involved in encryption and decryption are:
    Exponential operations: 0
    Basic operations: 4
    Finally, let us compare the sizes of the keys used in the two protocols. In the proposed protocol the key is the seed µ of the stream cipher, and the size of µ can be taken as a typical 80 bits. For the Yi-Zhang protocol, it is mentioned in [9] that for security the modulus N should be large enough that factoring it is infeasible; the size of N is therefore 1024 bits.
    All the performance comparisons between the Yi-Zhang and the proposed protocol are summarized in Table 1.
    Though there are no exponential operations in the proposed algorithm, it does involve some other cryptographic operations, which are efficient due to the small key size. The performance comparison therefore shows that the proposed algorithm is more efficient and straightforward, which makes it suitable for RCD.

                           VI. CONCLUSION

    The rapid development and increasing popularity of ubiquitous computing and RCD demand the deployment of a variety of lightweight applications. A lightweight algorithm which takes a step towards deploying data mining applications on RCD is proposed in this paper. All security protocols require detailed consideration of various security threats, but our proposed model avoids many of them, such as replay attacks and brute force attacks, due to the nature of the protocol itself. This is because a single communication in this protocol does not consist simply of a sender and a receiver; rather it involves multiple senders, multiple receivers and the mixer all together. All the secret parameters and keys in our proposed homomorphic encryption technique are very small in size; therefore little computation is involved in the encryption and decryption process. This feature makes the proposed algorithm more suitable for RCD. The performance analysis and the proofs of privacy and security also demonstrate the strength and appropriateness of the algorithm. This effort should therefore be considered an effective initiative towards the deployment of data mining in ubiquitous computing environments.

                           REFERENCES

[1] R. Agrawal, R. Srikant, "Privacy-preserving data mining", Proceedings of the ACM SIGMOD International Conference on Management of Data, 2000, pp. 439–450.
[2] A. Evfimievski, "Randomization in privacy preserving data mining", ACM SIGKDD Explorations Newsletter, vol. 4, issue 2, December 2002, pp. 43–48.
[3] A. Evfimievski, S. Ramakrishnan, R. Agrawal, J. Gehrke, "Privacy-preserving mining of association rules", 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM Press, 2002, pp. 217–228.


[4] H. Kargupta, S. Datta, Q. Wang, K. Sivakumar, "On the privacy preserving properties of random data perturbation techniques", 3rd International Conference on Data Mining, 2003, pp. 99–106.
[5] A.C. Yao, "How to generate and exchange secrets", 27th IEEE Symposium on Foundations of Computer Science, 1986, pp. 162–167.
[6] M. Kantarcioglu, C. Clifton, "Privacy-preserving distributed mining of association rules on horizontally partitioned data", IEEE Transactions on Knowledge and Data Engineering, vol. 16, issue 9, Sep. 2004, pp. 1026–1037.
[7] Y. Lindell, B. Pinkas, "Privacy Preserving Data Mining", Journal of Cryptology, vol. 15, no. 3, 2002.
[8] Z. Yang, S. Zhong, R. N. Wright, "Privacy-Preserving Classification of Customer Data without Loss of Accuracy", Proceedings of the Fifth SIAM International Conference on Data Mining, Newport Beach, CA, April 21-23, 2005.
[9] X. Yi, Y. Zhang, "Privacy-preserving distributed association rule mining via semi-trusted mixer", Data and Knowledge Engineering, vol. 63, no. 2, 2007.
[10] P.P. Jayaraman, A. Zaslavsky, J. Delsing, "Sensor Data Collection Using Heterogeneous Mobile Devices", IEEE International Conference on Pervasive Services, Istanbul, 15-20 July 2007, pp. 161-164.
[11] J. Vaidya, C. Clifton, M. Zhu, "Privacy Preserving Data Mining", Springer, 2006, ISBN-13: 978-0-387-25886-8.
[12] J. Han, M. Kamber, "Data Mining Concepts and Techniques", Second Edition, Elsevier Inc., 2006, ISBN-13: 978-1-55860-901-3.
[13] J. Katz, Y. Lindell, "Introduction to Modern Cryptography", Taylor & Francis Group, LLC, 2008, ISBN-13: 978-1-58488-551-1.
[14] R. Agrawal, R. Srikant, "Fast algorithms for mining association rules", Proceedings of the 20th International Conference on Very Large Data Bases, Santiago, Chile, Sept. 12-15, 1994, pp. 487–499.
[15] NBS FIPS PUB 46, Data Encryption Standard, National Bureau of Standards, US Department of Commerce, 1977.
[16] FIPS PUB 197, Advanced Encryption Standard, Federal Information Processing Standards Publications, US Department of Commerce/N.I.S.T., National Technical Information Service, 2001.
[17] S. B. Wilson, A. Menezes, "Authenticated Diffie-Hellman Key Agreement Protocols", Lecture Notes in Computer Science, Springer-Verlag Berlin Heidelberg, ISBN 978-3-540-65894-8, January 1999, pp.
[18] R. Agrawal, J.C. Shafer, "Parallel Mining of Association Rules", IEEE Transactions on Knowledge and Data Engineering, vol. 8, issue 6, Dec. 1996, pp. 962-969.
[19] W.G. Tzeng, "A secure fault-tolerant conference key agreement protocol", IEEE Transactions on Computers, vol. 51, issue 4, April 2002, pp. 373–379.
[20] P-N. Tan, M. Steinbach, V. Kumar, "Introduction to Data Mining", 1st Edition, ISBN: 0321321367, 2006.
[21] G. Myles, A. Friday, N. Davies, "Preserving privacy in environments with location-based applications", IEEE Pervasive Computing, vol. 2, issue 1, Jan-Mar 2003, pp. 56-64.
[22] J. Guo, J. P. Baugh, S. Wang, "A Group Signature Based Secure and Privacy-Preserving Vehicular Communication Framework", 2007 Mobile Networking for Vehicular Environments, May 2007, pp. 103-108.
[23] V.S. Verykios, A.K. Elmagarmid, E. Bertino, Y. Saygin, E. Dasseni, "Association rule hiding", IEEE Transactions on Knowledge and Data Engineering, vol. 16, issue 4, April 2004, pp. 434–447.
[24] B.J. Ramaiah, A. Reddy, M.K. Kumari, "Parallel Privacy Preserving Association rule mining on PC Clusters", 2009 IEEE International Advance Computing Conference (IACC 2009), March 2009, pp. 1538–1542.
[25] Y.H. Wen, J.W. Huang, M.S. Chen, "Hardware-Enhanced Association Rule Mining with Hashing and Pipelining", IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 6, June 2008.

                           AUTHORS PROFILE

Md. Golam Kaosar is a PhD student at the School of Engineering and Science, Victoria University, Australia. Before starting his PhD, he worked as an engineer at the Research Institute (RI) of King Fahd University of Petroleum and Minerals (KFUPM), Saudi Arabia. Before that he received his MS in Computer Engineering from KFUPM and his BSc in Computer Science and Engineering from Bangladesh University of Engineering and Technology (BUET), Bangladesh, in 2006 and 2001 respectively.
As a young researcher, he has a good research background. He has published a number of conference papers, including IEEE conferences, and some journal papers. His areas of research include, but are not limited to, Privacy Preserving Data Mining, Ubiquitous Computing, Security and Cryptography, Ad-hoc Sensor Networks, Mobile and Wireless Networks, and Network Protocols.


S. Malathy                  G. Sudhasadasivam                 K. Murugan                        S. Lokesh
Research Scholar            Professor, CSE Department         Lecturer, IT Dept                 Lecturer, CSE Dept
Anna University             PSG College of Technology         Hindusthan Institute of Tech      Hindusthan Institute of Tech
Coimbatore                  Coimbatore                        Coimbatore                        Coimbatore

Abstract - Mobility management and bandwidth management are two major research issues in a cellular mobile network. Mobility management consists of two basic components: location management and handoff management. Handoff is a key element in providing QoS to the users of wireless cellular networks. It is often initiated either by crossing a cell boundary or by deterioration in the quality of the signal in the current channel. In this paper, a new admission control policy for cellular mobile networks is proposed. Two important QoS parameters in cellular networks are the Call Dropping Probability (CDP) and the Handoff Dropping Probability (HDP). CDP represents the probability that a call is dropped due to a handoff failure. HDP represents the probability of a handoff failure due to insufficient available resources in the target cell. Most algorithms try to limit the HDP to some target maximum but not the CDP. In this paper, we show that when the HDP is controlled, the CDP is also controlled to a minimum extent while maintaining lower blocking rates for new calls in the system.

Index Terms— Wireless Cellular Networks, Handoff Dropping Probability, Call Dropping Probability, Resource Allocation, Prioritization Schemes.

                    1. INTRODUCTION

  Due to the increased urge to use wireless communication in a satisfactory way, a promised Quality of Service (QoS) is required to manage incoming new calls and handoff calls more efficiently. The geographical area is divided into smaller areas in the shape of hexagons. These hexagonal areas are called cells. A Base Station (BS) is located in each cell. The Mobile Terminals (MTs) within that region are served by these BSs. Before a mobile user can communicate with another mobile user in the network, a group of channels should be assigned. The cell size plays a major role in channel utilization. If the cell size is small, a user has to cross several cells during an ongoing conversation. During the ongoing conversation, the call has to be transferred from one cell to another to achieve call continuation during boundary crossing.

  Here comes the role of handoff. Transferring an active call from one cell to another without disturbing the call is called the process of handoff. Handoff is otherwise a "make before break" process. The call transfer from one cell to another may be in terms of a time slot, frequency band, or code word to a new base station [1].

  A typical cellular network is shown in figure 1. A limited frequency spectrum is allocated, but it is very successfully utilized because of the frequency reuse

concept. To avoid interference while neighboring cells are utilizing the same frequency, the group of channels assigned to one cell should be different from that of the neighboring cells. The MTSO scans the residence of the MS and assigns the channel to that cell for the call.

  If the MS is travelling while the call is in progress, the MS needs to get a new channel from the neighboring BS to continue the call without dropping. The MSs located in a cell share the available channels. The multiple access methods and the channel allocation schemes govern the sharing and the allocation of the channels in a cell, respectively.

        FIGURE 2 RESOURCE MANAGEMENT IN CELLULAR NETWORKS
        (Allocation: CAC, Handoff, Power; Multiple Access)

Call Admission Control denotes the process of admitting a fresh call or a handoff call based on the availability of channels.
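One classic admission policy of the kind just described is the guard channel scheme (GCS) cited in the literature survey: of the N channels in a cell, a few are reserved so that handoff calls are admitted whenever any channel is free, while new calls are admitted only when more than the reserved number remains. The sketch below is a generic illustration of GCS, not the policy proposed in this paper; the class name and parameter values are ours.

```python
class GuardChannelCell:
    """A cell with a fixed number of channels, some reserved for handoffs."""

    def __init__(self, total_channels: int, guard_channels: int):
        self.total = total_channels
        self.guard = guard_channels
        self.busy = 0

    def admit(self, is_handoff: bool) -> bool:
        free = self.total - self.busy
        # Handoff calls may take any free channel; new calls must leave
        # the guard channels untouched, which lowers the HDP at the cost
        # of a higher new-call blocking rate.
        threshold = 0 if is_handoff else self.guard
        if free > threshold:
            self.busy += 1
            return True
        return False

cell = GuardChannelCell(total_channels=10, guard_channels=2)
for _ in range(8):
    cell.admit(is_handoff=False)     # fill the 8 unreserved channels
print(cell.admit(is_handoff=False))  # False: only guard channels remain
print(cell.admit(is_handoff=True))   # True: a handoff may take a guard channel
```

The single `guard` parameter is the knob that trades new-call blocking against handoff dropping, which is the tension between CDP and HDP that this paper addresses.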

               FIGURE 1 CELLULAR NETWORK

The scenario of a basic cellular network is depicted in figure 1.

  The resource management in a cellular system deals with CAC, utilization of power and the channel allocation strategy. The channel allocation strategy may be fixed or dynamic. The resource allocation is shown in Figure 2.

                    II LITERATURE SURVEY

  Various handoff schemes proposed [2] are the guard channel scheme (GCS), handoff based on relative signal strength [4], handoff based on relative signal strength with threshold, handoff based on relative signal strength with hysteresis and threshold [3], and handoff based on prediction techniques [5]. When an MS moves from one cell to another, the corresponding BS hands off the MS's call to the neighbor. This process is done under the control of the MTSO. The handoff is initiated based on various parameters such as the signal strength received from the BS, the travelling speed of the MS, etc.

  A handoff method has been devised based on the kinds of state information [6] that have been defined for MSs, as well as the kinds of network entities that maintain the state information. The handoff decision may be made at the MS or in the network. Based on the decision, three types of handoff may exist, namely Network-Controlled Handoff, Mobile-Assisted Handoff, and Mobile-Controlled Handoff [7]. Handoff based on queuing is analyzed [7] for voice calls. The queue accommodates both the originating calls and the handoff requests [9]. Handoff schemes with two-level priority [10] have been proposed. How the non-real-
                                                                                                    ISSN 1947-5500
                                                             (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                       Vol. 8, No. 1, 2010
time service has to be incorporated and its effect needs to             get the service. The priority is more for the handoff calls
be taken into consideration is proposed [11]. A new two-                than the originating calls.
dimensional model for cellular mobile systems with pre-
emptive priority to real time service calls [12] is proposed.           The following assumptions are made over the calls.
In [13] the concept of prioritization of handoff calls over                    a)   The arrival pattern of the calls follows the Poisson
new calls since it is desirable to complete an ongoing call                         process.
rather than accepting a new one is employed.                                   b) The cell consists of N Channels. If free channels
                                                                                    exist, both the calls will be served. If channels are
  In [14], a situation where the handoff calls are queued                           not available, then the originating calls will be
and no new calls are handled before the handoff calls in the                        dropped.
queue is presented. By combing guard channel and queue                         c)   Priority is given to the handoff calls on based on
schemes performs better [15]. [16] developed a non-                                 the call dwell time in the cells. The priority is low
preemptive prioritization scheme for access control in                              for a longer dwell time calls than the shorter calls.
cellular networks.                                                                  The channel holding time is assumed to have
                                                                                    exponential distribution.
                                                                                New calls
  If users request connection to the base station at the same
                                                                          λO                    CHo           2   1

time, the system checks the type of origin of the call. The                                                                             N
                                                                          1st priorityhandoff calls                                   channe
handoff decision may be made by the MS or the network
                                                                          λh                            COC           2   1             ls
based on the RSS, Traffic pattern, Location management
                                                                          2nd priorityhandoff calls
etc., while handoff is made the channel assignment plays                  λh                          CHhf            2   1

an important role. The total channels in the BS can be
allocated to different types of calls. If the originating calls
and handoff calls are treated in the same way, then the
                                                                                             FIGURE 3 QUEUEING CALLS
request from both kinds are not served if there are no free
                                                                               d) Two types of Queues are assumed. The queue for

  In another scheme, Priority is given to the handoff call                          handoff calls QHC and queue for originating calls

request by reserving a minimum number of channels to the                            QOC respectively.

handoff call. If there is N number of channels available, the                  e)   If no channels are available the handoff calls are

R number of channels is reserved to the handoff calls and                           queued in QHC, whose capacity is CHC .The

the remaining (N-R) channels are shared by the handoff                              originating calls are queued in QOC, only if the

and originating call requests. The handoff call request is                          available channels at the time of arrival are less

dropped only if there are no channels available in the cells.                       than (N-R). The originating call is blocked if the

To overcome this drawback of dropping the handoff calls,                            queue is full.

our system proposes an new model of queuing scheme in                          f)   Queue is cleared if the call is completed or the

with the handoff calls and originating calls are queued to                          user moves away from the cell.

                                                                                                        ISSN 1947-5500
                                                         (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                   Vol. 8, No. 1, 2010
    g) The capacity CHC of QHC is large enough so that
                                                                        ∞            ∞
        blocking probability of the handoff call is                     ∫ e−μHt dt = ∫ ((1 −λn FTHn(t) − λn FTHh(t))dt         (1)
        neglected.                                                      0            0      λ            λ

                                                                    where FTHn(t) and FTHh(t) are actual distribution of
The channel holding time TH can be calculated by using              channel holding time for new and handoff calls. [17]
the following formula
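Under the purely illustrative assumption that both holding-time distributions in Eq. (1) are exponential, the right-hand side can be evaluated numerically; the function name and parameters below are our own, not from [17]:

```python
import math

def mean_channel_holding_time(lam_n, lam_h, mu_n, mu_h,
                              dt=1e-3, t_max=200.0):
    """Evaluate the right-hand side of Eq. (1) by Riemann summation,
    assuming (for illustration only) exponential distributions
    FTHn(t) = 1 - exp(-mu_n t) and FTHh(t) = 1 - exp(-mu_h t)."""
    lam = lam_n + lam_h                     # total arrival rate
    total = 0.0
    for k in range(int(t_max / dt)):
        t = k * dt
        f_n = 1.0 - math.exp(-mu_n * t)
        f_h = 1.0 - math.exp(-mu_h * t)
        total += (1.0 - (lam_n / lam) * f_n - (lam_h / lam) * f_h) * dt
    return total                            # equals 1/muH, the mean holding time
```

For this exponential mixture the integral also has the closed form (λn/λ)/μn + (λh/λ)/μh, which the numerical sum should approach.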

FIGURE 4 CHANNEL ALLOCATION ALGORITHM
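A minimal sketch of the admission rules described above (N channels, R of them reserved for handoff calls, with queues QHC and QOC) might look as follows; the class and method names are our own illustration, and the dwell-time-based priority of assumption (c) is simplified to FIFO queues:

```python
from collections import deque

class Cell:
    """Hedged sketch of the described channel allocation, not the
    authors' implementation: handoff calls may use any of the N
    channels, originating calls only the (N - R) shared ones."""

    def __init__(self, n_channels, n_reserved, qoc_capacity):
        self.N = n_channels
        self.R = n_reserved
        self.busy = 0
        self.qhc = deque()                  # handoff queue; CHC assumed large
        self.qoc = deque()                  # originating-call queue
        self.qoc_capacity = qoc_capacity

    def arrive_handoff(self, call_id):
        # A handoff call may take any free channel.
        if self.busy < self.N:
            self.busy += 1
            return "served"
        self.qhc.append(call_id)            # never blocked (assumption g)
        return "queued"

    def arrive_originating(self, call_id):
        # An originating call may only use the (N - R) shared channels.
        if self.busy < self.N - self.R:
            self.busy += 1
            return "served"
        if len(self.qoc) < self.qoc_capacity:
            self.qoc.append(call_id)
            return "queued"
        return "blocked"                    # queue full: call is blocked

    def release(self):
        # On call completion, give the freed channel to a waiting
        # handoff call first (higher priority), then to an originating
        # call if the shared-channel limit permits.
        if self.qhc:
            self.qhc.popleft()
            return
        if self.qoc and self.busy - 1 < self.N - self.R:
            self.qoc.popleft()
            return
        self.busy -= 1
```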

IV RESULTS

In this paper, a dynamic sharing of channels between the handoff calls and the new calls has been proposed. In the proposed scheme, when there are no free channels, the channels reserved for handoff calls of real-time traffic are shared dynamically by handoff calls of non-real-time traffic. The comparison between a normal bandwidth reservation scheme and the proposed model is simulated. It is shown that the call blocking probability as well as the

handoff dropping probability is reduced when compared to the traditional algorithms, even when the traffic is increased.

TABLE 1 COMPARISON BETWEEN THE EXISTING AND THE PROPOSED SCHEMES

Parameter                     Existing Scheme    Proposed Scheme
Channel Utilization           Full               Reduced
Traffic Management            Controlled         Controlled
Call Dropping Probability     Reduced            Reduced
Call Blocking Probability     Not Decreased      Decreased as well

The New Call Blocking Probability and the Handoff Call Dropping Probability with an increase in the call arrival rate in a cell are reduced when compared to the traditional algorithms.

RESULT 1 (graph): bandwidth utilization versus call arrival rate

The above graph shows that, by adopting the new algorithm, the bandwidth utilization is considerably increased with the increase in the call rate.

RESULT 2 (graph): call blocking probability versus call arrival rate

V CONCLUSION

In this paper, we have shown that, by integrating the concept of buffering and the dwell time of the call, the new call blocking probability and the handoff call dropping probability are considerably reduced. In future, this work can be extended to different types of calls and to integrated services like data and images.

REFERENCES

[1] S. Tekinay and B. Jabbari, "Handover and channel assignment in mobile cellular networks," IEEE Commun. Mag., vol. 29, no. 11, 1991, pp. 42-46.
[2] I. Katzela and M. Naghshineh, "Channel assignment schemes for cellular mobile telecommunication systems: A comprehensive survey," IEEE Personal Communications, pp. 10-31, June 1996.
[3] G. P. Pollini, "Trends in Handover Design," IEEE Communications Magazine, March 1996, pp. 82-90.
[4] N. Zhang and J. M. Holtzman, "Analysis of Handoff Algorithms Using Both Absolute and Relative Measurements," IEEE Trans. Vehicular Tech., vol. 45, no. 1, pp. 174-179, February 1996.
[5] S.-T. Sheu and C.-C. Wu, "Using Grey prediction theory to reduce handoff overhead in cellular communication systems," The 11th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2000), vol. 2, pp. 782-786, 2000.
[6] N. D. Tripathi, J. H. Reed, and H. F. VanLandingham, "Handoff in Cellular Systems," IEEE Personal

Commun., December 1998.
[7] Q.-A. Zeng and D. P. Agrawal, "Handoff in Wireless Mobile Networks."
[8] R. Guerin, "Queueing Blocking System with Two Arrival Streams and Guard Channels," IEEE Transactions on Communications, vol. 36, 1988, pp. 153-163.
[9] Q.-A. Zeng, K. Mukumoto and A. Fukuda, "Performance Analysis of Mobile Cellular Radio System with Priority Reservation Handoff Procedure," IEEE VTC-94, vol. 3, 1994, pp. 1829-1833.
[10] Q.-A. Zeng, K. Mukumoto and A. Fukuda, "Performance Analysis of Mobile Cellular Radio System with Two-level Priority Reservation Procedure," IEICE Transactions on Communications, vol. E80-B, no. 4, 1997, pp.
[11] D. J. Goodman, "Trends in Cellular and Cordless Communications," IEEE Communications Magazine, vol. 29, no. 6, 1991, pp. 31-40.
[12] F. N. Pavlidou, "Two-Dimensional Traffic Models for Cellular Mobile Systems," IEEE Transactions on Communications, vol. 42, no. 2/3/4, 1994, pp. 1505-1511.
[13] B. Jabbari and S. Tekinay, "Handover and Channel Assignment in Mobile Cellular Networks," IEEE Communications Magazine, vol. 30, no. 11, 1991, pp. 42-46.
[14] S. Tekinay, "A Measurement-Based Prioritization Scheme for Handovers in Mobile Cellular Networks," IEEE JSAC, vol. 10, 1992, pp. 1343-1350.
[15] J. Hou and Y. Fang, "Mobility-Based Call Admission Control Schemes for Wireless Mobile Networks," Wireless Communications and Mobile Computing, vol. 1, 2001, pp. 269-282.
[16] N. Bartolini, "Handoff and Optimal Channel Assignment in Wireless Networks," Mobile Networks and Applications, vol. 6, 2001, pp. 511-524.
[17] K. Kandhway, "Dynamic Priority Queueing of Handover Calls in Wireless Networks: An Analytical Framework."

Author Profile:

Ms. S. Malathy is currently a research scholar at Anna University Coimbatore. She has presented nearly 10 papers in national and international conferences. Her research areas include mobile networks and wireless communication.

Dr. G. Sudha Sadasivam is working as a Professor in the Department of Computer Science and Engineering at PSG College of Technology, India. Her areas of interest include distributed systems, distributed object technology, and grid and cloud computing. She has published 20 papers in refereed journals and 32 papers in national and international conferences. She has authored 3 books. She has coordinated two AICTE-RPS projects in the distributed and grid computing areas. She is also the coordinator for PSG-Yahoo Research on Grid and Cloud computing. You may contact her at

Mr. K. Murugan is currently a research scholar at Karpagam University, Coimbatore. He has a teaching experience of 15 years. He has presented various papers in national and international conferences. His research areas include mobile networks, grid computing and data mining.

Mr. S. Lokesh is currently a research scholar at Anna University Trichy. He has a teaching experience of 5 years. He has presented nearly 6 papers in national and international conferences. His research areas include mobile networks, digital image processing and signal processing.


        A New Vein Pattern-based Verification System
       Mohit Soni                         Sandesh Gupta                              M.S. Rao                          Phalguni Gupta
        DFS,                             UIET, CSJMU,                                DFS,                               IIT Kanpur,
  New Delhi, INDIA                      Kanpur, UP, INDIA                      New Delhi, INDIA                      Kanpur, UP, INDIA                                

Abstract— This paper presents an efficient human recognition system based on the vein pattern of the palma dorsa. A new absorption based technique has been proposed to collect good quality images with the help of a low cost camera and light source. The system automatically detects the region of interest from the image and does the necessary preprocessing to extract features. A Euclidean distance based matching technique has been used for making the decision. It has been tested on a data set of 1750 image samples collected from 341 individuals. The accuracy of the verification system is found to be 99.26% with a false rejection rate (FRR) of 0.03%.

Keywords- verification system; palma dorsa; region of interest; vein structure; minutiae; ridge forkings

I. INTRODUCTION

The vein pattern of the palma dorsa can be defined as a random 'mesh' of blood-carrying tubes. The veins on the back of the hand are not deeply placed, and hence they can be made visible with the help of a good image acquisition system and technique. The geometry of these veins is found to be unique and universal [14]. Hence, it can be considered a good basis for a human recognition system.

Forensic scientists have always been among the biggest beneficiaries of successful biometric systems. User authentication, identity establishment, access control and personal verification are a few avenues where forensic scientists employ biometrics. Over time, various biometric traits have been used for the above mentioned purposes; some of them have gained and lost relevance in the course of time. Therefore, constant evolution of existing traits and acceptance of new biometric systems is inevitable. The existing biometric traits, with varying capabilities, have proven successful over the years. Traits like face, ear, iris, fingerprints and signatures have dominated the world of biometrics over the years. But each of these biometric traits has its shortcomings. Ear and iris pose a problem during sample collection: not only is an expensive and highly attended system required for the iris, but it also has a high failure-to-enroll rate. In the case of ear data, it is hard to capture a non-occluded image in a real-time environment. Even the most well known face recognition systems have limitations like aging, background, etc. [2]. Fingerprints, though most reliable, still lack automation and viability, as they are also susceptible to wear and aging. Signatures are liable to forgery.

Venal patterns, on the other hand, have the potential to surpass most such problems. Apart from the size of the pattern, the basic geometry always stays the same. Unlike fingerprints, veins are located underneath the skin surface and are not prone to external manipulations. Vein patterns are also almost impossible to replicate because they lie under the skin surface [6].

The first known work in the field of venal patterns seems to be [10]. Badawi [1] has also tried to establish the uniqueness of vein patterns using the patterns from the palma dorsa. The data acquisition technique mentioned in [1] is based on a clenched fist holding a handle to fixate the hand during image capture. This method, however, has limitations with respect to orientation. Substantial work in this field has been done by Leedham and Wang [11] [12] [13]. In these, thermal imaging of the complete non-fisted hand has been done using infrared light sources. Generally, near infra-red lamps of wavelength ranging from 700 to 900 nm are used to design such a system [12]. These lamps are found to be costly. Also, infra-red light has been used to either reflect or transmit light to the desired surface [8] [11] [12] [14]. These techniques have both their advantages and disadvantages. It has been observed that the images captured through a reflection based system, as proposed in [11], would never produce consistent results owing to the excessive noise generated by unnecessary surface information. The surroundings have to be controlled at all times, and the skin color or skin abnormalities are bound to have an effect. The best results can only be expected after exposure from the near infra-red lamps, which are costly. A system for capturing images from the front of the hand has been proposed in [14]. The palm prints may interfere with the pattern of the veins in this case.

The matching technique in a biometric system is a crucial step because the accuracy of the system alone can determine its effectiveness. There exist various matching techniques for proving the individuality of a source. Badawi has used a correlation based matching algorithm and achieved excellent results. However, correlation-based techniques, though the most popular, become costly on larger databases. Wang and Leedham used a matching technique based on Hausdorff distancing, which is limited in principle by the slightest change in orientation.

This paper proposes an efficient absorption based technique for human identification through venal patterns. It makes use of a low cost sensor to acquire images. It uses a fully automated foreground segmentation technique based on active contouring. Reduced manual interference and an automatic segmentation
technique guarantee uniform segmentation of all samples, irrespective of their size and position. The paper also employs a rotation and translation invariant matching technique. It is also realized that, since the collected images are very large in size (owing to a high definition camera), slow techniques like correlation-based matching would hinder the overall efficiency. Therefore, the proposed system uses critical features of the veins to make the decision. The results, thus far, have been found to be encouraging and fulfilling.

Section 2 presents the experimental setup used to acquire images. The next section deals with the proposed venal pattern based biometric system. It tries to handle some of the critical issues such as the use of a low cost sensor for acquiring images, automatic detection of the region of interest, rotation and translation invariant matching, etc. This system has been tested on the IITK database consisting of 1750 image samples collected from 341 subjects in a controlled environment, over the period of a month. Experimental results have been analyzed in Section 3. Concluding remarks are given in the last section.

II. PROPOSED SYSTEM

Like any other biometric system, the venal pattern based system consists of four major tasks and they are (i) image acquisition, (ii) preprocessing of acquired image data, (iii) feature extraction and (iv) matching. The flow diagram of the proposed system is given in Fig. 1.

Figure 1. Flow Diagram of the Proposed System

A. Image Acquisition

Performance of this type of system always depends on the quality of data. Since venal data is always captured under a controlled environment, there is enough scope to obtain good quality data for veins. The parameters required for getting good quality data were carefully studied in making the setup and are given below:
   •    The distance between the camera and the lamp.
   •    The position of the lamp.
   •    The fixed area for the placement of the hand.
   •    The orientation of the lamp once clenched within the
   •    The focal length and the exposure time of the camera lens.

In our experiment, a simple digital SLR camera combined with an infra-red filter has been used for data acquisition. It also makes use of a low cost night vision lamp of wavelength 940 nm. The proposed set-up is a modest wooden box with a hollow rod lodged in the middle accommodating the infra-red lamp. The camera is fixed perpendicular to the light source and pre-adjusted to a fixed height of 10 inches above the light source. The camera is held on a tripod attached to the box. The robustness and the flat face of the night vision lamp provide a sturdy plinth for the subject's hand. The sensor here is kept on the opposite side of the light source, as shown in Fig. 2. This design has specific advantages. The subject has to place his palm on the plinth surface to provide the image. If the camera is not able to pick up the pattern, the attendant can immediately rectify the hand position. The image is captured only when the camera can pick up the veins.

Figure 2. Experimental Setup

The setup prepared for the proposed system is not only cost effective but also meets the requirement for good quality data acquisition. It is found that, through this camera along with the mentioned light, the veins appear black. The light source is placed behind the surface to be captured. This helps to make an ideal data scheme and standard, as all parameters can be fixed. Unlike [8], [11], [12] and [14], where infra-red light has been reflected or transmitted to the desired surface, this paper proposes an absorption-based technique to acquire images. The proposed technique provides a good quality image regardless of the skin color or any aberrations or discolorations on the surface of the hand. In this technique the veins pop out when the hand is fisted, and it becomes much easier to capture high contrast images. The time and cost of image processing can therefore be kept to a minimum. Since the veins are illuminated from behind and captured from the other side, any anomalies in the skin of the palm (including the natural palm lines) would not interfere with the pattern. The image capturing, however, would be limited by anomalies on the palma dorsa itself, like tattoos etc. On the other hand, skin color or a gradual change of it (due to diseases or sunlight, etc.) or the gain and loss of weight would not hamper the pattern collection.
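The pre-processing stage described next relies on edge information from a color gradient. As a plain-Python sketch of the max-over-bands gradient rule (Eq. (3) below), under our own naming and using central differences:

```python
def max_rgb_gradient(image):
    """Per-pixel gradient magnitude of a color image, taking the
    maximum over the R, G and B bands. `image` is a list of rows of
    (r, g, b) tuples; central differences are used in the interior and
    the border is left at zero. An illustrative sketch only, not the
    authors' implementation."""
    h, w = len(image), len(image[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = 0.0
            for band in range(3):       # R, G, B bands
                gx = (image[y][x + 1][band] - image[y][x - 1][band]) / 2.0
                gy = (image[y + 1][x][band] - image[y - 1][x][band]) / 2.0
                best = max(best, (gx * gx + gy * gy) ** 0.5)
            grad[y][x] = best           # |grad I| = max over the bands
    return grad
```

In practice a library routine (e.g. a Sobel filter applied per band) would replace the hand-written loops; the point here is only the max-over-bands combination rule.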
    Since the light illuminates the entire hand, it is a common notion that the veins in the front of the hand might interfere with the pattern at the back. However, it is crucial to note that infra-red light does not make the hand transparent. It simply illuminates the hemoglobin in the veins, which appear black. The partition of the bone between the two planes in the front and the back of the hand does not allow interference. And since the sensor always faces the dorsal surface, it is the only surface to be captured.

    The only factor due to which an inconsistency can occur during image acquisition is the size of a subject's hand, since there is no control over the size and thickness of a subject's hand in a practical scenario. Therefore, the exact distance between the object and the camera's lens can never be pre-determined or fixed. To handle this situation, a necessary arrangement has been made in the setup: the focal shift of the camera, which can be fine-tuned to the order of millimeters, ensures the relative prevalence of the desired conditions.

B. Data Pre-Processing
    The color image acquired through a camera generally contains some additional information which is not required to obtain the venal pattern. There is therefore a need to extract the region of interest from the acquired image and to convert it into a noise-free thinned image from which the venal tree can be generated. Badawi [1] considered the skin component of the image as the region of interest (ROI). Wang and Leedham [11] used anthropometric points of a hand to segregate an ROI from the acquired images. Most similar works based on ROI selection employ arbitrary and inconsistent techniques and so end up increasing manual intervention during processing [8]. The extracted region is used for further processing to obtain the venal pattern. This section presents the method followed to extract the ROI and then to obtain the venal tree. It consists of four major tasks: (i) segmentation, (ii) image enhancement and binarization, (iii) dilation and skeletonization, and (iv) venal pattern generation.

    The segmentation technique used in this paper to segregate the skin area from the acquired image selects the ROI in a systematic manner and successfully removes all manual intervention. It fuses a traditional technique based on active contouring [5] with a common cropping technique. It works on the principle of the intensity gradient: the user initializes a contour around the object so that the contour can easily detect the boundary of the object. A traditional active contour is defined as a parametric curve v(s) = [x(s), y(s)], s ∈ [0, 1], which minimizes the following energy functional:

    Econtour = ∫₀¹ [ ½ (η1 |v′(s)|² + η2 |v″(s)|²) + Eext(v(s)) ] ds                (1)

where η1 and η2 are weighing constants that control the relative importance of the elastic and bending abilities of the active contour respectively; v′(s) and v″(s) are the first and second order derivatives of v(s); and Eext is derived from the image so that it takes smaller values at the features of interest, i.e., edges, object boundaries, etc. For an image I(x, y), where (x, y) are spatial co-ordinates, a typical external energy can be defined as follows to lead the contour towards edges:

    Eext = − |∇I(x, y)|                (2)

where ∇ is the gradient operator. For color images, we estimate the intensity gradient by taking the maximum of the gradients of the R, G and B bands at every pixel:

    |∇I| = max(|∇R|, |∇G|, |∇B|)                (3)

    The gradient obtained using the above equation gives better edge information. An active contour that minimizes Econtour must satisfy the following Euler equation:

    η1 v″(s) − η2 vⁱᵛ(s) − ∇Eext = 0                (4)

where v″(s) and vⁱᵛ(s) are the second and fourth order derivatives of v(s). The above equation can also be viewed as a force balancing equation, Fint + Fext = 0, where

    Fint = η1 v″(s) − η2 vⁱᵛ(s)                (5)

    Fext = − ∇Eext                (6)

Fint, the internal force, is responsible for the stretching and bending, and Fext, the external force, attracts the contour towards the desired features in the image. The active contour deforms itself with time to fit exactly around the object. It can thus be represented as a time-varying curve:

    v(s, t) = [x(s, t), y(s, t)]                (7)

where s ∈ [0, 1] is arc length and t ∈ R⁺ is time.

    Active contouring helps the contour settle at the object boundary. It is then followed by the iterative use of a cropping tool which helps extract the object automatically and almost flawlessly [3]. It can be noted that the active contouring snake has been modified from its regular run: instead of initiating the snake from the outside in, it is run in reverse, after initiating it from the centre of the image. This initiation is always done automatically.

    The extracted ROI of the colored image is converted into a grey scale image by the technique given in [3], as shown in Fig. 3. The segmented grey scale image is enhanced using a Gaussian filtering technique and is then normalized by converting it to an image having a predefined mean and variance. The resultant image is then binarized by mean filtering. However, it may contain noises like salt and pepper, blobs or stains, etc. Median

                                                                                                       ISSN 1947-5500
filtering is used to remove salt and pepper type noises. Eventually a grey scale image has been denoised and binarized, as given in Fig. 4.

                Figure 3. Automatically Segmented Image

    The image may consist of a few edges of the vein pattern that may have been falsely eroded during filtering. These edges are reconnected by dilation, i.e., by running a disk of ascertained radius over the obtained pattern. The obtained images are then skeletonized: each vein is reduced to its central pixel line, and its thickness is reduced to 1 pixel only. A skeletonized image can hence be obtained (see Fig. 5).

                Figure 4. Enhanced and Binarized Grey Scale Image

    In order to retain only the desired components amongst the veins, all connected components are labeled and the others are discarded. The CCL (Connected Component Labeling) algorithm [6] is modified to determine all the connected components in an image. This modified algorithm detects and removes all isolated and disconnected components of size less than a specified threshold.

    Figure 5. The Binarized image can be processed to give the vein skeleton in the hand

    From the skeleton of the hand, the skeletonized veins are extracted. A vertical and a horizontal line (one pixel thick) are run through each coordinate of the image alternately. The coordinates of the first and the last image pixels encountered by the line, in both axes, are stored. These coordinates are later turned black and the venal tree is extracted. The modified connected component labeling (CCL) algorithm is executed again to remove all disconnected isolated components from the final skeleton.

C. Feature Extraction
    This section presents a technique which extracts the forkings from the skeleton image by examining the local neighborhood of each ridge pixel using a 3×3 window. It can be seen from the preprocessed image that an ROI contains some thinned lines/ridges. These ridges, representing vein patterns, can be used to extract features. Features like ridge forkings are determined by computing the number of arms originating from a pixel, denoted A. The value of A for a pixel P is given by:

    A = 0.5 ∑_{i=1}^{8} |Pi − Pi+1|,   P9 = P1                (8)

    For a pixel P, its eight neighboring pixels are scanned in an anti-clockwise direction as follows:

                P4   P3   P2
                P5   P    P1
                P6   P7   P8

    A given pixel P is termed a ridge forking for a vein pattern if the value of A for the pixel is 3 or more. This ridge forking pixel is considered a feature point which can be defined by (x, y, θ), where x and y are coordinates and θ is the orientation with respect to a reference point.

            Figure 6. Four Arms emitting from a forking point

    The proposed method for calculating A can accommodate three, four or more arms emitting from a forking point. Cases where four arms emit from a forking point are common, as shown in Fig. 6. Fig. 7 shows the final image of the extracted vein pattern with all forking points marked.

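The arm count A of Eq. (8) above can be illustrated with a short sketch. This is our own pure-Python illustration, not the authors' code; the eight binary neighbours are supplied in the anti-clockwise order P1…P8 shown in the text:

```python
def arm_count(neighbours):
    """Eq. (8): A = 0.5 * sum_{i=1..8} |P_i - P_{i+1}|, with P9 = P1.

    `neighbours` is the list [P1, ..., P8] of binary (0/1) skeleton
    pixels taken anti-clockwise around the centre pixel P.
    """
    assert len(neighbours) == 8
    # The modular index realises the wrap-around condition P9 = P1.
    return 0.5 * sum(abs(neighbours[i] - neighbours[(i + 1) % 8])
                     for i in range(8))

def is_ridge_forking(neighbours):
    """A pixel is a ridge forking when 3 or more arms emanate from it."""
    return arm_count(neighbours) >= 3
```

For example, a four-armed crossing like the one in Fig. 6 corresponds to a neighbourhood such as [1, 0, 1, 0, 1, 0, 1, 0], which gives A = 4, while a pixel in the middle of a ridge, e.g. [0, 0, 0, 1, 0, 0, 0, 1], gives A = 2 and is not a forking.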

        Figure 7. The Final Vein Pattern with all Forking Points Marked

D. Matching Strategy
    Suppose N and M are two patterns having n and m features respectively. Then the sets N and M are given by:

    N = {(x1, y1, θ1), (x2, y2, θ2), (x3, y3, θ3), …, (xn, yn, θn)}

    M = {(a1, b1, φ1), (a2, b2, φ2), (a3, b3, φ3), …, (am, bm, φm)}

where (xi, yi, θi) and (aj, bj, φj) are the corresponding features in patterns N and M respectively. For a given minutia (xi, yi, θi) in N, it first determines a minutia (aj, bj, φj) such that the distance √((xi − aj)² + (yi − bj)²) is minimum over all j, j = 1, 2, 3, …, m. Let this distance be sdi and the corresponding difference between the two directions be ddi, where ddi = |θi − φj|.

    This is done for all features in N. To avoid selecting the same feature in M for more than one minutia in N, the following procedure is used. Suppose, for the ith feature in N, one gets sdi for the jth feature in M. Then, in order to determine sdi+1, one considers all features in M which were not selected for sd1, sd2, …, sdi. Let A be a binary array of n elements satisfying

    A[i] = { 1   if (sdi ≤ t1) and (ddi ≤ t2)
           { 0   otherwise

where t1 and t2 are predefined thresholds. The threshold values t1 and t2 are necessary to compensate for the unavoidable errors made by feature extraction algorithms and to account for the small plastic distortions that cause the minutiae positions to change. These thresholds are determined by averaging the different feature shifts observed in intensive experimentation.

    The percentage of match for the pattern N having n features against the pattern M can then be computed by

    V = ( ∑_{i=1}^{n} A[i] / n ) × 100 %

    If V is more than a given threshold, then one can conclude that the two patterns are matched.

                III. EXPERIMENTAL RESULTS
    The proposed system has been tested against the IITK database to analyze its performance. The database consists of 1750 images obtained from 341 individuals under a controlled environment. Out of these 1750 images, 341 are used as query samples. A graph is plotted of the achieved accuracy against the various threshold values, as shown in Fig. 8. It is observed that the maximum accuracy of 99.26% is achieved at the threshold value, T, of 25. Graphically, it is also found in Fig. 9 that the value of FRR for which the system achieves maximum accuracy is 0.03%. Finally, the ROC curve, taken between the values of GAR and FAR, is given in Fig. 10.

                        Figure 8. Graph Accuracy

                Figure 9. Graph indicating FAR and FRR

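The matching strategy of Section D can be sketched as follows. This is an illustrative implementation under our own assumptions (a greedy nearest-feature pairing; the function name and the sample threshold values are hypothetical, not taken from the paper):

```python
import math

def match_score(N, M, t1, t2):
    """Match pattern N = [(x, y, theta), ...] against M = [(a, b, phi), ...].

    Each feature of N is paired with its nearest not-yet-used feature of M
    (so no feature of M is selected twice). The pair counts as a match when
    the spatial shift sd_i <= t1 and the angular shift dd_i <= t2.
    Returns V, the percentage of matched features of N.
    """
    used = set()
    matched = 0
    for x, y, theta in N:
        best_j = best_sd = None
        for j, (a, b, phi) in enumerate(M):
            if j in used:
                continue
            sd = math.hypot(x - a, y - b)      # Euclidean distance sd_i
            if best_sd is None or sd < best_sd:
                best_j, best_sd = j, sd
        if best_j is None:                     # all features of M used up
            break
        used.add(best_j)
        dd = abs(theta - M[best_j][2])         # direction difference dd_i
        if best_sd <= t1 and dd <= t2:
            matched += 1
    return 100.0 * matched / len(N)
```

With illustrative thresholds t1 = 2 pixels and t2 = 5 degrees, two patterns sharing one of two features yield V = 50%; the accept/reject decision is then made by comparing V against the match threshold T.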
                    Figure 10. The ROC Curve: GAR v/s FAR

                          IV. CONCLUSION
    This paper has proposed a new absorption-based vein pattern recognition system. It has a very low cost data acquisition setup compared to those used by others. The system attempts to handle issues such as the effects of rotation and translation on acquired images, while minimizing the manual intervention needed to decide on the verification of an individual. It has been tested in a controlled environment against a dataset of 1750 samples obtained from 341 subjects. The experimental results provide an excellent accuracy of 99.26% with an FRR of 0.03%. This is found to be comparable to most previous works [2] [11] [12] [13] and is achieved through a much simpler technique.

                           REFERENCES
[1]  A. Badawi, "Hand Vein Biometric Verification Prototype: A Testing Performance and Patterns Similarity," in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition, pp. 3-9 (2006).
[2]  R. de Luis-Garcia, C. Alberola-Lopez, O. Aghzout, J. Ruiz-Alzola, "Biometric Identification Systems," Signal Processing, 83, pp. 2539-2557 (2003).
[3]  R.C. Gonzalez, R.E. Woods, Digital Image Processing using MATLAB, Prentice Hall, 1st Edition, 2003.
[4]  U. Halici, L.C. Jain, A. Erol, "Introduction to Fingerprint Recognition," in Intelligent Biometric Techniques in Fingerprint and Face Recognition, pp. 3-34 (1999).
[5]  M. Kass, A. Witkin, D. Terzopoulos, "Snakes: Active Contour Models," International Journal of Computer Vision, 1/5, pp. 321-331 (1988).
[6]  I. Khan, "Vein Pattern Recognition – Biometrics Underneath the Skin," in Article 320 on (2006).
[7]  V. Khanna, P. Gupta, C.J. Hwang, "Finding Connected Components in Digital Images by Aggressive Reuse of Labels," Image and Vision Computing, 20/8, pp. 557-568 (2002).
[8]  C. Lakshmi Deepika, A. Kandaswamy, "An Algorithm for Improved Accuracy in Unimodal Biometric Systems through Fusion of Multiple Feature Sets," ICGST-GVIP Journal, 9/3, pp. 33-40 (2009).
[9]  D. Maltoni, D. Maio, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York (2003).
[10] Sang-Kyun Im, Hyung-Man Park, Young-Woo Kim, Sang-Chan Han, Soo-Won Kim, Chul-Hee Kang, "Improved Vein Pattern Extracting Algorithm and Its Implementation," Journal of the Korean Physical Society, 38/3, pp. 268-272 (2001).
[11] L. Wang, C.G. Leedham, "A Thermal Hand Vein Pattern Verification System," in Pattern Recognition and Image Analysis, Lecture Notes in Computer Science, Springer, 3687, pp. 58-65 (2005).
[12] L. Wang, C.G. Leedham, "Near- and Far-infrared Imaging for Vein Pattern Biometrics," in Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance, p. 52 (2006).
[13] L. Wang, G. Leedham, D. Siu-Yeung Cho, "Minutiae Feature Analysis for Infrared Hand Vein Pattern Biometrics," Pattern Recognition, 41/3, pp. 920-929 (2008).
[14] M. Watanabe, T. Endoh, M. Shiohara, S. Sasaki, "Palm Vein Authentication Technology and its Applications," in Proceedings of the Biometric Consortium Conference, September 2005.

                         AUTHORS PROFILE

Mohit Soni graduated from the Delhi University with an honors degree in Botany and Biotechnology. He received his Masters degree in Forensic Science from the National Institute of Criminology and Forensic Sciences, New Delhi. Thereafter he received a research fellowship from the Directorate of Forensic Sciences, New Delhi in 2006 and is currently pursuing his Doctoral degree in Biometrics and Computer Science from the Uttar Pradesh Technical University, Lucknow.

Sandesh Gupta received his Bachelors in Technology from the University Institute of Engineering and Technology, C.S.J.M. University, Kanpur in 2001. He is currently working as a lecturer in the computer science department of the same institution and is pursuing his PhD from the Uttar Pradesh Technical University, Lucknow.

M.S. Rao is a well known forensic scientist of the country and started his career in Forensic Science in the year 1975 at the Orissa Forensic Science Laboratory. He carried out extensive R&D work on Proton Induced X-Ray Emission (PIXE) in Forensic Applications during 1978-1981. He was appointed Chief Forensic Scientist to the Government of India in 2001. He was Secretary and Treasurer of the Indian Academy of Forensic Sciences from 1988 to 2000 and is now the President of the Academy. He was convener of the Forum on Forensic Science of the Indian Science Congress during 1992 and 2001. He is the Chairman of the Experts Committee on Forensic Science.

Phalguni Gupta received the Doctoral degree from the Indian Institute of Technology Kharagpur, India in 1986. Currently he is a Professor in the Department of Computer Science & Engineering, Indian Institute of Technology Kanpur (IITK), Kanpur, India. He works in the field of biometrics, data structures, sequential algorithms, parallel algorithms and on-line algorithms. He is an author of 2 books and 10 book chapters. He has published more than 200 papers in International Journals and International Conferences. He is responsible for several research projects in the area of Biometric Systems, Image Processing, Graph Theory and Network Flow.


Extending Logical Networking Concepts in Overlay
         Network-on-Chip Architectures
                                                            Omar Tayan
                       College of Computer Science and Engineering, Department of Computer Science,
                                     Taibah University, Saudi Arabia, P.O. Box 30002

   Abstract—System-on-Chip (SoC) designs, with complexity scaling driven by the effect of Moore's Law in Integrated Circuits (ICs), are required to integrate from dozens of cores today to hundreds of cores within a single chip in the near future. Furthermore, SoC designs shall impose strong requirements on the scalability, reusability and performance of the underlying interconnection system in order to satisfy the constraints of future technologies. The use of a scalable Network-on-Chip (NoC) as the underlying communications infrastructure is critical to meet such stringent future demands. This paper focuses on the state-of-the-art in NoC development trends and seeks to develop an increased understanding of how ideal regular NoC topologies, such as the hypercube, the de Bruijn graph and the Manhattan Street Network, can scale to meet the needs of regular and irregular future NoC structures with increasing numbers of core resources. The contributions of this paper are three-fold. First, the study introduces a new design framework for overlay architectures based on the success of the hypercube, de Bruijn and Manhattan Street Network topologies in NoCs, providing increased scalability for regular structures, as well as support for irregular structures. Second, the study proposes how the regular topologies may be combined to form hybrid overlay architectures on NoCs. Third, the study demonstrates how such overlay and hybrid overlay architectures can be used to extend benefits from logical topologies previously considered in optical networks for use with increased flexibility in the NoC domain.

   Keywords: Network-on-Chip, logical networks, overlay architectures, hybrid architectures.

                       I. INTRODUCTION
       Future performance requirements of networking technologies will be significantly different from current demands on performance. Consequently, ultra-fast communication network technologies such as optical networks have emerged as a high-bandwidth communication infrastructure for multi-processor interconnection architectures, and their presence as an interconnection infrastructure is beginning to emerge in the NoC literature. Device scaling trends driven by the effect of Moore's Law suggest that future SoC designs must integrate from several dozen cores to hundreds of resource cores within a single chip, thereby necessitating increased bandwidth and performance requirements. The literature evidences that SoC designs have moved away from bus-based approaches towards the acceptance of a variety of NoC approaches for interconnecting resource cores.
        NoC approaches have progressed as the widely adopted alternative to shared-bus architectures, with the ability to meet future performance requirements, since NoCs support reusability and network bandwidth scales with system growth [1-7]. This study summarizes the design challenges of future NoCs and reviews the literature of (some) emerging NoC architectures introduced to enhance on-chip communication. An argument is then presented on the scalability and performance benefits obtained in NoCs by using overlay networks of particular topologies that were previously considered as logical networks for use in optical networks.

                II. FUTURE NOC DESIGN REQUIREMENTS
        The benefits introduced by employing the NoC approach in SoC designs can be classified as improvements in structure, performance and modularity [4]. The main challenge for NoC designers will be to provide functionally correct, reliable operation of the interacting subsystem components. On-chip interconnection networks aim to minimize current SoC limitations in performance, energy consumption and synchronization. In [10, 11], the globally asynchronous and locally synchronous (GALS) synchronization paradigm was identified as a strong candidate for emerging ICs. GALS eliminates the clock skew of single-clock systems by using many different clocks in a distributed manner. Thus, the subsystem components become distributed systems that initiate data transfers autonomously with little or no global coordination [1].
        A key issue in SoC design is the trade-off between generality (i.e. the reusability of hardware, operating systems and development techniques) and performance (delay, cost and power consumption in application-specific structures) [5]. An important issue is to consider the implications of the NoC design approach on design and implementation costs. For instance, [5] emphasizes that the increasing non-recurring costs of NoC-based ICs require that the design cost of ICs be shared across applications, in which case the design methodology would support product family management.

          III. REVIEW OF ON-CHIP INTERCONNECTION TECHNOLOGIES
        Various NoC architectures have been proposed to meet future performance requirements for intra-chip communication. Essentially, an NoC architecture is the on-chip communication infrastructure consisting of the physical layer, the data link layer and the network layer. In addition, the NoC architecture may be characterized by its switching technique, routing


protocol, topology and node organization. These characteristics comprise the design space of future on-chip networks [10]. Typically, network functionalities and data transfer properties differ between on-chip and inter-chip networks, and hence the design space for future SoCs must be explored. This section reviews the literature of emerging SoC platforms, contrasting differences in the design space of each architecture.
        One particular architecture commonly used as the basis of many NoC design proposals is the 2-dimensional mesh, forming a torus or Manhattan-like topology. In [5], the NoC architecture is an m x n mesh of switches and resources. A simple 2-dimensional topology was selected for its scalability and simplistic layout. Consistent with the GALS paradigm, internal communications within each resource are synchronous, while resources operate asynchronously with respect to each other. Dally et al. [4] present a 2-dimensional folded torus topology with the motivation of minimizing the total area overhead for an on-chip network implementation. The work presented in [10] considers the development of a communications protocol for a 2-dimensional mesh topology using point-to-point crossbar interconnects, with the assumption that the sole user of the network is a programmer. Hence, the network must be able to handle the needs of the programmer and the surrounding chip environment, therefore requiring support for static and dynamic traffic. In contrast, [2] presents a comparison between a bus architecture and a generic NoC model. Principally, the work demonstrates that a NoC-based system yields significant improvements in performance compared with a bus architecture used in SoC systems.
        An interesting alternative to the 2-dimensional mesh topology is presented in the literature. For instance, in Hemani et al. [11], the nodes are organized as a honeycomb structure, whereby resources are organized as nodes of a hexagon with a local switch at the center that interconnects these resources. The proposed NoC architecture in [11] was generic; it was not tailored to a specific application domain and was required to support re-configurability at the task or process level. The work in [11] presents arguments to justify that the area and performance penalty incurred using the honeycomb architecture would be minimal.
        More recently, an interesting area of research has considered the use in NoCs of multi-processor interconnection architectures that were previously considered as logical topologies deployed in optical networks. Figure 1 illustrates a subset of the logical topologies considered here for use in NoCs. [12] provides an insight into how any self-routing logical topology may be applied in optical networks.

   [Fig. 1. A subset of multi-processor interconnection architectures: the De Bruijn graph, the hypercube and the Manhattan Street Network]

        [7, 12] considered one particular logical topology for use in light-wave physical networks because of its simple routing and control mechanism. However, a NoC platform implies that logical networks would now be required to operate under different constraints and assumptions from those considered earlier in an optical-network environment. The principle of applying a regular logical network, such as the Manhattan Street Network (MSN), for use as an NoC communications architecture is not new and has been the focus of previous studies [15, 16]. Furthermore, the work presented in [6, 13] describes the hardware emulation of a regular logical topology using Field Programmable Gate Array (FPGA) logic and a hardware emulator. From the study [6, 13], it is noted that a number of hardware and software design issues must be addressed before realizing the hardware implementation of a logical network as an NoC. In the literature, a number of comparisons were drawn with related works which have also explored the NoC implementation of similar torus-like architectures [14] and hierarchical bus-based approaches combined with a crossbar NoC architecture [10] implemented on FPGAs.
        The literature [17-20] presents the hypercube as a regular 3D architecture for SoC interconnection. Whilst several studies present arguments for the benefits of such a regular structure as an NoC, other studies focus on mitigating the disadvantages associated with the hypercube [20], and yet others emphasize the need for irregular NoC structures.
        The de Bruijn network, on the other hand, presents a stronger case, with significant performance improvements, scaling abilities and support for optimized routing techniques [21-26]. In the literature, several studies presented variations of the de Bruijn network in order to emphasize its superiority in scaling, reliability, routing, performance, power consumption and complexity [21-26], whereas other studies used the de Bruijn network as the benchmark for comparison with other NoCs, including the butterfly and Benes topologies [24] and the mesh and torus [23]. All comparative performance metrics demonstrated the superiority of the de Bruijn as a communication architecture for SoCs.
        This study focuses on the design and implementation considerations of general regular logical networks for use in NoCs, in order to extend the topological benefits and findings from mature work on logical topology deployment into the NoC domain. In particular, this study extends the work on logical topology deployment onto general physical networks, and applies an adopted and enhanced concept of overlaying logical topologies in optical networks to derive flexible regular and irregular NoC architectures. Figure 2 illustrates


the concept of using logical networks in NoCs.
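As an illustrative aside (the papers cited contain no code, and the function names here are our own), the constant node degree of 2 and the "simple routing and control mechanism" noted above for the de Bruijn graph both follow from its shift-register structure: node b1 b2 ... bn connects to b2 ... bn 0 and b2 ... bn 1, so a packet self-routes by shifting in the destination's bits. A minimal Python sketch:

```python
# Sketch of the binary de Bruijn graph B(2, n): 2^n nodes, each node
# b1 b2 ... bn links to b2 ... bn 0 and b2 ... bn 1 (out-degree 2).

def de_bruijn_edges(n):
    """Return the directed edge list of the binary de Bruijn graph B(2, n)."""
    edges = []
    for node in range(2 ** n):
        for bit in (0, 1):
            succ = ((node << 1) | bit) & (2 ** n - 1)  # drop MSB, shift in bit
            edges.append((node, succ))
    return edges

def route(src, dst, n):
    """Self-route from src to dst by shifting in dst's bits, MSB first."""
    path = [src]
    node = src
    for i in range(n - 1, -1, -1):
        node = ((node << 1) | ((dst >> i) & 1)) & (2 ** n - 1)
        path.append(node)
    return path

edges = de_bruijn_edges(3)  # the 8-node graph, as in Figure 1
print([sum(1 for u, _ in edges if u == v) for v in range(8)])  # out-degree 2 everywhere
print(route(0b000, 0b111, 3))  # reaches any destination in at most n hops
```

The switch needs no routing table: the next hop is computed from the destination address alone, which is one reason such topologies were attractive for self-routing optical networks.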

   [Fig. 2. Logical Network Implementation on NoCs: regular or hybrid irregular networks (e.g. a micro-level MSN implementation) can be physically implemented or mapped to the regular NoC structure, removing the embedding problem]

   [Fig. 3. Network Overlay concept applied to the De-Bruijn: (a) co-location of edge nodes; (b) co-location of internal nodes]

        This paper introduces a new design framework for the overlay of multi-processor interconnection architectures that supports regular and irregular core-numbers on-chip. A rich source of literature exists on the use of multi-processor interconnection architectures as regular logical networks deployed on host optical networks. The motivation here is to apply such logical networking concepts and benefits, through the use of the hypercube, de Bruijn and MSN, in the SoC domain, whilst removing the restriction of the highly coarse-granular regular structure associated with NoC topologies as NoC sizes scale. Therefore, a mechanism for applying overlay architectures to support regular and irregular scalable NoCs is introduced as follows. When considering the connectivity and node functionality of each network (see Figure 1), we find that the in-degree and out-degree for each node is similar throughout each network. Hence, from Figure 1, the de Bruijn, hypercube and MSN have a node degree of 2, 3 and 2 respectively (where the in-degree equals the out-degree). Following an overlay of the de Bruijn (as in Figure 3a), for example, we find two instances where the functionality of two nodes is 'co-located' onto a single node (e.g. at the interface between two overlay topologies). The additional functionality at the co-located nodes may be accommodated by providing additional buffers that separate network traffic destined for graph-1 from traffic destined for graph-2, then routing as normal. Different nodes in graph-1 may be co-located to yield various degrees of (comparatively granular) NoC architectures, therefore providing support for regular and irregular structures (Figure 3b). Figure 4 illustrates the new NoC architecture after co-location of the edge nodes of the de Bruijn network.¹ This paper also extends the concept of overlay networks to different network types, producing a hybrid of overlay architectures. An example in Figure 5 demonstrates one hybrid of the MSN and the de Bruijn.

   [Fig. 4. The topology produced by overlaying two de-bruijn graphs]

        The significance of this novel approach to NoC architecture design is that it allows performance-intensive tasks to be mapped onto particularly suitable (high-performance) network segments, such as graph-1 or graph-2, whilst other highly localized traffic-generating tasks are mapped onto the MSN portion, for instance. Hence, this framework also advances optimization techniques for the application-mapping of tasks onto NoCs, providing an insight into further opportunities for progress of key significance in the NoC application-mapping literature. This section has demonstrated how overlay networks and hybrid overlay networks may be applied to extend the benefits of logical networks from the optical networks domain (as evident in the mature literature on this topic) to the NoC domain, while providing support for regular and irregular structures with comparatively granular flexibility in design. Additionally, the proposed hybrid overlay design enables optimization of application-task mapping onto particular segments of the network architecture, based on the relative significance of the properties of each segment and the corresponding task constraints.

   ¹ A similar concept of overlays can also be applied to larger sizes of the de Bruijn, hypercube and MSN. However, this paper has applied the concept to one size of each logical topology.
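As a sketch only (the naming, node tags and the stand-in 4-node rings below are ours, not the paper's), the co-location mechanism described above, folding chosen nodes of one overlay graph onto nodes of another so that a single NoC node hosts both functionalities with separate per-graph buffers, might be modelled as:

```python
# Illustrative model of the overlay/co-location idea: merge two logical
# graphs into one physical NoC, folding selected graph-2 nodes onto
# graph-1 nodes. Traffic separation per graph (the 'additional buffers')
# is implied by keeping the ('g1', v) / ('g2', v) tags on every edge.

def overlay(edges1, edges2, colocate):
    """Merge two edge lists; colocate maps graph-2 node -> graph-1 node."""
    def g2(v):
        # A co-located graph-2 node is folded onto its graph-1 partner.
        return ('g1', colocate[v]) if v in colocate else ('g2', v)
    merged = [(('g1', u), ('g1', v)) for u, v in edges1]
    merged += [(g2(u), g2(v)) for u, v in edges2]
    return merged

# Two 4-node rings as stand-in overlay graphs, co-locating one node pair.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
noc = overlay(ring, ring, colocate={0: 0})
nodes = {n for e in noc for n in e}
print(len(nodes))  # 7 physical nodes instead of 8: one pair co-located
```

Varying the `colocate` map changes how many node pairs are folded together, which is the knob that yields the regular or irregular (comparatively granular) structures discussed above; the co-located node carries the combined degree of its two logical roles.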


   [Fig. 5. A hybrid overlay network that combines the de Bruijn and MSN (De-Bruijn Graph 1, MSN, De-Bruijn Graph 2)]

                 V. DISCUSSION AND CONCLUSION

        This paper has presented state-of-the-art techniques in NoC developments in order to increase understanding of how ideal regular NoC topologies, including the hypercube, de Bruijn and MSN, can scale to satisfy the demands of regular and irregular NoC structures. The significant contributions of the paper include: the introduction of a new design framework for overlay NoC architectures; a proposed design framework for combining regular topologies to form various hybrid overlay architectures that can be optimized for a particular application-workload scenario; and, finally, a demonstration of how such overlay and hybrid overlay architectures can be used to extend benefits adopted from logical networking, previously considered in optical networks, for use as on-chip network architectures.

                          REFERENCES
[1] Benini L., De Micheli G., Networks on chips: a new SoC paradigm, Computer, Volume 35, Issue 1, Jan. 2002.
[2] Zeferino C.A., Kreutz M.E., Carro L., Susin A.A., A study on communication issues for systems-on-chip, Proceedings of the 15th Symposium on Integrated Circuits and Systems Design, 9-14 Sept. 2002.
[3] Ali M., Welzl M., Zwicknagl M., Networks on Chips: Scalable interconnects for future systems on chips, Proceedings of the 4th European Conference on Circuits and Systems for Communications, 2008.
[4] Dally W.J., Towles B., Route Packets, Not Wires: On-Chip Interconnection Networks, Proceedings of DAC, June 18-22, 2001, Las Vegas, U.S.A.
[5] Kumar S., Jantsch A., Soininen J.-P., Forsell M., Millberg M., Oberg J., Tiensyrja K., Hemani A., A network on chip architecture and design methodology, Proceedings of the IEEE Computer Society Annual Symposium on VLSI, 25-26 April 2002.
[6] Oommen K., Tayan O., Harle D., Network on Chip Architecture using a Clockwork Routed Manhattan Street Network, Proceedings of the System on Chip Design and Test Conference, Loughborough, U.K., Sept. 2004.
[7] Tayan O., Exploring the Scalability of a High-Speed Networking Topology in Macro- and Micro-Networks, Ph.D. Thesis, Univ. of Strathclyde, U.K., December 2005.
[8] Pontes J., Moreira M., Soares R., Calazans N., Hermes-GLP: A GALS Network on Chip Router with Power Control Techniques, Proceedings of the IEEE Computer Society Annual Symposium on VLSI, 2008.
[9] GaoMing Du, DuoLi Zhang, YongSheng Yin, Liang Ma, LuoFeng Geng, YuKung Song, MingLun Gao, FPGA prototype design of Network on Chips, Proceedings of the 2nd International Conference on Anti-counterfeiting, Security and Identification, 2008.
[10] Heo S., Kim J., Ma A., Next Generation On-chip Communication Networks, check1.pdf.
[11] Hemani A., Jantsch A., Kumar S., Postula A., Oberg J., Millberg M., Lindqvist D., Network on a Chip: An architecture for billion transistor era, Source: axel/papers/2000/norchip-noc.pdf.
[12] Komolafe O., High Speed Optical Packet Switching over Arbitrary Physical Topologies using the Manhattan Street Network, Ph.D. Thesis, Univ. of Strathclyde, U.K., 2001.
[13] Oommen K., Harle D., Evaluation of a network on chip architecture based on the clockwork routed Manhattan street network using hardware emulation, Proceedings of the 48th Midwest Symposium on Circuits and Systems, U.S.A., August 2005.
[14] Lusala A.K., Manet P., Rousseau B., Legat J.-D., NoC Implementation in FPGA using Torus Topology, Proceedings of the International Conference on Field Programmable Logic and Applications, 2007.
[15] Tayan O., Harle D., A Manhattan Street Network Implementation for Networks on Chip, Proceedings of the First International Conference on Information and Communication Technologies from Theory to Applications, Damascus, Syria, April 2004.
[16] Tayan O., Networks-on-Chip: Challenges, Trends and Mechanisms for Enhancements, Proceedings of the Third International Conference on Information and Communication Technologies, Karachi, Pakistan, August.
[17] Li Ping Sun, El Mostapha Aboulhamid, David J.-P., Networks on chip using a reconfigurable platform, Proceedings of the International Symposium on Micro-NanoMechatronics and Human Science, 2003.
[18] Derutin J.P., Damez L., Desportes A., Lazaro Galilea J.L., Design of a Scalable Network of Communicating Soft Processors on FPGA, Proceedings of the International Workshop on Computer Architecture for Machine Perception and Sensing, 2007.
[19] Martinez Vallina F., Jachimiec N., Saniie J., NOVA interconnect for dynamically reconfigurable NoC systems, Proceedings of the International Conference on Electro/Information Technology, 2007.
[20] Damaj S., Goubier T., Blanc F., Pottier B., A Heuristic (delta D) Digraph to Interpolate between Hypercube and de Bruijn Topologies for Future On-Chip Interconnection Networks, Proceedings of the International Conference on Parallel Processing Workshops, 2009.
[21] Moussa H., Baghdadi A., Jezequel M., Binary de Bruijn on-chip network for a flexible multiprocessor LDPC decoder, Proceedings of the 45th ACM/IEEE Design Automation Conference, 2008.
[22] Yiou Chen, Jianhao Hu, Xiang Ling, De Bruijn graph based 3D Network on Chip architecture design, Proceedings of the International Conference on Communications, Circuits and Systems, 2009.
[23] Hosseinabady M., Kakoee M.R., Mathew J., Pradhan D.K., Reliable network-on-chip based on generalized de Bruijn graph, Proceedings of the IEEE International High Level Design Validation and Test Workshop.
[24] Moussa H., On-chip communication network for flexible multiprocessor turbo coding, Proceedings of the Third International Conference on Information and Communication Technologies: From Theory to Applications, 2008.
[25] Sabbaghi-Nadooshan R., Modarressi M., Sarbazi-Azad H., The 2D DBM: An Attractive alternative to the simple 2D mesh topology for on-chip networks, Proceedings of the IEEE International Conference on Computer Design, 2008.
[26] Hosseinabady M., Kakoee M.R., Mathew J., Pradhan D.K., De Bruijn Graph as a Low Latency Scalable Architecture for Energy Efficient Massive NoCs, Proceedings of the Design, Automation and Test in Europe Conference, 2008.


    Effective Bandwidth Utilization in IEEE802.11 for

S. Vijay Bhanu, Research Scholar, Anna University, Coimbatore, Tamilnadu, India, Pincode-641013
Dr. RM. Chandrasekaran, Registrar, Anna University, Trichy, Tamilnadu, India, Pincode: 620024
Dr. V. Balakrishnan, Research Co-Supervisor, Anna University, Coimbatore

Abstract: Voice over Internet Protocol (VoIP) is one of the most important applications for IEEE 802.11 wireless local area networks (WLANs). For network planners who are deploying VoIP over WLANs, one of the important issues is the VoIP capacity. VoIP bandwidth consumption over a WAN is one of the most important factors to consider when building a VoIP infrastructure. Failure to account for VoIP bandwidth requirements will severely limit the reliability of a VoIP system and place a huge burden on the WAN infrastructure. Low bandwidth utilization is the key reason for a reduced number of channel accesses in VoIP, but from the QoS point of view, free bandwidth of at least 1-5% will improve the voice quality. This proposal utilizes the maximum bandwidth by leaving 1-5% free bandwidth. A Bandwidth Data rate Moderation (BDM) algorithm has been proposed which correlates the data rate specified in IEEE 802.11b with the free bandwidth. Each time, BDM calculates the bandwidth utilization before sending a packet, to improve the performance and voice quality of VoIP. The bandwidth calculation in BDM can be done by using Erlang and VoIP bandwidth calculators. Finally, an ns-2 experimental study shows the relationship between bandwidth utilization, free bandwidth and data rate. The paper concludes that the marginal VoIP call rate is increased by the BDM algorithm.

Keywords: WLAN, VoIP, MAC Layer, Call Capacity, Wireless Network

                     I. INTRODUCTION
          VoIP services have been significantly gaining prominence over the last few years because of a number of impressive advantages over their traditional circuit-switched counterparts, including but not limited to high bandwidth efficiency, low cost, and the flexibility of using various compression strategies. In contrast to wired networks, the bandwidth of a wireless network is limited. Furthermore, a wireless channel is error-prone, and packets can be discarded in transmission due to wireless errors such as signal fading or interference. Thus, the efficiency of wireless channel access becomes a critical issue.
          Currently, the most popular WLAN standard is IEEE 802.11b, which can theoretically support data rates up to 11 Mb/s; however, this data rate is for optimal conditions [1]. On the other hand, 802.11a and 802.11g networks have data rates up to 54 Mb/s, but they are not designed to support voice transmission (because, if the APs are not distributed in the most optimal way, communication cannot be established properly) [2]; they are used for data transmission, and a network designed only for data transmission is not ideal for voice transmission. Compared to data packets, voice packets are small in size. Due to the large overhead involved in transmitting small packets, the bandwidth available for VoIP traffic is far less than the bandwidth available for data traffic. This overhead comprises transmitting the extra bytes from the various networking layers (packet headers) and the extra time (backoff and deferral time) imposed by the Distributed Coordination Function (DCF) of 802.11b.
          This paper experimentally studies the relationship between bandwidth utilization in the wireless LAN and the quality of VoIP calls transmitted over the wireless medium. On an 802.11b WLAN, frames are transmitted at up to 11 Mbps. There is a lot of overhead before and after the actual transmission of frame data, however, and the real maximum end-to-end throughput is more on the order of 5 Mbps. So, in theory, 802.11b should be able to support 50-plus simultaneous phone calls [1], but in practice it supports only 5 calls. This proposal improves bandwidth utilization in order to achieve maximum channel access and improved QoS by using the BDM algorithm. The number of channel accesses can be improved by changing the data rate frequently.
          This paper is structured as follows: Section I-A describes the basic history of the 802.11 MAC and previous related work. Section III introduces a method for predicting VoIP bandwidth utilization. Section IV presents the BDM algorithm and its functionalities, and Sections V and VI discuss the simulation topology, parameters and results. The final part contains the conclusion and future enhancements.

A. Basic Theory of 802.11 MAC
          The basic 802.11 MAC protocol is the Distributed Coordination Function (DCF), which is based on the Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) mechanism [3][4]. A mobile station (STA) is allowed to send packets after the medium is sensed idle for a duration greater than a Distributed Inter-Frame Space (DIFS). If at any time in between the medium is sensed busy, a back-off procedure is invoked. Specifically, a random value uniformly distributed between zero and the Contention Window (CW) value is chosen to set a Back-off Timer. This Back-off Timer decrements in units of slot time, provided that no medium activity is indicated during that particular slot time. The back-off procedure shall be
suspended anytime the medium is determined to be busy and will be resumed after the medium is determined to be idle for another DIFS period. The STA is allowed to start transmission as soon as the Back-off Timer reaches zero. A mobile station (STA) shall wait for an ACK after a frame is sent out. If the ACK is not successfully received within a specific ACK timeout period, the STA shall invoke the back-off and retransmission procedure. The CW value is increased exponentially from CWmin up to CWmax on each retransmission.

An additional Request To Send / Clear To Send (RTS/CTS) mechanism is defined to solve the hidden terminal problem inherent in wireless LANs. A successful exchange of RTS/CTS ensures that the channel has been reserved for the transmission from the particular sender to the particular receiver. The use of RTS/CTS is most helpful when the actual data size is large compared with the size of RTS/CTS. When the data size is comparable to the size of RTS/CTS, the overhead caused by RTS/CTS would compromise the overall performance.

II. PREVIOUS WORKS

This section reviews the existing literature related to enhancing VoIP call capacity. In the Aggregation with Fragment Retransmission (AFR) scheme of reference [5], multiple packets are aggregated into and transmitted in a single large frame. If errors happen during the transmission, only the corrupted fragments of the large frame are retransmitted. Clearly, new data and ACK frame formats are a primary concern in developing a practical AFR scheme. Optimal frame and fragment sizes are calculated using this model, and an algorithm for dividing packets into near-optimal fragments is designed. Difficulties for new formats include 1) respecting the constraints on overhead noted previously and 2) ensuring that, in an erroneous transmission, the receiver is able to retrieve the correctly transmitted fragments; this is not straightforward because the sizes of the corrupted fragments may be unknown to the receiver.

The Extended Dual Queue (EDQ) scheme provides QoS for VoIP service enhancement over 802.11 WLANs. It is a simple software-upgrade-based solution that provides QoS to real-time services such as VoIP [6]. The extended dual queue scheme operates on top of the legacy MAC. The dual queue approach implements two queues, called the VoIP queue and the data queue. These queues are implemented above the 802.11 MAC controller, i.e., in the device driver of the 802.11 network interface card (NIC), so that packet scheduling can be performed at the driver level. Packets from the higher layer or from the wire-line port (in the case of the AP) are classified for transmission into VoIP or data types. Packets in the queues are served by simple strict-priority queuing, so that the data queue is never served as long as the VoIP queue is not empty. However, the hardware upgrade is undesirable.

The cross-layer scheme of [7] [8] is named Vertical Aggregation (VA) since it works along the same flow. Its main advantage is that it enhances voice capacity using the plain IEEE 802.11 MAC protocol, adopting an additional application-aware module logically placed above the MAC layer. Reference [9] proposes two feedback-based bandwidth allocation algorithms exploiting HCCA to provide service with guaranteed bounded delays: (1) the Feedback Based Dynamic Scheduler (FBDS) and (2) the Proportional Integral (PI)-FBDS. They have been designed using classic discrete-time feedback control theory. Both algorithms, running at the HC, allocate the WLAN channel bandwidth to wireless stations hosting real-time applications, using HCCA functionalities. This allows the HC to assign transmission opportunities (TXOPs) to ACs by taking into account their specific time constraints and transmission queue levels. The reference WLAN system is made of an Access Point and a set of QoS-enabled mobile stations (QSTAs). Each QSTA has up to four queues, one for each AC in the 802.11e proposal. FBDS requires a high computational overhead at the beginning of each service period, due to the queue length estimation.

The Wireless Timed Token Protocol (WTTP) of [8] provides traffic streams with a minimum reserved rate, as required by the standard, and it accounts for two types of traffic streams simultaneously, depending on the corresponding application: constant bit rate streams, which are served according to their rate, and variable bit rate traffic streams. Additionally, WTTP shares the capacity that is not reserved for QoS traffic stream transmissions among traffic flows with no specific QoS requirements. The VAD algorithm of [10] is capable of removing white noise as well as frequency-selective noise while maintaining good speech quality.

III. CALCULATING BANDWIDTH CONSUMPTION FOR VOIP

Bandwidth is defined as the ability to transfer data (such as a VoIP telephone call) from one point to another in a fixed amount of time. The bandwidth needed for VoIP transmission depends on a few factors: the compression technology, packet overhead, network protocol used, and whether silence suppression is used. Voice streams are first encapsulated into RTP packets, which are carried by the UDP/IP protocol stack [3]. A single voice call consists of two opposite RTP/UDP flows: one originates from the AP to a wireless station, and the other flows in the opposite direction. Several techniques have been proposed for QoS provisioning in wireless networks [11] [12]; there are two primary strategies for improving IP network performance for voice: allocate more VoIP bandwidth and implement QoS.

How much bandwidth to allocate depends on:

• Packet size for voice (10 to 320 bytes of digital voice)

• CODEC and compression technique (G.711, G.729, G.723.1, G.722, proprietary)

• Header compression (RTP + UDP + IP), which is optional

• Layer 2 protocols, such as Point-to-Point Protocol (PPP), Frame Relay, and Ethernet

• Silence suppression / voice activity detection

Calculating the bandwidth for a VoIP call is not difficult once you know the method and the factors to include. The chart below, "Calculating one-way voice bandwidth," demonstrates the overhead calculation for 20- and 40-byte compressed voice (G.729) being transmitted over a Frame Relay WAN connection [13]. Twenty bytes of G.729-compressed voice is equal to 20 ms of speech.

Voice digitization and compression:
G.711: 64,000 bps, or 8,000 bytes per second
G.729: 8,000 bps, or 1,000 bytes per second

Protocol packet overhead:
IP = 20 bytes, UDP = 8 bytes, RTP = 12 bytes
Total: 40 bytes

If one packet carries the voice samples representing 20 milliseconds, then 50 such packets are required to be transmitted every second. Each packet carries an IP/UDP/RTP header overhead of 320 bits [14]. Therefore, in each second, 16,000 header bits are sent. As a general rule of thumb, it can be assumed that header information will add 16 kbps to the bandwidth requirement for voice over IP. For example, if an 8 kbps algorithm such as G.729 is used, the total bandwidth required to transmit each voice channel would be 24 kbps.

The voice transmission requirements are:

• Bandwidth requirements are reduced with compression (G.711, G.729, etc.).

• Bandwidth requirements are reduced when longer packets are used, thereby reducing overhead.

• Even though the voice compression is an 8-to-1 ratio, the bandwidth reduction is only about 3- or 4-to-1. The overhead negates some of the voice compression bandwidth savings.

• Compressing the RTP, UDP, and IP headers is most valuable when the packet also carries compressed voice.

A. Packet Overhead

To support voice over WLANs, it is important to reduce the overhead and improve the transmission efficiency over the radio link. Recently, various header compression techniques for VoIP have been proposed [14]. The RTP/UDP/IP headers can be compressed to as little as 2 bytes.

IV. PROPOSED ALGORITHM

The main reason for the extra bandwidth usage is the IP and UDP headers. VoIP sends small packets so frequently that the headers are often much larger than the data part of the packet. The proposed algorithm is based on the following two factors: 1) if a small frame is in error, then there is a high probability of error for a large frame as well; similarly, when a large frame is successful, there is a very high probability of success for small frames as well; 2) the amount of free bandwidth decreases as the number of VoIP calls increases, and likewise the call quality decreases as the number of VoIP calls increases. Free bandwidth (BWfree) corresponds to the remaining unused idle time, which can be viewed as spare or available capacity. In the BDM algorithm, the free bandwidth availability is calculated at each frame transmission.

BWfree: unused idle bandwidth, viewed as spare or available capacity
BWload: the bandwidth used for transmission of the data frames
Drate: the data rate
Incr: increment operation
Decr: decrement operation

Functions:
UpperLevel(): next upper level according to Tables 1 and 2
LowerLevel(): next lower level according to Tables 1 and 2

A. BDM ALGORITHM

Initial level:
Drate: LowerLevel()
BWfree: UpperLevel()
S: previous transmission

If (S = Success)
{
Incr Drate to next UpperLevel()
Decr BWfree to next LowerLevel()
}
Else
{
Decr Drate to next LowerLevel()
Incr BWfree to next UpperLevel()
}

According to IEEE 802.11b, only four data rates are available: 1, 2, 5.5, and 11 Mbps. When the data rate is high, the throughput increases, but at the same time the chance of error also increases [1] [15]. To avoid this situation, BDM allocates some free bandwidth to improve the QoS. This free bandwidth allocation should be kept at the minimum level; otherwise quality degradation occurs again.

TABLE 1: LEVELS OF DATA RATE

Levels     Data Rate
Level 0    1 Mbps
Level 1    2 Mbps
Level 2    5.5 Mbps
Level 3    11 Mbps
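The level-stepping rule above can be sketched in a few lines. This is a minimal illustration assuming the level tables of Tables 1 and 2; the class and attribute names (`BDM`, `drate_level`, `bwfree_level`) are our own, not from the paper.

```python
# Sketch of the BDM level-stepping rule: after each frame, the data rate and
# the reserved free bandwidth move one level in opposite directions.

DATA_RATE_LEVELS = [1.0, 2.0, 5.5, 11.0]   # Mbps (Table 1)
FREE_BW_LEVELS = [1, 2, 3, 4, 5]           # % of free bandwidth (Table 2)

class BDM:
    def __init__(self):
        self.drate_level = 0                          # lowest data rate (LowerLevel)
        self.bwfree_level = len(FREE_BW_LEVELS) - 1   # highest reserve (UpperLevel)

    def update(self, last_tx_success):
        """Apply the If/Else rule of the BDM algorithm after one transmission."""
        if last_tx_success:
            # raise the data rate one level, release some reserved bandwidth
            self.drate_level = min(self.drate_level + 1, len(DATA_RATE_LEVELS) - 1)
            self.bwfree_level = max(self.bwfree_level - 1, 0)
        else:
            # fall back to a safer rate, reserve more free bandwidth
            self.drate_level = max(self.drate_level - 1, 0)
            self.bwfree_level = min(self.bwfree_level + 1, len(FREE_BW_LEVELS) - 1)

    @property
    def data_rate(self):
        return DATA_RATE_LEVELS[self.drate_level]

    @property
    def free_bw_percent(self):
        return FREE_BW_LEVELS[self.bwfree_level]
```

For example, starting from the initial level, two successful transmissions raise the rate from 1 Mbps to 5.5 Mbps while the reserved free bandwidth falls from 5% to 3%.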

TABLE 2: LEVELS OF FREE BANDWIDTH

Levels     % of Free Bandwidth
Level 0    1
Level 1    2
Level 2    3
Level 3    4
Level 4    5

Number of calls = Correc_Fac × (RB − RBT) / Codec

where,
Correc_Fac: correction factor of real network performance
RB: real bandwidth usage
RBT: real bandwidth used for data transmission
Codec: bandwidth used by the codec to establish a call

End-to-end (phone-to-phone) delay needs to be limited. The shorter the packet creation delay, the more network delay the VoIP call can tolerate. Shorter packets cause less of a problem if a packet is lost; they require more bandwidth, however, because of the increased packet overhead (this is discussed below). Longer packets that contain more speech bytes reduce the bandwidth requirements but produce a longer construction delay and are harder to conceal if lost. By moderating the data rate and free bandwidth, BDM improves the number of VoIP calls as well as overall performance.

The simulation study is conducted using the ns-2 simulator. The simulation results will be compared with the IEEE 802.11 specifications. Any node can communicate with any other node through the base station. The number of stations is varied from 5 to 50. Wireless LAN networks are set up to provide wireless connectivity within a finite coverage area of 20 to 30 m. The network simulator is used to form an appropriate network topology under the Media Access Control (MAC) layer of IEEE 802.11b. According to the IEEE 802.11b protocol specifications [16], the parameters for the WLAN are shown in Table 3. When calculating bandwidth, one cannot assume that every channel is used all the time. Normal conversation includes a lot of silence, which often means no packets are sent at all. So even if one voice call sets up two 64 kbit RTP streams over UDP over IP over Ethernet (which adds overhead), the full bandwidth is not used at all times. Based on [2], the data rate changes with the coverage area. The 802.11b standard can cover up to 82 meters of distance; considering that only the first 48 meters are usable for voice, the other 34 meters are not usable. Therefore the cell area must be fitted to only the 48 meters from the AP in order to avoid interference, as depicted in Table 4.

TABLE 3: PARAMETERS USED FOR SIMULATION

Parameter     Value
DIFS          50 µsec
SIFS          10 µsec
Slot time     20 µsec
CWmin         32
CWmax         1023
Data rate     1, 2, 5.5, 11 Mbps
Basic rate    1 Mbps
PHY header    192 µsec
MAC header    34 bytes
ACK           248 µsec

TABLE 4: DATA RATES AND DISTANCE FOR VOIP

Data rate in Mbps    Distance in meters
54                   0-27
48                   27-29
36                   29-30
24                   30-42
18                   42-54
11                   0-48

                                                                                      Fig 1: Free bandwidth analysis
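The per-call bandwidth arithmetic of Section III and the number-of-calls formula above can be checked with a short script. The function names are illustrative; the default arguments encode the G.729 figures used in the text (8 kbps codec, 20 ms payload, 40-byte IP/UDP/RTP headers).

```python
# Worked example: one-way bandwidth for a single voice channel, and the
# "Number of calls = Correc_Fac * (RB - RBT) / Codec" formula.

def voip_call_bandwidth_bps(codec_rate_bps=8000, payload_ms=20, header_bits=320):
    """With 20 ms of voice per packet, 50 packets are sent per second;
    each carries a 40-byte (320-bit) IP/UDP/RTP header."""
    packets_per_second = 1000 // payload_ms        # 50 for 20 ms packets
    header_bps = packets_per_second * header_bits  # 16,000 bps of header overhead
    return codec_rate_bps + header_bps             # 24,000 bps total for G.729

def number_of_calls(correc_fac, rb_bps, rbt_bps, codec_bps):
    """Number of calls = Correc_Fac * (RB - RBT) / Codec."""
    return correc_fac * (rb_bps - rbt_bps) / codec_bps

print(voip_call_bandwidth_bps())   # 24000
```

With G.711 (64 kbps) the same header overhead gives 80 kbps per channel, matching the rule of thumb that headers add 16 kbps regardless of codec.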


Free bandwidth = Total bandwidth − bandwidth utilized

In this sample calculation the free bandwidth is 13.5%. Of this, 8.5% of the bandwidth can be utilized for frame transmission to achieve maximum throughput, leaving 5% to maintain QoS. Fig 1 shows the difference between bandwidth utilization and free bandwidth. When the amount of free bandwidth drops below 1%, call quality becomes unacceptable for all ongoing calls. The amount of free bandwidth is a good indicator for predicting VoIP call quality, but from the throughput point of view it should be reduced. This contradiction is resolved by the BDM algorithm.

VI. SIMULATION RESULTS

A. Throughput

The throughput (measured in bps) corresponds to the amount of data in bits that is transmitted over the channel per unit time. In Fig 2 the X-axis specifies the timeslot and the Y-axis specifies the throughput. Consider that in each time slot the channel receives 10 frames. When the time slot is 4 ms, the throughput is 4200 kbps; when it increases to 8 ms, the throughput is 7000 kbps. The graph shows the gradual improvement, and the overall throughput is increased by up to 87.5%.

Fig 2: Variations in throughput with respect to timeslot

B. Frame Loss

Frame loss is expressed as the ratio of the number of frames lost to the total number of frames transmitted. Frame loss results when frames sent are not received at the final destination. Frames that are totally lost or dropped are rare in WLANs. In Fig 3 the X-axis shows the number of frames and the Y-axis shows the frame loss percentage. The frame loss value increases up to 0.5; when it reaches a threshold value it slowly decreases. This reduction in frame loss improves VoIP performance considerably.

Fig 3: Variations in packet loss when number of frames increases

C. Delay

Investigating our third metric, average access delay for high-priority traffic, Fig 4 shows very low delays in most cases, even though the delays increase when the load gets very high. However, all the schemes have acceptable delays [6], even though EDCA in most cases incurs a longer delay than the other schemes. Even if a scheme can give low average access delay to high-priority traffic, there might still be many packets that get rather high delays. As the number of data stations increases, the delay performance of the voice stations degrades. This shows that VoIP performance is sensitive to data traffic.

In the Fig 4 graph the X-axis specifies the number of frames and the Y-axis specifies the delay in ms. When the number of frames is between 5 and 10, the delay gradually increases; after that there is no change in the delay. Nearly 30% of the delay is reduced by the BDM algorithm.

Fig 4: Variations in delay when number of frames increases

D. Bandwidth Utilization

In most cases, a normal VoIP telephone call will use up to 90 kbps. When calculating bandwidth, one cannot assume that every channel is used all the time. Normal conversation includes a lot of silence, which often means no packets are sent at all. So even if one voice call sets up two 64 kbit RTP streams over UDP over IP over Ethernet (which adds overhead), the full bandwidth is not used at all times. Note that for the graphs where the priority load is low, the utilization first increases linearly.

Fig 5: Variations in bandwidth utilization

In Fig 5 the X-axis specifies the stations and the Y-axis specifies the bandwidth utilization percentage. The curve starts incrementing from 5 stations; when the number of stations reaches 15, the bandwidth utilization percentage is beyond 80%. In the above graph, overall bandwidth utilization is increased by 30%.

E. Throughput Comparison

Here the throughput performance of the EDCA algorithm and our proposed BDM algorithm are compared [17]. By using values of maximum achievable throughput from simulation, the VoIP capacity in a WLAN can also be evaluated. The following formula is used for getting the average packets sent from the AP and all VoIP nodes in one second:

Capacity = Maximum Throughput / Data Rate

In the BDM algorithm the data rate is changed frequently in an effective manner, so the overall capacity is improved. When the data rate increases, the throughput automatically increases. Due to the low data transfer rate, the throughput seldom reaches 600 kbps in EDCA. Due to the moderate data rate, the maximum throughput in BDM is 1000 kbps, which shows a 15% improvement in overall throughput. From the graph, the EDCA algorithm can support only 7.5 calls, but the BDM algorithm can support 9.1 VoIP calls (only for 10 stations). This calculation shows that the overall VoIP call rate is increased by 16% with the BDM algorithm.

Fig 6: Variations in throughput when number of frames increases

F. Comparison of Bandwidth Utilization

VoIP bandwidth consumption over a WAN (wide area network) is one of the most important factors to consider when building a VoIP infrastructure [9]. Failure to account for VoIP bandwidth requirements will severely limit the reliability of a VoIP system and place a huge burden on the WAN infrastructure. Short packets require more bandwidth, however, because of the increased packet overhead (as discussed above). Longer packets that contain more speech bytes reduce the bandwidth requirements but produce a longer construction delay and are harder to conceal if lost.

In Fig 7 the X-axis shows the number of stations and the Y-axis shows the bandwidth utilization percentage. When the number of frames is 25, the EDCA algorithm gives only 65% bandwidth utilization, and it starts to decrease if the number of stations exceeds 40. But the BDM algorithm gives 85% bandwidth utilization, and the curve gradually increases as the number of stations increases. With the BDM algorithm, bandwidth utilization is increased by 20%.
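As a rough illustration of how maximum achievable throughput translates into call capacity, the sketch below divides MAC-layer throughput by an assumed per-call rate of 80 kbps. That per-call figure is our assumption (codec payload plus protocol and WLAN overhead), not a value from the paper, so the resulting call counts are indicative only.

```python
# Hedged sketch: estimating how many simultaneous VoIP calls a given
# MAC-layer throughput can carry. PER_CALL_BPS is an assumed per-call
# rate, not a figure taken from the simulation.

PER_CALL_BPS = 80_000  # assumption: codec payload + header and WLAN overhead

def supported_calls(max_throughput_bps, per_call_bps=PER_CALL_BPS):
    """Divide the achievable throughput by the bandwidth one call consumes."""
    return max_throughput_bps / per_call_bps

edca_calls = supported_calls(600_000)    # EDCA seldom exceeds ~600 kbps
bdm_calls = supported_calls(1_000_000)   # BDM reaches ~1000 kbps
print(round(edca_calls, 1), round(bdm_calls, 1))
```

Under this assumption the higher BDM throughput directly buys more concurrent calls; the exact counts depend on the per-call rate chosen.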

Fig 7: Variations in bandwidth utilization

VII. CONCLUSION

This proposal discusses a new association algorithm, called the bandwidth and data rate moderation (BDM) algorithm, for IEEE 802.11 stations. BDM is designed to enhance the performance of an individual station by calculating the free bandwidth availability. When several users are working simultaneously, the real bandwidth is divided among all the users. Experimentation with a number of VoIP calls and various data rates in an 802.11b WLAN shows a close relationship between wireless bandwidth utilization and call quality. When the amount of free bandwidth dropped below 1%, call quality became unacceptable for all ongoing calls. The simulation results show that a marginal improvement in VoIP call rate is realized by the BDM algorithm. Future work will consider different types of codec techniques and coverage areas to increase the bandwidth utilization.

REFERENCES

[1]. Wei Wang, Soung Chang Liew, and Victor O. K. Li, "Solutions to
[7]. F. Maguolo, F. De Pellegrini, A. Zanella and M. Zorzi, "Cross-Layer Solutions to Performance Problems in VoIP over WLANs", 14th European Signal Processing Conference (EUSIPCO 2006), Florence, Italy, September 4-8, 2006.
[8]. Claudio Cicconetti, Luciano Lenzini, Enzo Mingozzi, Giovanni Stea, "An Efficient Cross Layer Scheduler for Multimedia Traffic in Wireless Local Area Networks with IEEE 802.11e HCCA", Mobile Computing and Communications Review, Vol. 11, No. 3, Nov 2005.
[9]. Gennaro Boggia, Pietro Camarda, Luigi Alfredo Grieco, and Saverio Mascolo, "Feedback-Based Control for Providing Real-Time Services with the 802.11e MAC", IEEE/ACM Transactions on Networking, Vol. 15, No. 2, April 2007.
[10]. T. Ravichandran and Durai Samy, "Performance Enhancement on Voice using VAD Algorithm and Cepstral Analysis", Journal of Computer Science-
[11]. Antonio Grilo, Mario Macedo and Mario Nunes, "A Scheduling Algorithm for QoS Support in 802.11e Networks", IEEE Wireless Communications, June
[12]. JengFarn Lee, Wanjiun Liao, Jie-Ming Chen, and Hsiu-Hui Lee, "A Practical QoS Solution to Voice over IP in IEEE 802.11 WLANs", IEEE Communications Magazine, April 2009.
[13]. C. Mahlo, C. Hoene, A. Rostami, A. Wolisz, "Adaptive Coding and Packet Rates for TCP-Friendly VoIP Flows", International Symposium on Telecommunication, September 2005.
[14]. Ping Wang, Hai Jiang, and Weihua Zhuang, "Capacity Improvement and Analysis for Voice/Data Traffic over WLANs", IEEE Transactions on Wireless Communications, Vol. 6, No. 4, April 2007.
[15]. Sangho Shin and Henning Schulzrinne, "Measurement and Analysis of the VoIP Capacity in IEEE 802.11 WLAN", IEEE Transactions on Mobile Computing, Vol. 8, No. 9, September 2009.
[16]. Ahmed Shawish, Xiaohong Jiang, Pin-Han Ho and Susumu Horiguchi, "Wireless Access Point Voice Capacity Analysis and Enhancement Based on Clients' Spatial Distribution", IEEE Transactions on Vehicular Technology, Vol. 58, No. 5, June 2009.
[17]. Kei Igarashi, Akira Yamada and Tomoyuki Ohya, "Capacity Improvement of Wireless LAN VoIP Using Distributed Transmission Scheduling", 18th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC'07), 2007.

AUTHORS PROFILE

Mr. S. Vijay Bhanu is currently working as Lecturer (Senior Scale) in the Computer Science & Engineering Wing, Directorate of Distance Education, Annamalai University. He is a co-author of a monograph on Multimedia. He served as Wing Head, DDE, Annamalai University, Chidambaram, for nearly five years. He served as Additional Controller of
Performance Problems in VoIP Over a 802.11 Wireless LAN” IEEE                                                      Examination at Bharathiar University,
transactions on vehicular technology, Vol. 54, No. 1, January 2005.                                                Coimbatore for two years. He
[2]. Lizzie Narvaez, Jesus Perez, Carlos Garcia and Victor Chi, “Designing                                         conducted a workshop on Business
802.11 WLANs for VoIP and Data”, IJCSNS International Journal of                                                   intelligence in the year 2004. He is a
Computer Science and Network Security, Vol.7 No.7, July 2007.                      life Member in Indian Society for Technical Education.          Email:
[3]. Jong-Ok Kim, Hideki Tode and Koso Murakami, “Friendly Coexistence   
of Voice and Data Traffic in IEEE 802.11 WLANs”, IEEE Transactions on
Consumer Electronics, Vol. 52, No. 2, May 2006.
[4]. Hongqiang Zhai, Xiang Chen, and Yuguang Fang, “How Well Can the
IEEE 802.11 Wireless LAN Support Quality of Service?”, IEEE transactions
on wireless communications, Vol. 4, No. 6, November 2005.
[5]. Tianji Li, Qiang Ni, David Malone, Douglas Leith, Yang Xiao, and
Thierry Turletti, “Aggregation with Fragment Retransmission for Very High-
Speed WLANs”, IEEE/ACM Transactions on Networking, Vol. 17, No. 2,
April 2009.
[6]. S.Vijay Bhanu and Dr.RM.Chandrasekaran, “Enhancing WLAN MAC
Protocol performance using Differentiated VOIP and Data Services Strategy”,
IJCSNS International Journal of Computer Science and Network Security,
Vol.9 No.12, December 2009.

                                                                                                                  ISSN 1947-5500
                                                           (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                 Vol. 8, No. 1, April 2010
Dr. RM. Chandrasekaran is currently working as Registrar of Anna University, Tiruchirappalli and Professor at the Department of Computer Science and Engineering, Annamalai University, Annamalai Nagar, Tamilnadu, India. From 1999 to 2001 he worked as a software consultant at Etiam, Inc., California, USA. He received his Ph.D. degree in 2006 from Annamalai University, Chidambaram. He has conducted workshops and conferences in the areas of Multimedia, Business Intelligence, Analysis of Algorithms and Data Mining. He has presented and published more than 32 papers in conferences and journals and is the co-author of the book Numerical Methods with C++ Program (PHI, 2005). His research interests include Data Mining, Algorithms and Mobile Computing. He is a life member of the Computer Society of India, the Indian Society for Technical Education, the Institute of Engineers and the Indian Science Congress Association.

Dr. V. Balakrishnan, formerly Director, Anna University, Coimbatore, has 35 years of service to his credit in teaching, research, training, extension, consultancy and administration. He has guided 30 M.Phil. scholars and is currently guiding 15 Ph.D. scholars. He has guided around 1000 MBA projects. He has received 13 international and national awards, including 'Management Dhronocharaya' and 'Manithaneya Mamani'. At Anna University he introduced 26 branches in MBA. He has published around 82 articles in international and national journals. He has participated in about 100 international and national seminars. He has served on various academic bodies, such as the Academic Council, Faculty of Arts, Board of Selection, and Board of Examiners, in most of the universities in South India.


        ECG Feature Extraction Techniques - A Survey
S. Karpagachelvi, Doctoral Research Scholar, Mother Teresa Women's University, Kodaikanal, Tamilnadu, India.
Dr. M. Arthanari, Prof. & Head, Dept. of Computer Science and Engineering, Tejaa Shakthi Institute of Technology for Women, Coimbatore-641 659, Tamilnadu, India.
M. Sivakumar, Doctoral Research Scholar, Anna University – Coimbatore, Tamilnadu, India.

   Abstract—ECG feature extraction plays a significant role in diagnosing most cardiac diseases. One cardiac cycle in an ECG signal consists of the P-QRS-T waves. A feature extraction scheme determines the amplitudes and intervals in the ECG signal for subsequent analysis. The amplitude and interval values of the P-QRS-T segment determine the functioning of the heart of every human. Recently, numerous research efforts and techniques have been developed for analyzing the ECG signal. The proposed schemes were mostly based on Fuzzy Logic methods, Artificial Neural Networks (ANN), Genetic Algorithms (GA), Support Vector Machines (SVM), and other signal analysis techniques. All these techniques and algorithms have their advantages and limitations. This paper discusses various techniques and transformations proposed earlier in the literature for extracting features from an ECG signal. In addition, this paper provides a comparative study of the various methods proposed by researchers for extracting features from the ECG.

   Keywords—Artificial Neural Networks (ANN), Cardiac Cycle, ECG signal, Feature Extraction, Fuzzy Logic, Genetic Algorithm (GA), and Support Vector Machines (SVM).

                      I. INTRODUCTION

   The investigation of the ECG has been extensively used for diagnosing many cardiac diseases. The ECG is a realistic record of the direction and magnitude of the electrical activity that is generated by depolarization and re-polarization of the atria and ventricles. One cardiac cycle in an ECG signal consists of the P-QRS-T waves. Figure 1 shows a sample ECG signal. The majority of the clinically useful information in the ECG is found in the intervals and amplitudes defined by its features (characteristic wave peaks and time durations). The development of precise and rapid methods for automatic ECG feature extraction is of chief importance, particularly for the examination of long recordings [1].

   The ECG feature extraction system provides fundamental features (amplitudes and intervals) to be used in subsequent automatic analysis. In recent times, a number of techniques have been proposed to detect these features [2] [3] [4]. Previously proposed methods of ECG signal analysis were based on the time-domain method, but this is not always adequate to study all the features of ECG signals; therefore, the frequency representation of a signal is required. Deviations in the normal electrical patterns indicate various cardiac disorders. Cardiac cells, in the normal state, are electrically polarized [5].

   The ECG is essentially responsible for patient monitoring and diagnosis. The features extracted from the ECG signal play a vital role in diagnosing cardiac disease. The development of accurate and quick methods for automatic ECG feature extraction is of major importance. Therefore it is necessary that the feature extraction system performs accurately. The purpose of feature extraction is to find as few properties as possible within the ECG signal that would allow successful abnormality detection and efficient prognosis.

            Figure 1. A sample ECG signal showing the P-QRS-T wave

   In recent years, several research efforts and algorithms have been developed for the task of analyzing and classifying the ECG signal. The classification methods which have been proposed during the last decade and are under evaluation include digital signal analysis, Fuzzy Logic methods, Artificial Neural Networks, Hidden Markov Models, Genetic Algorithms, Support Vector Machines, Self-Organizing Maps, Bayesian and other methods, with each approach exhibiting its own advantages and disadvantages. This paper provides an overview of the various techniques and transformations used for extracting features from an ECG signal. In addition, the future enhancement section gives a general idea for improvement and development of feature extraction techniques.

   The remainder of this paper is structured as follows. Section 2 discusses the related work that was earlier proposed in the literature for ECG feature extraction. Section 3 gives a general idea of further improvements of the earlier approaches in ECG


feature detection, and Section 4 concludes the paper with a brief discussion.

                   II. LITERATURE REVIEW

   ECG feature extraction has been studied from early times, and many advanced techniques as well as transformations have been proposed for accurate and fast ECG feature extraction. This section discusses various techniques and transformations proposed earlier in the literature for extracting features from the ECG.

   Zhao et al. [6] proposed a feature extraction method using the wavelet transform and support vector machines. The paper presented a new approach to feature extraction for reliable heart rhythm recognition. The proposed classification system is comprised of three components: data preprocessing, feature extraction and classification of ECG signals. Two diverse feature extraction methods are applied together to achieve the feature vector of the ECG data. The wavelet transform is used to extract the coefficients of the transform as the features of each ECG segment. Concurrently, autoregressive (AR) modeling is also applied to capture the temporal structures of ECG waveforms. Finally, a support vector machine (SVM) with a Gaussian kernel is used to classify the different ECG heart rhythms. Computer simulations carried out to determine the performance of the proposed approach reached an overall accuracy of 99.68%.

   A novel approach for ECG feature extraction was put forth by Castro et al. in [7]. Their paper presents an algorithm, based on the wavelet transform, for feature extraction from an electrocardiograph (ECG) signal and recognition of abnormal heartbeats, since wavelet transforms can be localized in both the frequency and time domains. They developed a method for choosing an optimal mother wavelet from a set of orthogonal and bi-orthogonal wavelet filter banks by means of the best correlation with the ECG signal. The foremost step of their approach is to denoise (remove noise from) the ECG signal by a soft or hard threshold with a limitation of 99.99% reconstruction ability, after which each PQRST cycle is decomposed into a coefficient vector by the optimal wavelet function. The coefficients, the approximations of the last scale level and the details of all levels, are used for the ECG analysis. They divided the coefficients of each cycle into three segments that are related to the P-wave, QRS complex, and T-wave. The summation of the values from these segments provided the feature vectors of single cycles.

   Mahmoodabadi et al. in [1] described an approach for ECG feature extraction which utilizes the Daubechies wavelet transform. They developed and evaluated an electrocardiogram (ECG) feature extraction system based on the multi-resolution wavelet transform. ECG signals from Modified Lead II (MLII) were chosen for processing. A wavelet filter whose scaling function is more closely similar to the shape of the ECG signal achieved better detection. The foremost step of their approach was to de-noise the ECG signal by removing the equivalent wavelet coefficients at higher scales. Then, QRS complexes are detected, and each complex is used to trace the peaks of the individual waves, including the onsets and offsets of the P and T waves which are present in one cardiac cycle. Their experimental results revealed that their proposed approach for ECG feature extraction achieved a sensitivity of 99.18% and a positive predictivity of 98%.

   Mathematical morphology for ECG feature extraction was proposed by Tadejko and Rakowski in [8]. The primary focus of their work is to evaluate the classification performance of an automatic classifier of the electrocardiogram (ECG) for the detection of abnormal beats, with a new concept for the feature extraction stage. The obtained feature sets were based on ECG morphology and RR-intervals. The configuration adopted the well-known Kohonen self-organizing maps (SOM) for examination of signal features and clustering. A classifier was developed with SOM and learning vector quantization (LVQ) algorithms using the data from the records recommended by the ANSI/AAMI EC57 standard. In addition, their work compares two strategies for classification of annotated QRS complexes: one based on original ECG morphology features, and a proposed new approach based on preprocessed ECG morphology features. Mathematical morphology filtering is used for the preprocessing of the ECG signal.

   Sufi et al. in [9] formulated a new ECG obfuscation method for feature extraction and corruption detection. Their method uses a cross-correlation-based template matching approach to distinguish all ECG features, followed by corruption of those features with added noises. It is extremely difficult to reconstruct the obfuscated features without knowledge of the templates used for feature matching and of the noise. The three templates and three noises they considered for the P wave, QRS complex and T wave comprise the key, which is only 0.4%-0.9% of the original ECG file size. Key distribution among authorized doctors is efficient and fast because of its small size. To conclude, in experiments carried out with an extremely high number of noise combinations, the security strength of the presented method was very high.

   Saxena et al. in [10] described an approach for effective feature extraction from ECG signals. Their paper deals with an efficient composite method which has been developed for data compression, signal retrieval and feature extraction of ECG signals. After signal retrieval from the compressed data, it has been found that the network not only compresses the data, but also improves the quality of the retrieved ECG signal with respect to elimination of the high-frequency interference present in the original signal. With the implementation of an artificial neural network (ANN), the compression ratio increases as the number of ECG cycles increases. Moreover, the features extracted by amplitude, slope and duration criteria from the retrieved signal match the features of the original signal. Their experimental results at every stage are steady and consistent and prove beyond doubt that the composite method can be used for efficient data management and feature extraction of ECG signals in many real-time applications.
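The wavelet-based schemes surveyed above share a common core: decompose the signal into approximation and detail coefficients, suppress the scales dominated by noise, and reconstruct. The sketch below illustrates that idea with a hand-rolled Haar transform; it is an illustrative example only, not the authors' implementations (which use Daubechies and spline wavelets), and all function names are our own.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient arrays."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2]                    # even length
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def denoise(signal, levels=3):
    """Decompose, zero the finest-scale details (where wideband
    noise concentrates), and reconstruct.  The signal length is
    assumed to be a multiple of 2**levels."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    details[0] = np.zeros_like(details[0])  # drop finest scale
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

Real ECG pipelines threshold the detail coefficients rather than zeroing a whole scale, but the decompose/suppress/reconstruct structure is the same.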


   A feature extraction method using the Discrete Wavelet Transform (DWT) was proposed by Emran et al. in [11]. They used a discrete wavelet transform (DWT) to extract the relevant information from the ECG input data in order to perform the classification task. Their proposed work includes the following modules: data acquisition, pre-processing, beat detection, feature extraction and classification. In the feature extraction module, the DWT is designed to address the problem of non-stationary ECG signals. It is derived from a single generating function, called the mother wavelet, by translation and dilation operations. Using the DWT in feature extraction may lead to an optimal frequency resolution in all frequency ranges, as it has a varying window size: broad at lower frequencies and narrow at higher frequencies. The DWT characterization delivers features that are stable under the morphology variations of the ECG waveforms.

   Tayel and Bouridy together in [12] put forth a technique for ECG image classification by extracting features using wavelet transformation and neural networks. Features are extracted from the wavelet decomposition of the ECG image intensity. The obtained ECG features are then further processed using artificial neural networks. The features are: mean, median, maximum, minimum, range, standard deviation, variance, and mean absolute deviation. The introduced ANN was trained with the main features of 63 ECG images of different diseases. The test results showed that the classification accuracy of the introduced classifier was up to 92%. The features of the ECG signal extracted using wavelet decomposition were thus effectively utilized by the ANN in producing a classification accuracy of 92%.

   Alan and Nikola in [13] showed that chaos theory can be successfully applied to ECG feature extraction. They also discussed numerous chaos methods, including phase space and attractors, correlation dimension, spatial filling index, central tendency measure and approximate entropy. They created a new feature extraction environment called ECG chaos extractor to apply the above-mentioned chaos methods. A new semi-automatic program for ECG feature extraction has been implemented and is presented in their article. A graphical interface is used to specify the ECG files employed in the extraction procedure, as well as for method selection and saving of results. The program extracts features from ECG files.

   An algorithm was presented by Chouhan and Mehta in [14] for the detection of QRS complexes. The recognition of QRS complexes forms the basis for almost all automated ECG analysis algorithms. The presented algorithm utilizes a modified definition of the slope of the ECG signal as the feature for QRS detection. A succession of transformations of the filtered and baseline-drift-corrected ECG signal is used for mining a new modified slope feature. In the presented algorithm, a filtering procedure based on moving averages [15] provides a smooth, spike-free ECG signal, which is appropriate for slope feature extraction. The foremost step is to extract the slope feature from the filtered and drift-corrected ECG signal by processing and transforming it in such a way that the extracted feature signal is significantly enhanced in the QRS region and suppressed in the non-QRS region. The proposed method has a detection rate and positive predictivity of 98.56% and 99.18% respectively.

   Xu et al. in [16] described an algorithm using the Slope Vector Waveform (SVW) for ECG QRS complex detection and RR interval evaluation. In their proposed method, variable-stage differentiation is used to achieve the desired slope vectors for feature extraction, and non-linear amplification is used to improve the signal-to-noise ratio. The method allows for a fast and accurate search of the R location, QRS complex duration, and RR interval, and yields excellent ECG feature extraction results. In order to get QRS durations, feature extraction rules are needed.

   A method for automatic extraction of both time-interval and morphological features from the electrocardiogram (ECG), to classify ECGs into normal and arrhythmic, was described by Alexakis et al. in [17]. The method utilized the combination of artificial neural networks (ANN) and Linear Discriminant Analysis (LDA) techniques for feature extraction. Five ECG features, namely RR, RTc, T wave amplitude, T wave skewness, and T wave kurtosis, were used in their method. These features are obtained with the assistance of automatic algorithms. The onset and end of the T wave were detected using the tangent method. The three feature combinations used had very similar performance when considering the average performance metrics.

   A modified combined wavelet transform technique was developed by Saxena et al. in [18]. The technique has been developed to analyze multi-lead electrocardiogram signals for cardiac disease diagnostics. Two wavelets have been used, i.e. a quadratic spline wavelet (QSWT) for QRS detection and the Daubechies six-coefficient (DU6) wavelet for P and T detection. A procedure has been evolved using electrocardiogram parameters with a point scoring system for the diagnosis of various cardiac diseases. The consistency and reliability of the identified and measured parameters were confirmed when both diagnostic criteria gave the same results. Table 1 shows the comparison of different ECG signal feature extraction techniques.

   A robust ECG feature extraction scheme was put forth by Olvera in [19]. The proposed method utilizes a matched filter to detect different signal features in a human heart electrocardiogram signal. The ST segment, which is a precursor of possible cardiac problems, was more difficult to extract using the matched filter due to noise and amplitude variability. By improving on the methods used, i.e. using a different form of the matched filter and better threshold detection, the matched filter ECG feature extraction could be made more successful. The detection of different features in the ECG waveform was much harder than anticipated, but this was not due to the implementation of the matched filter; the more complex part was creating the detection method that isolates the feature of interest in each ECG signal.
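The matched-filter idea in [19] boils down to sliding a feature template across the signal and thresholding a similarity score. A minimal sketch of that idea using normalized cross-correlation follows; the function name, template shape and threshold value are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np

def matched_filter_detect(signal, template, threshold=0.8):
    """Slide `template` over `signal` and return the sample indices
    where the normalized cross-correlation exceeds `threshold`."""
    t = np.asarray(template, dtype=float)
    t = t - t.mean()
    t = t / (np.linalg.norm(t) + 1e-12)   # unit-energy template
    n = len(template)
    peaks = []
    for i in range(len(signal) - n + 1):
        w = np.asarray(signal[i:i + n], dtype=float)
        w = w - w.mean()
        score = float(np.dot(w, t) / (np.linalg.norm(w) + 1e-12))
        if score > threshold:
            peaks.append(i)
    return peaks
```

A higher threshold trades missed detections for fewer false alarms, which mirrors the threshold-detection difficulty the author reports for the low-amplitude ST segment.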


   Jen et al. in [20] formulated an approach using neural networks for determining the features of the ECG signal. They presented an integrated system for ECG diagnosis, comprising a cepstrum coefficient method for feature extraction from long-term ECG signals and artificial neural network (ANN) models for classification. Utilizing the proposed method, one can identify the characteristics hidden inside an ECG signal and then classify the signal as well as diagnose abnormalities. To explore the performance of the proposed method, various types of ECG data from the MIT/BIH database were used for verification. The experimental results showed that the accuracy in diagnosing cardiac disease was above 97.5%. In addition, the proposed method successfully extracted the corresponding feature vectors, distinguished the differences and classified ECG signals.

   Correlation analysis for abnormal ECG signal feature extraction was explained by Ramli and Ahmad in [21]. Their work investigated a technique to extract the important features from 12-lead system electrocardiogram (ECG) signals. They chose lead II for their entire analysis due to its representative characteristics for identifying common heart diseases. The analysis technique chosen is cross-correlation analysis, which measures the similarity between two signals and extracts the information present in them. Their test results suggested that the proposed technique could effectively extract features which differentiate between the types of heart diseases analyzed, as well as the normal heart signal.

   Ubeyli et al. in [22] described an approach for feature extraction from the ECG signal. Automated diagnostic systems employing dissimilar and combined features for electrocardiogram (ECG) signals were developed and analyzed, and their accuracies were determined. The classification accuracies of a mixture of experts (ME) trained on composite features and a modified mixture of experts (MME) trained on diverse features were also compared in their work. The inputs of these automated diagnostic systems were composed of diverse or composite features, chosen based on the network structures. The accuracy rates achieved by their proposed approach were higher than those of the ME trained on composite features.

   Fatemian et al. [25] proposed an approach for ECG feature extraction. They suggested a new wavelet-based framework for automatic analysis of the single-lead electrocardiogram (ECG) for application in human recognition. Their system utilizes a robust preprocessing stage which enables it to handle noise and outliers, so that it can be applied directly to the raw ECG signal. In addition, the proposed system is capable of managing ECGs regardless of the heart rate (HR), and it outperformed other conventional methods of ECG feature extraction.

                   III. FUTURE ENHANCEMENT

   The electrocardiogram (ECG) is a noninvasive record of the variation of the bio-potential signal of the human heartbeat. ECG detection, which shows the condition of the heart and the cardiovascular system, is essential to enhance the patient's quality of life and to provide appropriate treatment. ECG features can be extracted in the time domain [23] or in the frequency domain [24]. The features extracted from the ECG signal play a vital role in diagnosing cardiac disease. The development of accurate and quick methods for automatic ECG feature extraction is of major importance. Some of the feature extraction methods implemented in previous research include the Discrete Wavelet Transform, the Karhunen-Loeve Transform, the Hermitian basis and other methods. Every method has its own advantages and limitations. Future work will primarily focus on feature extraction from an ECG signal using more statistical data. In addition, future enhancements will look at utilizing different transformation techniques that provide higher accuracy in feature extraction. The parameters that must be considered while developing an algorithm for feature extraction of an ECG signal are the simplicity of the algorithm and its accuracy in providing the best results in feature extraction.

 Table I. Comparison of different feature extraction techniques from an ECG
     signal, where H, M and L denote High, Medium and Low respectively

       Approach                 Simplicity   Accuracy   Predictivity
       Zhao et al.                  H            H           H
       Mahmoodabadi et al.          M            H           H
       Tadejko and Rakowski         L            M           M
       Tayel and Bouridy            M            M           H
       Jen et al.                   H            H           H
       Alexakis et al.              H            M           M
       Ramli and Ahmad              M            M           M
       Xu et al.                    M            H           H
       Olvera                       H            M           M
       Emran et al.                 H            M           L
                                                                                                     IV. CONCLUSION
renders making presumptions on the individual's stress level
unnecessary. The substantial reduction of the template gallery                The examination of the ECG has been comprehensively
size decreases the storage requirements of the system                      used for diagnosing many cardiac diseases. Various
appreciably. Additionally, the categorization process is                   techniques and transformations have been proposed earlier in
speeded up by eliminating the need for dimensionality                      literature for extracting feature from ECG. This proposed
reduction techniques such as PCA or LDA. Their experimental                paper provides an over view of various ECG feature extraction
results revealed the fact that the proposed technique out                  techniques and algorithms proposed in literature. The feature
                         IV. CONCLUSION

   The examination of the ECG has been comprehensively used for diagnosing
many cardiac diseases. Various techniques and transformations have been
proposed in the literature for extracting features from the ECG. This paper
provides an overview of the ECG feature extraction techniques and algorithms
proposed in the literature. The feature extraction technique or algorithm
developed for the ECG must be highly accurate and should ensure fast
extraction of features from the ECG signal. This paper also presents a
comparative table evaluating the performance of the different algorithms
proposed for ECG signal feature extraction. Future work will mainly
concentrate on developing an algorithm for accurate and fast feature
extraction; moreover, additional statistical data will be utilized for
evaluating the performance of an algorithm in ECG feature detection.
Improving the accuracy of diagnosing cardiac disease as early as possible is
necessary in patient monitoring systems, so our future work also aims at
improvements in diagnosing cardiac disease.

                          REFERENCES
[1]  S. Z. Mahmoodabadi, A. Ahmadian, and M. D. Abolhasani, "ECG Feature
     Extraction using Daubechies Wavelets," Proceedings of the Fifth IASTED
     International Conference on Visualization, Imaging and Image Processing,
     pp. 343-348, 2005.
[2]  Juan Pablo Martínez, Rute Almeida, Salvador Olmos, Ana Paula Rocha, and
     Pablo Laguna, "A Wavelet-Based ECG Delineator: Evaluation on Standard
     Databases," IEEE Transactions on Biomedical Engineering, vol. 51, no. 4,
     pp. 570-581, 2004.
[3]  Krishna Prasad and J. S. Sahambi, "Classification of ECG Arrhythmias
     using Multi-Resolution Analysis and Neural Networks," IEEE Transactions
     on Biomedical Engineering, vol. 1, pp. 227-231, 2003.
[4]  Cuiwei Li, Chongxun Zheng, and Changfeng Tai, "Detection of ECG
     Characteristic Points using Wavelet Transforms," IEEE Transactions on
     Biomedical Engineering, vol. 42, no. 1, pp. 21-28, 1995.
[5]  C. Saritha, V. Sukanya, and Y. Narasimha Murthy, "ECG Signal Analysis
     Using Wavelet Transforms," Bulgarian Journal of Physics, vol. 35,
     pp. 68-77, 2008.
[6]  Qibin Zhao and Liqing Zhang, "ECG Feature Extraction and Classification
     Using Wavelet Transform and Support Vector Machines," International
     Conference on Neural Networks and Brain, ICNN&B '05, vol. 2,
     pp. 1089-1092, 2005.
[7]  B. Castro, D. Kogan, and A. B. Geva, "ECG feature extraction using
     optimal mother wavelet," The 21st IEEE Convention of the Electrical and
     Electronic Engineers in Israel, pp. 346-350, 2000.
[8]  P. Tadejko and W. Rakowski, "Mathematical Morphology Based ECG Feature
     Extraction for the Purpose of Heartbeat Classification," 6th
     International Conference on Computer Information Systems and Industrial
     Management Applications, CISIM '07, pp. 322-327, 2007.
[9]  F. Sufi, S. Mahmoud, and I. Khalil, "A new ECG obfuscation method: A
     joint feature extraction & corruption approach," International
     Conference on Information Technology and Applications in Biomedicine,
     ITAB 2008, pp. 334-337, May 2008.
[10] S. C. Saxena, A. Sharma, and S. C. Chaudhary, "Data compression and
     feature extraction of ECG signals," International Journal of Systems
     Science, vol. 28, no. 5, pp. 483-498, 1997.
[11] Emran M. Tamil, Nor Hafeezah Kamarudin, Rosli Salleh, M. Yamani Idna
     Idris, Noorzaily M. Noor, and Azmi Mohd Tamil, "Heartbeat
     Electrocardiogram (ECG) Signal Feature Extraction Using Discrete Wavelet
     Transforms (DWT)."
[12] Mazhar B. Tayel and Mohamed E. El-Bouridy, "ECG Images Classification
     Using Feature Extraction Based On Wavelet Transformation And Neural
     Network," ICGST International Conference on AIML, June 2006.
[13] Alan Jovic and Nikola Bogunovic, "Feature Extraction for ECG Time-Series
     Mining based on Chaos Theory," Proceedings of the 29th International
     Conference on Information Technology Interfaces, 2007.
[14] V. S. Chouhan and S. S. Mehta, "Detection of QRS Complexes in 12-lead
     ECG using Adaptive Quantized Threshold," IJCSNS International Journal of
     Computer Science and Network Security, vol. 8, no. 1, 2008.
[15] V. S. Chouhan and S. S. Mehta, "Total Removal of Baseline Drift from ECG
     Signal," Proceedings of the International Conference on Computing:
     Theory and Applications, ICTTA '07, pp. 512-515, ISI, March 2007.
[16] Xiaomin Xu and Ying Liu, "ECG QRS Complex Detection Using Slope Vector
     Waveform (SVW) Algorithm," Proceedings of the 26th Annual International
     Conference of the IEEE EMBS, pp. 3597-3600, 2004.
[17] C. Alexakis, H. O. Nyongesa, R. Saatchi, N. D. Harris, C. Davies,
     C. Emery, R. H. Ireland, and S. R. Heller, "Feature Extraction and
     Classification of Electrocardiogram (ECG) Signals Related to
     Hypoglycaemia," Conference on Computers in Cardiology, pp. 537-540,
     IEEE, 2003.
[18] S. C. Saxena, V. Kumar, and S. T. Hamde, "Feature extraction from ECG
     signals using wavelet transforms for disease diagnostics," International
     Journal of Systems Science, vol. 33, no. 13, pp. 1073-1085, 2002.
[19] Felipe E. Olvera, "Electrocardiogram Waveform Feature Extraction Using
     the Matched Filter," 2006.
[20] Kuo-Kuang Jen and Yean-Ren Hwang, "ECG Feature Extraction and
     Classification Using Cepstrum and Neural Networks," Journal of Medical
     and Biological Engineering, vol. 28, no. 1, 2008.
[21] A. B. Ramli and P. A. Ahmad, "Correlation analysis for abnormal ECG
     signal features extraction," 4th National Conference on
     Telecommunication Technology, NCTT 2003 Proceedings, pp. 232-237, 2003.
[22] Elif Derya Ubeyli, "Feature extraction for analysis of ECG signals,"
     30th Annual International Conference of the IEEE Engineering in Medicine
     and Biology Society, EMBS 2008, pp. 1080-1083, 2008.
[23] Y. H. Hu, S. Palreddy, and W. Tompkins, "A Patient Adaptable ECG Beat
     Classifier Using A Mixture Of Experts Approach," IEEE Transactions on
     Biomedical Engineering, vol. 44, pp. 891-900, 1997.
[24] Costas Papaloukas, Dimitrios I. Fotiadis, Aristidis Likas, and Lampros
     K. Michalis, "Automated Methods for Ischemia Detection in Long Duration
     ECGs," 2003.
[25] S. Z. Fatemian and D. Hatzinakos, "A new ECG feature extractor for
     biometric recognition," 16th International Conference on Digital Signal
     Processing, pp. 1-6, 2009.

                         AUTHORS PROFILE

Karpagachelvi S. received the BSc degree in Physics from Bharathiar
University in 1993 and a Masters in Computer Applications from Madras
University in 1996. She has 12 years of teaching experience and is currently
a PhD student in the Department of Computer Science at Mother Teresa
University.

Dr. M. Arthanari obtained a Doctorate in Mathematics from Madras University
in 1981. He has 35 years of teaching experience and 25 years of research
experience, and holds a patent in Computer Science approved by the Govt. of
India.

Sivakumar M. has more than 10 years of experience in the software industry,
including Oracle Corporation. He received his Bachelor degree in Physics and
Masters in Computer Applications from Bharathiar University, India. He holds
a patent for an invention in embedded technology, and is certified by various
professional bodies: ITIL, IBM Rational ClearCase Administrator, OCP (Oracle
Certified Professional 10g) and ISTQB.


             Implementation of the Six Channel Redundancy
           to achieve fault tolerance in testing of satellites

H S Aravinda, Dept of ECE, REVA ITM, Bangalore-64, Karnataka, India
Dr H D Maheshappa, Director & Principal, EPCET, Bangalore-40, Karnataka, India
Dr Ranjan Moodithaya, Head, KTM Division, NAL, Bangalore-17, Karnataka, India

Abstract:- This paper implements six channel redundancy to achieve fault
tolerance in the acoustic testing of satellites. We focus mainly on achieving
fault tolerance. An immediate application is microphone data acquisition and
analysis at the Acoustic Test Facility (ATF) centre, National Aerospace
Laboratories, which has an 1100 cubic meter reverberation chamber in which a
maximum sound pressure level of 157 dB is generated. Six channel redundancy
software with fault tolerant operation is devised and developed. The data are
applied to a program written in the C language, which is run under Code
Composer Studio. This is tested on the TMS320C6727 DSP Pro Audio Development
Kit (PADK).

      Key words: Fault Tolerance, Redundancy, Acoustics.

                      I. INTRODUCTION

The Acoustic Test Facility is a national facility for acoustic environmental
qualification of satellites, launch vehicle stages and their subsystems for
ISRO [1]. The ATF has a reverberation chamber (RC) for simulating the
acoustic environment experienced by spacecraft and launch vehicles during
launch [2]. The RC has a diffused, uniform sound pressure level distribution,
and its wall surface ensures reflectance of 99% of the sound energy. One such
facility is shown in Fig.1.

    Fig.1. View of the Reverberation Chamber

The Indian Space Research Organisation launches a number of satellites for
applications in communication [5], remote sensing, meteorology, etc.
Powerful launch vehicles are used to accelerate the satellite through the
earth's atmosphere and to make it an artificial earth satellite. The launch
vehicles [6] generate high levels of sound during lift-off and
trans-atmospheric acceleration. The payload satellites experience mechanical
loads of various frequencies, and the load on the vehicle from acoustic
sources is due to two factors: one is rocket-vehicle-generated noise at
lift-off, and the other is aerodynamic noise caused by turbulence,
particularly at the frontal area transition. The acoustic field thus created
is strong enough to damage the delicate payload. The sources of acoustics
and their combined spectrum are shown in Fig.2 and Fig.3.

    Fig.2. The load on the vehicle from two acoustic sources.


    Fig.3. The combined spectra from the two acoustic sources.

Faults and failures are not acceptable due to the high cost of satellites.
Hence, all payload satellites undergo an acoustic test under simulated
conditions before launch and are tested for their endurance of such dynamic
loads. The satellite is subjected to the maximum overall sound pressure
level to verify the functional aspects of the whole test setup. The acoustic
test is a major dynamic test for qualification of space systems and
components. The purposes of the tests are: search for weak elements in the
subsystem with respect to acoustic failure; qualification tests to
demonstrate spacecraft performance in meeting the design goals set; and
acceptance tests to uncover workmanship defects.

                     II. ACOUSTIC TESTING

The acoustic environment inside the Reverberation Chamber is created by
modulating a stream of clean, dry air at about 30 PSI pressure using
electro-pneumatic transducers. The drive signal is derived from a random
noise generator and modified by a spectrum shaper. The microphone data from
the RC are observed on a real time analyzer and the spectrum shaper outputs
are adjusted to achieve the target spectrum. There are two sets of
modulators, one delivering an acoustic power of 60 kW in the 31.5 Hz to
500 Hz range and the other delivering 20 kW in the 200 Hz to 1200 Hz range;
the spectrum beyond 1200 Hz is controlled to some extent using the effects
of the higher harmonics, by changing the spectral content of the drive to
the modulators. The acoustic excitation is coupled to the RC through
optimally configured exponential horns to achieve efficient transfer of the
acoustic energy into the chamber. The chamber wall surface treatment design
ensures reflectance of 99% of the sound energy incident on it. The chamber
has a diffused, uniform sound pressure level distribution within ±1 dB in
the central ten percent of the chamber volume, where the test specimen is
located. The spectra of almost all contemporary launch vehicles around the
world can be realized in the RC, like the US Delta, Atlas Centaur, Titan
IIIC and Space Shuttle, Ariane 4 & 5 of the ESA, Vostok and Soyuz of Russia,
and ASLV, PSLV and GSLV of India.

     A. Requirements for Acoustic Testing of a Satellite

    Noise generation unit, spectrum shapers, power amplifiers and horns
(two), reverberation chamber, microphones and multiplexer, real time
frequency (spectrum) analyzer, graphic recorder and display, tape recorder
for recording, accelerometers, charge amplifier and data recorder, dual
channel analyzer and plotter.

The satellite is kept in the RC, the high frequency, high level spectrum
characteristics of the launch vehicle are generated, and its dynamic
behavior is studied. It is essential that the acoustic noise generated is a
true simulation of the launch vehicle acoustic spectrum; this is the input
acoustic load to be simulated in the RC and is specified as SPL (sound
pressure level) in dB versus frequency. The spectra of various launch
vehicles like Delta, Atlas Centaur, Titan IIIC, Ariane, Vostok, ASLV, PSLV
and GSLV, and of Indian satellites like IRS and INSAT, are realizable in the
Reverberation Chamber. Each launch vehicle has unique spectral features and
is drawn in Octave Band Centre Frequencies (OBCF), in the range from 31.5 Hz
to 16 kHz.


    The three levels of acceptance of the acoustic spectrum are: first, the
    full level or qualification test, normally for 120 seconds at a maximum
    of 156 dB; second, the acceptance level test, normally for 60 or 90
    seconds at a maximum of 153 dB; and third, the low level test, normally
    for 30 seconds at a maximum of 150 dB.

Fault tolerant application software to ensure data integrity is developed.
This paper takes the six channel data from the reverberation chamber and
applies it as the input to the program. The six microphone signals are
connected to the TMS320C6727 DSP Pro Audio Development Kit (PADK) after
signal conditioning, via analog to digital converters. All six microphone
data streams are fed to the DSP processor as shown in Fig.5. The FFT is
taken for all six channels and the results are compared with each other to
find out which channel's microphone data is good or bad. A threshold level
is maintained to check the validity of each microphone: if the data is well
within the threshold it is accepted, or else it is rejected. At most two bad
microphone channels can be identified.

                Sc-1 -> ADC
                Sc-2 -> ADC
                Sc-3 -> ADC
                Sc-4 -> ADC      ->   TMS320C6727 PADK
                Sc-5 -> ADC
                Sc-6 -> ADC

    Fig.5. Block diagram representation of the six channel redundancy
    management technique.

                       IV. TEST AND RESULTS

    The data is extracted using the six microphone channels and fed to the
    TMS320C6727 DSP Pro Audio Development Kit (PADK) for further processing
    using Code Composer Studio for the different cases. The data is applied
    as input to the program written in C and run under Code Composer Studio.
    The results obtained for the different cases are shown below.

    Fig.6. Input data 1.
    Comment: the six channel input data, indicating channel 5 is going bad
    (low) from duration 4-10.

    Fig.7. Output data 1.
    Comment: the six channel output data, indicating all channels are good
    except channel 5, which is reflected as low from duration 4-10.

    Fig.8. Input data 2.
    Comment: the six channel input data, indicating channel 2 is going bad
    (high) from duration 2.5-10.


Fig.9. Output data 2.
Comment: the six channel output data, indicating all channels are good
except channel 2, which is reflected as low from duration 2.5-10.

Fig.10. Input data 3.
Comment: the six channel input data, indicating channel 6 is going bad (low)
from duration 3.5-10.

Fig.11. Output data 3.
Comment: the six channel output data, indicating all channels are good
except channel 6, which is reflected as low from duration 3.5-10.

Fig.12. Input data 4.
Comment: the six channel input data, indicating channel 2 is going bad
(high) from duration 2.5-10, and channel 5 is going bad (low) from duration
4-10.

Fig.13. Output data 4.
Comment: the six channel output data, indicating all channels are good
except channels 2 and 5, which are reflected as low from duration 2.5-10 for
channel 2 and from duration 4-10 for channel 5.

Fig.14. Input data 5.
Comment: the six channel input data, indicating channel 2 is going bad (low)
from duration 1-10, and channel 6 is going bad (low) from duration 3.5-10.


    Fig.15. Output data 5.
    Comment: the six channel output data, indicating all channels are good
    except channels 2 and 6, which are reflected as low from duration 1-10
    for channel 2 and from duration 3.5-10 for channel 6.

                           V. CONCLUSION

The output is compared against the expected results. Comparing the output
with the input data, the output becomes zero whenever there is low amplitude
data in the input, and likewise for high amplitude in the input indicating
wrong data. The wrong data is identified and displayed in the plots. At most
two bad microphone channels can be identified. The results match the
expected output, which proves that the algorithm implemented in C works
effectively for the given data, successfully identifying and detecting the
correct and the wrong data.

                          AUTHORS PROFILE

Sri H. S. Aravinda graduated in Electronics & Communication Engineering from
the University of Bangalore, India, in 1997 and took a Post Graduation in
Bio-Medical Instrumentation from the University of Mysore, India, in 1999.
His research interests are fault tolerance and signal processing. He has
been teaching engineering at UG & PG level for the last 12 years. He has
served various engineering colleges as a teacher and is at present an
Assistant Professor in the Department of Electronics & Communication, Reva
Institute of Technology & Management, Bangalore, India. He has more than 13
research papers in various National and International Journals &
Conferences. He is a member of ISTE and has also served on the advisory and
technical committees of national conferences.

Dr. H. D. Maheshappa graduated in Electronics & Communication Engineering
from the University of Mysore, India, in 1983 and took a Post Graduation in
Industrial Electronics from the University of Mysore, India, in 1987. He has
held a Doctoral Degree in Engineering from the Indian Institute of Science,
Bangalore, India, since 2001. He is specialized in electrical contacts,
micro contacts, signal integrity interconnects, etc. His research interests
include bandwidth utilization in computer networks. He has been teaching
engineering at UG & PG level for the last 25 years. He has served various
engineering colleges as a teacher and is at present Professor & Head of the
Department of Electronics & Communication, Reva Institute of Technology &
Management, Bangalore, India. He has more than 35 research papers in various
National and International Journals & Conferences. He is a member of IEEE,
ISTE, CSI & ISOI, and a member of the Doctoral Committee of Coventry
University, UK. He has been a reviewer of many text books for the publisher
McGraw-Hill Education (India) Pvt. Ltd., has chaired technical sessions and
national conferences, and also has served
could verify and prove the redundancy software works better for                      on the advisory and technical national conferences.
achieving fault tolerance for testing of satellites with acoustic
                                                                                                                Sri. Dr.. Ranjan Moodithaya did his MSc from
spectrum.                                                                                                       Mysore University in 1970 and got his Ph D
                            V. REFERENCES                                                                       from Indian Institute of Science in 1986. He
                                                                                                                joined NAL in 1973 after 2 yrs of teaching at
    [1] Aravinda H S, Dr H D Maheshappa, Dr Ranjan Moodithaya ,
    “Verification of the Six Channel Quad Redundancy Management Software                                        Post Graduate centre of Mysore University. At
    with the Fault Tolerant Measurement Techniques of Acoustic Spectrum of                                      present, he is Scientist G and he is incharge of
    Satellites” International conference on PDPTA’09 WORLDCOMP 2009,                                            the NAL-ISRO Acoustic Test Facility and
                                                                                                                NAL’s       Knowledge       and      Technology
    held at lasvegas, Nevada, USA, Vol II, PP 553 to 558, July 13-16 2009.
                                                                                                                Management Division. He is a Life member of
                                                                                                                Acoustic Society of India and Aeronautical
    [2] R.K. Das, S.Sen & S. Dasgupta, “ Robust and fault tolerant controller                                   Society of India. His innovative products are
    for attitude control of a satellite launch vehicle ” IET Control theory &        sold to prestigious institutions like Westinghouse, Lockheed, Boeing and
    applications, PP 304 to 312, Feb 2007.                                           Mitsubishi through M/s. Wyle Laboratories, USA, the world leaders in the
                                                                                     design of acoustic facilities. Dr. Ranjan has more than a dozen publications
     [3] Jan-lung sung “A dynamic slack management for real time distributed
    systems” IEEE Transaction on computers, PP 30 to 39, Feb 2008.                   in international and national journals and more than 30 internal technical
                                                                                     publications. He is a Life member of Acoustic Society of India and
    [4] Louis p Bolduc “Redundancy management system for the x-33 vehicle            Aeronautical Society of India. He is also a Member of Instrument Society
    and mission computer”. IEEE Transaction on computers, PP 31 to 37, May           of India.

    [5] Hilmer, H. , Kochs, H.-D. and Dittmar, E. “A Fault tolerant
    communication architecture for real time control systems”. IEEE
    Transaction on computers, PP 111 to 118, Jun1997.

    [6] Oleg Sokolsky, Mohamed Younisy, Insup Leez, Hee-Hwan Kwakz and
    Jeff Zhouy “Verification of the Redundancy Management System for Space
    Launch Vehicle” IEEE Transaction on computers, PP 42 to 52, Sep 1998.

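The fault-detection rule summarized in the conclusion, in which an out-of-range input amplitude forces the corresponding output sample to zero and marks that channel as bad, can be sketched as follows. This is a minimal illustration only, not the paper's actual C implementation; the function name `check_channels`, the data layout, and the threshold values are assumptions made for the sketch:

```python
# Hypothetical sketch of the amplitude-threshold check described in the
# conclusion: a sample is treated as wrong data when its amplitude falls
# below a low limit or above a high limit; wrong samples are zeroed in
# the output and the offending channels are reported.

def check_channels(samples, lo=0.5, hi=9.0):
    """samples: dict mapping channel id -> list of amplitudes.
    Returns (output, bad_channels), where output has wrong samples zeroed."""
    output = {}
    bad_channels = set()
    for ch, values in samples.items():
        cleaned = []
        for v in values:
            if v < lo or v > hi:       # out-of-range amplitude: wrong data
                cleaned.append(0.0)    # output forced to zero, as in the paper
                bad_channels.add(ch)
            else:
                cleaned.append(v)
        output[ch] = cleaned
    return output, sorted(bad_channels)

if __name__ == "__main__":
    data = {1: [1.0, 2.0], 4: [0.1, 0.2], 6: [1.0, 9.5]}
    out, bad = check_channels(data)
    print(bad)  # channels 4 and 6 are flagged: [4, 6]
```

For the six-channel case of Fig. 15, channels whose data stay below the low limit (channel 4) or exceed the high limit (channel 6) would be flagged, while the remaining channels pass through unchanged.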
                 (IJCSIS) International Journal of Computer Science and Information Security,
                                                                           Vol. 8, No.1, 2010

                       BASED LOCATION SEARCH ENGINES

1 Dr. M. Umamaheswari, Dept. of Computer Science, Bharath University, Chennai-73, Tamil Nadu, India
2 S. Sivasubramanian, Dept. of Computer Science, Bharath University, Chennai-73, Tamil Nadu, India, sivamdu2001@Yahoo.Com

ABSTRACT

Geographic location search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called location search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. In this paper, we study the problem of efficient query processing in scalable geographic search engines. Query processing is a major bottleneck in standard web search engines, and the main reason for the thousands of machines used by the major engines. Geographic search engine query processing is different in that it requires a combination of text and spatial data processing techniques. We propose several algorithms for efficient query processing in geographic search engines, integrate them into an existing web search query processor, and evaluate them on large sets of real data and query traces.

Keywords: location, search engine, query processing

I. INTRODUCTION

The World-Wide Web has reached a size where it is becoming increasingly challenging to satisfy certain information needs. While search engines are still able to index a reasonable subset of the (surface) web, the pages a user is really looking for are often buried under hundreds of thousands of less interesting results. Thus, search engine users are in danger of drowning in information. Adding additional terms to standard keyword searches often fails to narrow down results in the desired direction. A natural approach is to add advanced features that allow users to express other constraints or preferences in an intuitive manner, so that the desired documents are returned among the first results. In fact, search engines have added a variety of such features, often under a special "advanced search" interface, but mostly limited to fairly simple conditions on domain, link structure, or modification date. In this paper we focus on geographic web search engines, which allow users to constrain web queries to certain geographic areas. In many cases, users are interested in information with geographic constraints, such as local businesses, locally relevant news items, or
tourism information about a particular region. For example, when searching for yoga classes, local yoga schools are of much higher interest than the web sites of the world's largest yoga schools. We expect that geographic search engines, that is, search engines that support geographic preferences, will have a major impact on search technology and its business models. First, geographic search engines provide a very useful tool. They allow users to express in a single query what might take multiple queries with a standard search engine.

A. LOCATION BASED

A user of a standard search engine looking for a yoga school in or close to Tambaram, Chennai, might have to try queries such as

• yoga "Delhi"
• yoga "Chennai"
• yoga "Tambaram" (a part of Chennai)

but this might yield inferior results, as there are many ways to refer to a particular area, and since a purely text-based engine has no notion of geographical closeness (for example, a result across the bridge to Tambaram or nearby in Guindy might also be acceptable). Second, geographic search is a fundamental enabling technology for "location-based services", including electronic commerce via cellular phones and other mobile devices. Third, geographic search supports locally targeted web advertising, thus attracting the advertisement budgets of small businesses with a local focus. Other opportunities arise from mining geographic properties of the web, for example, for market research and competitive intelligence. Given these opportunities, it comes as no surprise that over the last two years leading search engine companies such as Google and Yahoo have made significant efforts to deploy their own versions of geographic web search. There has also been some work by the academic research community, mainly on the problem of extracting geographic knowledge from web pages and queries. Our approach here is based on a setup for geographic query processing that we recently introduced in [1] in the context of a geographic search engine prototype. While there are many different ways to formalize the query processing problem in geographic search engines, we believe that our approach results in a very general framework that can capture many

B. QUERY FOOTPRINT

We focus on the efficiency of query processing in geographic search engines, for example, how to

maximize the query throughput for a given problem size and amount of hardware. Query processing is the major performance bottleneck in current standard web search engines, and the main reason behind the thousands of machines used by the larger commercial players. Adding geographic constraints to search queries results in additional challenges during query execution, which we now briefly outline. In a nutshell, given a user query consisting of several keywords, a standard search engine ranks the pages in its collection in terms of their relevance to the keywords. This is done by using a text index structure called an inverted index to retrieve the IDs of pages containing the keywords, and then evaluating a term-based ranking function on these pages to determine the k highest-scoring pages. (Other factors such as hyperlink structure and user behavior are also often used, as discussed later.) Query processing is highly optimized to exploit the properties of inverted index structures, which are stored in an optimized compressed format, fetched from disk using efficient scan operations, and cached in main memory. In contrast, a query to a geographic search engine consists of keywords and the geographic area that interests the user, called the "query footprint". Each page in the search engine also has a geographic area of relevance associated with it, called the "geographic footprint" of the page. This area of relevance can be obtained by analyzing the collection in a preprocessing step that extracts geographic information, such as city names, addresses, or references to landmarks, from the pages and then maps these to positions using external geographic databases. In other approaches it is assumed that this information is provided via meta tags or by third parties. The resulting page footprint is an arbitrary, possibly noncontiguous area, with an amplitude value specifying the degree of relevance of each location. Footprints can be represented as polygons or bitmap-based structures; the details of the representation are not important here. A geo search engine computes and orders results based on two factors.

C. KEYWORDS AND GEOGRAPHY

Given a query, it identifies pages that contain the keywords and whose page footprint intersects with the query footprint, and ranks these results according to a combination of a term-based ranking function and a geographic ranking function that might, for example, depend on the volume of the intersection between page and query footprint. Page footprints could of course be indexed via standard spatial indexes such as R*-trees, but how can such index structures be integrated into a search engine query processor, which is optimized towards inverted index structures? How should the various structures be laid out on disk for maximal throughput, and how should the data flow during query execution in such a mixed engine? Should we first execute the textual part of the query, or first the spatial part, or choose a different ordering for each query? These are the basic types of problems that we address in this paper. We first provide some background on web search engines and geographic web search technology. We assume that readers are somewhat familiar with basic spatial data structures and processing, but may have less background about search engines and their inner workings. Our own perspective is
more search-engine centric: given a high-performance search engine query processor developed in our group, our goal is to efficiently integrate the types of spatial operations arising in geographic search engines.

II. BASICS OF SEARCH ENGINE ARCHITECTURE

The basic functions of a crawl-based web search engine can be divided into crawling, data mining, index construction, and query processing. During crawling, a set of initial seed pages is fetched from the web, parsed for hyperlinks, and then the pages pointed to by these hyperlinks are fetched and parsed, and so on, until a sufficient number of pages has been acquired. Second, various data mining operations are performed on the acquired data, for example, detection of web spam and duplicates, link analysis based on PageRank [7], or mining of word associations. Third, a text index structure is built on the collection to support efficient query processing. Finally, when users issue queries, the top-10 results are retrieved by traversing this index structure and ranking the encountered pages according to various measures of relevance. Search engines typically use a text index structure called an inverted index, which allows efficient retrieval of documents containing a particular word (term). Such an index consists of many inverted lists, where each inverted list Iw contains the IDs of all documents in the collection that contain a particular word w, usually sorted by document ID, plus additional information about each occurrence. Given, for example, a query containing the search terms "apple", "orange", and "pear", a search engine traverses the inverted list of each term and uses the information embedded therein, such as the number of search term occurrences and their positions and contexts, to compute a score for each document containing the search terms. We now formally introduce some of these concepts.

A. DOCUMENTS, TERMS, AND QUERIES

We assume a collection D = {d0, d1, . . . , dn−1} of n web pages that have been crawled and are stored on disk. Let W = {w0, w1, . . . , wm−1} be the set of all the different words that occur anywhere in D. Typically, almost any text string that appears between separating symbols such as spaces, commas, etc., is treated as a valid word (or term). A query

        q = {t0, t1, . . . , td−1}                (1)

is a set of words (terms).

B. INVERTED INDEX

An inverted index I for the collection consists of a set of inverted lists

        Iw0, Iw1, . . . , Iwm−1                   (2)

where list Iw contains a posting for each occurrence of word w. Each posting contains the ID of the document where the word occurs, the position within the document, and possibly some context (in a title, in large or bold font, in an anchor text). The postings in each inverted list are usually sorted by document ID and laid out sequentially on disk, enabling efficient retrieval and decompression of the list. Thus, Boolean queries can be implemented as unions and intersections of these lists, while phrase searches

C. TERM-BASED RANKING

The most common way to perform ranking is based on comparing the words (terms) contained in the document and in the query. More precisely, documents are modeled as unordered bags of words, and a ranking function assigns a score to each document with respect to the
current query, based on the frequency of each query word in the page and in the overall collection, the length of the document, and maybe the context of the occurrence (for example, a higher score if the term appears in the title or in bold face). Formally, given a query q as in (1), a ranking function F assigns to each document D a score F(D, q). The system then returns the k documents with the highest score. One popular class of ranking functions is the cosine measure [44], for example

        F(D, q) = Σi=0..d−1 (fD,ti / |D|) · ln(1 + n / fti)        (3)

where in equation (3) fD,ti and fti are the frequency of term ti in document D and in the entire collection, respectively, and |D| is the length of document D. Many other functions have been proposed, and the techniques in this paper are not limited to any particular class. In addition, scores based on link analysis or user feedback are often added into the total score of a document; in most cases this does not affect the overall query execution strategy, provided these contributions can be precomputed offline and stored in a memory-based table or embedded into the index. For example, the ranking function might become something like F'(D, q) = pr(D) + F(D, q), where pr(D) is a precomputed and suitably normalized PageRank score of page D. The key point is that the above types of ranking functions can be computed by first scanning the inverted lists associated with the search terms to find the documents in their intersection, and then evaluating the ranking function only on those documents, using the information embedded in the index. Thus, at least in its basic form, query processing with inverted lists can be performed using only a few highly efficient scan operations, without any random lookups.

III. BASICS OF GEOGRAPHIC WEB SEARCH

We now discuss the additional issues that arise in a geographic web search engine. Most details of the existing commercial systems are proprietary; our discussion here draws from the published descriptions of the academic efforts in [1, 3]. The first task, crawling, stays the same if the engine aims to cover the entire web. In our systems we focus on Germany and crawl the de domain; in cases where the coverage area does not correspond well to any set of domains, focused crawling strategies [4] may be needed to find the relevant pages.

A. GEO CODING

Additional steps are performed as part of the data mining task in geographic search engines, in order to extract geographic information from the collection. Recall that the footprint of a page is a potentially noncontiguous area of geographic relevance. For every location in the footprint, an associated integer value expresses the certainty with which we believe the page is actually relevant to the location. The process of determining suitable geographic footprints for the pages is called "geo coding" [3]. In [1], geo coding consists of three steps: geo extraction, geo matching, and geo propagation. The first step extracts all elements from a page that indicate a location, such as city names, addresses, landmarks, phone numbers, or company names. The second step maps the extracted elements to actual locations (that is, coordinates), if necessary resolving any remaining ambiguities, for example, between cities of the same name. This results in an initial set of footprints for the pages. Note that if a page

contains several geographic references, its footprint may consist of several noncontiguous areas, possibly with higher certainty values resulting, say, from a complete address at the top of a page or a town name in the URL than from a single use of a town name somewhere else in the page text.

(Figure: an R*-tree spatial index hierarchy, from India at the root through zones (North, East, Middle, West, South), states (Kerala, Karnataka, AP, Tamil Nadu), cities (Chennai, Coimbatore, Madurai, Salem), and localities (Tambaram, Puthamallai, Vellachary, T.Nagar) down to local data leaves.)

Figure 1.1 shows an example of a page and its split footprint.

The third step, geo propagation, improves the quality and coverage of the initial geo coding by analysis of link structure and site topology. Thus, a page on the same site as many pages relevant to Chennai City, or with many hyperlinks to or from such pages, is also more likely to be relevant to Chennai and should inherit such a footprint (though with lower certainty). In addition, geo coding might exploit external data sources such as whois data, yellow pages, or regional web directories. The result of the data mining phase is a set of footprints for the pages in the collection. In [30], footprints were stored in a highly compressed quad-tree structure, but this decision is not really of concern to us here. Other reasonably compact and efficient representations, for example, as polygons, would also work. All of our algorithms approximate the footprints by sets of bounding rectangles; we only assume the existence of a black-box procedure for computing the precise geographic score between a query footprint and a document footprint. During index construction, additional spatial index structures are created for the document footprints, as described later.

B. GEOGRAPHIC QUERY PROCESSING

As in [1], each search query consists of a set of (textual) terms and a query footprint that specifies the geographic area of interest to the user. We assume a geographic ranking function that assigns a score to each document footprint with respect to the query footprint, and that is zero if the intersection is empty; natural choices are the inner product or the volume of the intersection. Thus, our overall ranking function might be of the form F'(D, q) = g(fD, fq) + pr(D) + F(D, q), with a term-based ranking function F(D, q), a global rank pr(D) (for example, PageRank), and a geographic score g(fD, fq) computed from the query footprint fq and the document footprint fD (with appropriate normalization of the three terms). Our focus in this paper is on how to efficiently compute such ranking functions using a combination of text and spatial index structures. Note that the query footprint can be supplied by the user in a number of ways. For mobile devices, it seems natural to choose a certain area around the current location of the user as a default footprint. In other cases, a footprint could be determined by analyzing a textual query for geographic terms, or by allowing the user to click on a map. This is an

                                                                                                                          ISSN 1947-5500
                (IJCSIS) International Journal of Computer Science and Information Security,
                                                                          Vol. 8, No.1, 2010
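The combined scoring just described can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: footprints are modeled as axis-aligned rectangles, the geographic score g is taken to be the intersection area (one of the "natural choices" mentioned above), the term-based component is written t to keep it distinct from the overall score F, and all function and variable names are assumptions.

```python
# Sketch of the combined ranking F(D, q) = g(fD, fq) + pr(D) + t(D, q).
# Footprints are modeled as rectangles (xmin, ymin, xmax, ymax).

def intersection_area(a, b):
    """Area of overlap of two rectangles; zero if they do not intersect."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def score(doc_footprint, query_footprint, pagerank, term_score):
    """Overall rank: geographic score + global rank + term-based score."""
    g = intersection_area(doc_footprint, query_footprint)
    return g + pagerank + term_score

# A document overlapping the query area outranks one with no overlap,
# all else being equal.
near = score((0, 0, 2, 2), (1, 1, 3, 3), pagerank=0.1, term_score=0.5)
far = score((5, 5, 6, 6), (1, 1, 3, 3), pagerank=0.1, term_score=0.5)
```

In a real engine the three terms would of course be normalized against each other, as the text notes.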
This is an interface issue that is completely orthogonal to our approach.

IV. ALGORITHMS

A. TEXT-FIRST BASELINE: This algorithm first filters results according to textual search terms and thereafter according to geography. Thus, it first accesses the inverted index, as in a standard search engine, retrieving a sorted list of the docIDs (and associated data) of documents that contain all query terms. Next, it retrieves all footprints of these documents. Footprints are arranged on disk sorted by docID, and a reasonable disk access policy is used to fetch them: footprints close to each other are fetched in a single access, while larger gaps between footprints on disk are traversed via a forward seek. Note that in the context of a DAAT text query processor, the various steps in fact overlap. The inverted index access results in a sorted stream of docIDs for documents that contain all query terms, which is directly fed into the retrieval of document footprints, and precise scores are computed as soon as footprints arrive from disk.

B. GEO-FIRST BASELINE: This algorithm uses a spatial data structure to decrease the number of footprints fetched from disk. In particular, footprints are approximated by MBRs that (together with their corresponding docIDs) are kept in a small (memory-resident) R∗-tree. As before, the actual footprints are stored on disk, sorted by docID. The algorithm first accesses the R∗-tree to obtain the docIDs of all documents whose footprint is likely to intersect the query footprint. It sorts the docIDs, and then filters them by using the inverted index. Finally, it fetches the remaining footprints from disk, in order to score documents precisely.

C. K-SWEEP ALGORITHM: The main idea of the first improved algorithm is to retrieve all required toe print data through a fixed number k of contiguous scans from disk. In particular, we build a grid-based spatial structure in memory that contains for each tile in a 1024 × 1024 domain a list of m toe print ID intervals. For example, for m = 2 a tile T might have two intervals [3476, 3500] and [23400, 31000] that indicate that all toe prints that intersect this tile have toe print IDs in the ranges [3476, 3500] and [23400, 31000]. In the case of a 1024 × 1024 grid, including about 50% empty tiles, the entire auxiliary structure can be stored in a few MB. This could be reduced as needed by compressing the data or choosing slightly larger tiles (without changing the resolution of the actual footprint data). Given a query, the system first fetches the interval information for all tiles intersecting the query footprint, and then computes up to k ≥ m larger intervals called sweeps that cover the union of the intervals of these tiles. Due to the characteristics of space filling curves, each interval is usually fairly small and intervals of neighboring tiles overlap each other substantially. As a result, the k generated sweeps are much smaller than the total toe print data. The system next fetches all needed toe print data from disk, by means of k highly efficient scans. The IDs of the encountered toe prints are then translated into docIDs and sorted. Using the sorted list of docIDs, we then access the inverted index to filter out documents that do not contain the textual query terms.
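The sweep-generation step described above can be sketched as follows. This is an illustrative sketch under the assumption that toe print IDs follow a space-filling-curve order; the greedy gap-closing rule shown here is one simple way to reduce the per-tile intervals to at most k contiguous scans, and all names are assumptions, not the paper's code.

```python
# Merge the toe print ID intervals of the tiles hit by a query into at
# most k "sweeps" (contiguous ID ranges) that cover their union, so the
# toe print data can be fetched with k sequential disk scans.

def k_sweeps(intervals, k):
    """Cover the union of [lo, hi] intervals with at most k intervals."""
    if not intervals:
        return []
    intervals = sorted(intervals)
    # First coalesce overlapping or adjacent intervals.
    merged = [list(intervals[0])]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    # If more than k runs remain, repeatedly close the smallest gap,
    # trading a little extra data read for fewer disk seeks.
    while len(merged) > k:
        gaps = [merged[i + 1][0] - merged[i][1] for i in range(len(merged) - 1)]
        i = gaps.index(min(gaps))  # cheapest pair of runs to fuse
        merged[i][1] = merged[i + 1][1]
        del merged[i + 1]
    return [tuple(iv) for iv in merged]

# Tile intervals like those in the example above collapse to two sweeps:
sweeps = k_sweeps([(3476, 3500), (23400, 31000), (3490, 3600)], k=2)
```

Because intervals of neighboring tiles overlap substantially, the coalescing step alone usually leaves far fewer than k runs in practice.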
Finally, we evaluate the geographic score between the query footprint and the remaining documents and their footprints. The algorithm can be summarized as follows:

K-SWEEP ALGORITHM:
(1) Retrieve the toe print ID intervals of all tiles intersecting the query footprint.
(2) Perform up to k sweeps on disk, to fetch all toe prints in the union of intervals from Step (1).
(3) Sort the docIDs of the toe prints retrieved in Step (2) and access the inverted index to filter these docIDs.
(4) Compute the geo scores for the remaining docIDs using the toe prints retrieved in Step (2).

One limitation of this algorithm is that it fetches the complete data of all toe prints that intersect the query footprint (plus other close by toe prints), without first filtering by query terms. Note that this is necessary since our simple spatial data structure does not contain the actual docIDs for toe prints intersecting the tile. Storing a list of docIDs in each tile would significantly increase the size of the structure, as most docIDs would appear in multiple tiles. Thus, we have to first access the toe print data on disk to obtain candidate docIDs that can be filtered through the inverted index.

CONCLUSIONS

We discussed a general framework for ranking search results based on a combination of textual and spatial criteria, and proposed several algorithms for efficiently executing ranked queries on very large collections. We integrated our algorithms into an existing high-performance search engine query processor and evaluated them on a large data set and realistic geographic queries. Our results show that in many cases geographic query processing can be performed at about the same level of efficiency as text-only queries. There are a number of open problems that we plan to address. Moderate improvements in performance should be obtainable by further tuning of our implementation. Beyond these optimizations, we plan to study pruning techniques for geographic search engines that can produce top-k results without computing the precise scores of all documents in the result set. Such techniques could combine early termination approaches from search engines with the use of approximate (lossy-compressed) footprint data. Finally, we plan to study parallel geographic query processing on clusters of machines. In this case, it may be preferable to assign documents to participating nodes not at random, as commonly done by standard search engines, but based on an appropriate partitioning of the underlying

Searching of           | Drawback of old system                 | Advantage of proposed system
Accuracy of local data | Very less local data                   | Accuracy and more local data
Processing time        | 0.65 seconds                           | 0.34 seconds
Regional specification | NIL                                    | Splitting different types of region
Spatial data structure | No link between text and spatial data  | Good link between text and spatial data

REFERENCES
[1] A. Markowetz, Y.-Y. Chen, T. Suel, X. Long, and B. Seeger. Design and implementation of a geographic search engine. In Proc. of the 8th Int. Workshop on the Web and Databases (WebDB), June 2005.
[2] V. Anh, O. Kretser, and A. Moffat. Vector-space ranking with effective early termination. In Proc. of the 24th Annual SIGIR Conf. on Research and Development in Information Retrieval, pages 35–42, September 2001.
[3] K. McCurley. Geospatial mapping and navigation of the web. In Proc. of the 10th Int. World Wide Web Conference, pages 221–229, May 2001.
[4] S. Chakrabarti, M. van den Berg, and B. Dom. Focused crawling: A new approach to topic-specific web resource discovery. In Proc. of the 8th Int. World Wide Web Conference, May 1999.
[5] Y. Zhou, X. Xie, C. Wang, Y. Gong, and W. Ma. Hybrid index structures for location-based web search. In Proc. of the 14th Conf. on Information and Knowledge Management (CIKM), pages 155–162, November 2005.
[6] Y.-Y. Chen and T. Suel. Query processing in web search engines. Polytechnic University, Brooklyn, NY 11201, USA, June 2006.

AUTHOR'S PROFILE:

Dr. M. Uma Maheswari received her Bachelor of Science in Computer Science from Bharathidasan University in 1995, Master of Computer Applications in Computer Science from Bharathidasan University in 1998, M.Phil in Computer Science from Alagappa University, Karaikudi, Master of Technology in Computer Science from Mahatma Gandhi Kashi Vidyapeeth University in 2005 and Ph.D in Computer Science from Magadh University, Bodh Gaya in 2007. She has more than 10 years of teaching experience and has guided 150 M.C.A projects, 23 M.Tech projects and 6 PhD research works.

Mr. S. Sivasubramanian received his Diploma in Hardware and Software Installation from ECIL-BDPS, Govt. of India, an Advanced Diploma in Computer Application from UCA, Madurai, a Bachelor of Science in Physics from Madurai Kamaraj University in 1995, a Master of Science in Physics from Madurai Kamaraj University in 1997, a Post Graduate Diploma in Computer Application from the Government of India in 2000, and a Master of Technology in Computer Science and Engineering from Bharath University, Chennai in 2007. He is pursuing a PhD at Bharath University, Chennai. He has more than 5 years of teaching experience and has guided 20 B.Tech projects and 11 M.Tech projects

                        Tunable Multifunction Filter Using
                                Current Conveyor

           Manish Kumar                                    M.C. Srivastava                                       Umesh Kumar
  Electronics and Communication                     Electronics and Communication                     Electrical Engineering Department
      Engineering Department                            Engineering Department                          Indian Institute of Technology
   Jaypee Institute of Information                   Jaypee Institute of Information                              Delhi, India
            Technology                                        Technology                    
            Noida, India                                      Noida, India                 

Abstract—The paper presents a current tunable multifunction filter using current conveyor. The proposed circuit can be realized as an on-chip tunable low pass, high pass, band pass and elliptical notch filter. The circuit employs two current conveyors, one OTA, four resistors and two grounded capacitors, ideal for integration. It has only one output terminal, and a number of input terminals may be used. Further, there is no requirement for component matching in the circuit. The resonance frequency (ω0) and bandwidth (ω0/Q) enjoy orthogonal tuning. The cutoff frequency of the filter is tunable by changing the bias current, which makes it an on-chip tunable filter. The circuit is realized using the commercially available current conveyor AD844 and OTA LM13700. An HSPICE simulation of the circuit is also presented for verification of the theoretical results.

Keywords- Active filter; Current Conveyor; Voltage-mode filter

I. INTRODUCTION

Active filters with current/voltage controllable frequency have a wide range of applications in the signal processing and instrumentation area. Tsividis et al. employed the realization of an on-chip MOSFET as a voltage controlled resistor [1]. Their contributions and several other research papers may be considered to be motivation for the VLSI industry to make on-chip tunable filters [2], [3]. These realizations have a small range of variation in the frequency. The OTA-C structure is highly suitable for realizing electronically tunable continuous time filters. A number of voltage mode/current mode OTA-C biquads have been reported in the literature. Multiple-input multiple-output (MIMO), multiple-input single-output (MISO) and single-input multiple-output (SIMO) type circuits have also appeared in the literature. In 1996, Fidler and Sun proposed realization of a current mode filter with multiple inputs and two outputs at different nodes using four dual output OTAs and two grounded capacitors [4]. Later, Chang proposed multifunctional biquadratic filters using three operational transconductance amplifiers and two grounded capacitors [5]. In 2003, Tsukutani et al. proposed a current mode biquad with single input and three multiple outputs using three OTAs and a current follower (CF) [6].

In recent years there has been emphasis on implementation of voltage mode/current mode active filters using second generation current conveyors (CCIIs), which provide simple realization with higher bandwidth, greater linearity and larger dynamic range. The Kerwin-Huelsman-Newcomb (KHN) biquad realization of low-pass, band-pass and high-pass filters with single input and three outputs, employing five current conveyors (CCII), two capacitors and six resistors, was proposed by Soliman in 1995 [7]. A universal voltage-mode filter proposed by Higasimura et al. employs seven current conveyors, two capacitors and eight resistors [8]. Realization of high-pass, low-pass and band-pass filters using three positive current conveyors and five passive components was reported by Ozoguz et al. [9]. Chang and Lee [10] and subsequently Toker et al. [11] proposed realization of low-pass, high-pass and band-pass filters employing current conveyors and passive components with specific requirements. Manish et al. [12] proposed the realization of a multifunction filter (low-pass, high-pass, band-pass and notch filters) with minimum current conveyors and passive components. The central/cutoff frequency of these realizations could be changed by changing the passive components.

In 2001 Wang and Lee implemented an insensitive current mode universal biquad MIMO realization using three balanced output current conveyors and two grounded capacitors [13]. In 2004 Tangsrirat and Surakampontorn proposed electronically tunable current mode filters employing five current controlled current conveyors and two grounded capacitors [14]. A tunable current mode multifunction filter was reported in 2008 using five universal current conveyors and eight passive components [15]. Recently Chen and Chu realized a universal electronically controlled current mode filter using three multi-output current controlled conveyors and two capacitors; however, the frequency and quality factor of their realizations are not independent [16].
The proposed realization in this paper employs two current conveyors, one OTA, five resistors and two grounded capacitors, with one output terminal and three input terminals. All the basic low-pass, high-pass, band-pass and notch filters may be realized by the proposed circuit by selecting the proper input terminals. The frequency of the filter can be changed by changing the control voltage of the OTA.

The following section presents the circuit description of the current conveyor. The sensitivity analysis, simulation results and conclusion are discussed in the subsequent sections.

II. CIRCUIT DESCRIPTION

The first and second generation current conveyors were introduced by Sedra and Smith in 1968 and 1970 respectively; these are symbolically shown in fig. 1 and are characterized by the port relations given by (1).

Figure 1. Symbol of Current Conveyor II

\begin{bmatrix} V_x \\ I_y \\ I_z \end{bmatrix} =
\begin{bmatrix} 0 & B & 0 \\ 0 & 0 & 0 \\ \pm K & 0 & 0 \end{bmatrix}
\begin{bmatrix} I_x \\ V_y \\ V_z \end{bmatrix}                               (1)

The values of B and K are frequency dependent and ideally B = 1 and K = 1. The ±K indicates the nature of the current conveyor: the +ve sign indicates a positive type current conveyor, while the −ve sign indicates a negative type.

The proposed circuit shown in fig. 2 employs only two current conveyors, five resistors and two capacitors. The grounded capacitors are particularly attractive for integrated circuit implementation.

Figure 2. Proposed Voltage Mode Multifunction Filter

The current Iabc and the parameter gm may be expressed as follows:

g_m = \frac{I_{abc}}{2V_T}

where V_T = kT/q is the thermal voltage. The routine analysis yields the following transfer function:

V_{out} = \frac{1}{D(s)} \left( s^2 C_2 C_5 R_1 R_3 R_4 R_6 g_m V_2 + R_3 V_1 + s C_5 g_m R_1 R_4 R_6 V_3 \right)      (2)

where

D(s) = s^2 C_2 C_5 g_m R_1 R_3 R_4 R_6 + s C_5 g_m R_1 R_4 R_6 + R_3          (3)

Thus by using (2) we can realize low-pass, band-pass, high-pass and notch filter responses at the single output terminal by applying the proper inputs at the different nodes, as shown in Table I.

TABLE I. VARIOUS FILTER RESPONSES

Filter\Input | V1 | V2 | V3
Low-pass     | 1  | 0  | 0
High-pass    | 0  | 1  | 0
Band-pass    | 0  | 0  | 1
Notch        | 1  | 1  | 0

The denominators for all the filter responses are the same. The filter parameters cutoff frequency (ω0), bandwidth (ω0/Q) and quality factor (Q) are given by

\omega_0 = \frac{1}{\sqrt{R_1 R_4 R_6 C_2 C_5 g_m}}                            (4)

\frac{\omega_0}{Q} = \frac{1}{R_3 C_2}                                         (5)

Q = R_3 \sqrt{\frac{C_2}{g_m R_1 R_4 R_6 C_5}}                                 (6)

It can be seen from a perusal of (4)–(6) that the center frequency and the bandwidth ω0/Q can be controlled independently through R6 and/or C5, and R3. The transconductance of the OTA is independently tuned by varying the bias current of the OTA.
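As a quick consistency check of the design equations, ω0 from (4) divided by the bandwidth ω0/Q from (5) must reproduce Q from (6). The short numeric sketch below evaluates the three expressions with the component values quoted in the simulation section; it is a verification aid only, not part of the original analysis.

```python
from math import sqrt

# Component values quoted in Section IV.
R1 = R4 = R6 = 10e3       # ohms
C2 = C5 = 10e-9           # farads
R3 = 14e3                 # ohms
gm = 13.2e-3              # siemens (OTA transconductance)

w0 = 1 / sqrt(R1 * R4 * R6 * C2 * C5 * gm)      # eq. (4): cutoff frequency, rad/s
bw = 1 / (R3 * C2)                              # eq. (5): bandwidth w0/Q, rad/s
Q = R3 * sqrt(C2 / (gm * R1 * R4 * R6 * C5))    # eq. (6): quality factor

# The three expressions are mutually consistent (w0 / bw equals Q), and
# Q evaluates to about 0.12, the value quoted in Section IV.
```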
                  III.    SENSITIVITY ANALYSIS                                                            IV.     SIMULATION RESULT
                                                                                      The complete circuit is simulated using commercially
  The sensitivity analysis of the proposed circuit is presented                   available AD844 and LM13700. The AD844 is used for the
in terms of the sensitivity of ω0 and Q with respect to the                       realization of CCII+ and CCII-. Figure 3 displays the
variation in the passive components as follows:                                   simulation result for the proposed filter. The circuit is designed
                                                                                  for ω0 = 8.7 KHz and Q=0.12 by considering R1 = R4 = R6 =
                      ω                  1                  (7)                10KΩ, C2 = C5 = 10nF, R3 = 14KΩ and gm=13.2mS. The
                    SC20,C5 , R1 , R4 , R6 , g m = −
                                                       2                          theoretical results have been are verified to match with
                                                                       (8)        simulation result. Figure 3 shows that the quality factor of the
                               S   Q
                                   R3   =1                                        notch filter is very high. It is due to the transfer function of the
                                                                                  notch filter is having complex conjugate zeros with zero real
                                                    1                  (9)        values. Figure 4 shows the cutoff/center frequency of the filter
                     S g m , R1 , R4 , R6 ,C5 = −

                                                    2                             with respect to the changes in bias current of the OTA. The
                                                                      (10)        response is showing the when the bias current is higher than the
                              S C2 =
                                Q                                                 output current of the OTA than the frequency variation is linear
                                          2                                       and circuit will be stable.

As per these expressions, both the ω0 and Q sensitivities are                                                  V.     CONCLUSION
less than ± ½ with a maximum value of S
                                                           R3   =1.                   The circuit proposed in this paper generates low-pass, high-
                                                                                  pass, band-pass and notch filter using two current conveyors,
                                                                                  four resistors and two capacitors. The circuit provides more
                                                                                  number of filter realizations at the single output terminal. In
                                                                                  addition of this proposed circuit does not have any matching
                                                                                  constraint/cancellation condition. The circuit employs’
                                                                                  grounded capacitor, suitable for IC fabrication. The circuit
                                                                                  enjoys the othogonality between the cutoff frequency and the
                                                                                  bandwidth. The OTA is linearly tunable when the bias current
                                                                                  is higher than the output current. It has low sensitivities figure
                                                                                  of both active and passive components.

                                                                                  [1]   Y. Tsividis, M. Banu and J. Khoury,” Continious –time MOSFET-C
                                                                                        filters in VLSI,” IEEE journal of solid-state circuits, vol. sc-21, no.1, pp.
                                                                                        15-30, 1986.
                                                                                  [2]   M. Ismail, S. V. Smith and R. G. Beale, “.A new MOSFET-C universal
                                                                                        filter structure for VLSI,” IEEE journal of solid-state circuits, vol. sc-23,
                                                                                        no.1, pp. 182-194, 1988.
                Figure3: Multifunction Filter Response
Figure 4: Frequency vs. Control Bias Current

                                                                                                                     ISSN 1947-5500
                                                                           (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                                     Vol. 8, No. 1, 2010
                         AUTHORS PROFILE

Manish Kumar was born in India in 1977. He received his B.E. in electronics engineering from S.R.T.M.U. Nanded in 1999 and his M.E. degree from the Indian Institute of Science, Bangalore, in 2003. He is pursuing a Ph.D. He is working as faculty in the Electronics and Communication Engineering Department of Jaypee Institute of Information Technology, Noida. He is the author of 10 papers published in scientific journals and conference proceedings. His current research interests include analogue circuits, active filters and fuzzy logic.

M. C. Srivastava received his B.E. degree from Roorkee University (now IIT Roorkee), M.Tech. from the Indian Institute of Technology, Mumbai, and Ph.D. from the Indian Institute of Technology, Delhi, in 1974. He was associated with I.T. BHU, Birla Institute of Technology and Science Pilani, Birla Institute of Technology Ranchi, and the ECE Department of JIIT, Sector-62, Noida. He has published about 60 research papers. His areas of research are signal processing and communications. He was awarded the Meghnad Saha Award for his research paper.

Umesh Kumar is a senior member, IEEE. He received his B.Tech and Ph.D degrees from IIT Delhi. He has published about 100 research papers in various journals and conferences. He is working as faculty in the Electrical Engineering Department, IIT Delhi.


      Artificial Neural Network based Diagnostic
       Model for Causes of Success and Failures

Bikrampal Kaur,                                           Dr. Himanshu Aggrawal,
Dept. of Computer Science & Engineering,                  Dept. of Computer Engineering,
Chandigarh Engineering College,                           Punjabi University,
Mohali, India.                                            Patiala, India.

Abstract— Resource management has always been an area of prime concern for organizations. Of all the resources, human resource has been the most difficult to plan, utilize and manage. Therefore, in the recent past there has been a great deal of research thrust on managing the human resource. Studies have revealed that even the best of information systems fail due to neglect of the human resource. In this paper an attempt has been made to identify the most important human resource factors and to propose a diagnostic model based on the back-propagation and connectionist model approaches of artificial neural networks (ANN). The focus of the study is on the mobile-communication industry of India. The ANN based approach is particularly important because conventional (such as algorithmic) approaches to problem solving have their inherent disadvantages. The algorithmic approach is well suited to problems that are well understood and have known solutions. ANNs, on the other hand, learn by example and have processing capabilities similar to those of the human brain, with human-like intuitive decision-making capabilities. Therefore, this ANN based approach is likely to help researchers and organizations reach a better solution to the problem of managing the human resource. The study is particularly important because many such studies have been carried out in developed countries, while there is a shortage of them in developing nations like India. Here, a model has been derived using the connectionist-ANN approach and improved and verified via the back-propagation algorithm. The suggested ANN based model can be used for testing the human factors behind success and failure in any communication industry. Results have been obtained on the basis of the connectionist model, which has been further refined by BPNN to an accuracy of 99.99%. Any company can directly deploy this model to predict failure due to HR factors.

Keywords— Neural networks, human resource factors, company success and failure factors.

                   I. INTRODUCTION
     Achieving information system success is a major issue for business organizations. Prediction of a company's success or failure is largely dependent on the management of human resource (HR). Appropriate utilization of human resource may lead to the success of the company, and its underutilization may lead to failure.

    In most organizations, management makes use of conventional information systems (IS) for predicting the management of the human resources. In this paper an attempt has been made to identify and suggest HR factors and to propose a model to determine the influence of HR factors leading to failure. This is particularly important as neural networks have proved their potential in several fields such as industry, transport and the dairy sector. India has distinguished IT strength in the global scenario, and using technologies like neural networks is extremely important due to their human-brain-like decision-making capabilities.

    In this paper a neuro-computing approach has been proposed, with some metrics collected through a pre-acquisition step from the communication industry. In this study, an implementation of the back-propagation algorithm has been used to predict the success or failure of a company, and a comparison is made with the connectionist model for predicting the results. The back-propagation learning algorithm is based on the gradient descent method with an adaptive learning mechanism. The configuration of the connectionist approach has also been designed empirically. To this effect, several architectural parameters such as data pre-processing, the data partitioning scheme, the number of hidden layers, the number of neurons in each hidden layer, transfer functions, learning rate, epochs and error goal have been explored empirically to reach an optimum connectionist network.

              II. REVIEW OF LITERATURE
     The review of IS literature suggests that for the past 15 years, the success and failure HR factors in information systems have been a major concern for academics, practitioners, business consultants and researchers.

     A number of researchers and organizations throughout the world have been studying why information systems fail; some important IS failure factors identified by [6,7] are:
     •    Fear-based culture.
     •    Technical fix sought.
     •    Poor reporting structures.
     •    Poor consultation.
     •    Over-commitment.
     •    Changing requirements.
     •    Political pressures.
     •    Weak procurement.

                                                                                                  ISSN 1947-5500
                                                   (IJCSIS) International Journal of Computer Science and Information Security,
                                                   Vol. 8, No. 1, April 2010

    •    Technology focused.
    •    Development sites split.
    •    Leading-edge system.
    •    Project timetable slippage.
    •    Complexity underestimated.
    •    Inadequate testing.
    •    Poor training.

     Six major dimensions of IS, viz. system quality (the measure of IT itself), information quality (the measure of information quality), information use (recipient consumption of IS output), user satisfaction (recipient response to the use of IS output), individual impact (the impact of information on the behavior of the recipient) and organizational impact (the impact of information on organizational performance), had already been proposed [8]. All these dimensions are directly or indirectly related to the HR of IS.

   Cancellation of IS projects [11] is usually due to a combination of:
   • Poorly stated project goals
   • Poor project team composition
   • Lack of project management and control
   • Little technical know-how
   • Poor technology base or infrastructure
   • Lack of senior management involvement
   • Escalating project cost and time of completion

Some of the other elements of failure [12] identified were:
   • Approaches to the conception of systems
   • IS development issues (e.g. user involvement)
   • Systems planning
   • Organizational roles of IS professionals
   • Organizational politics
   • Organizational culture
   • Skill resources
   • Development practices (e.g. participation)
   • Management of change through IT
   • Project management
   • Monetary impact of failure
   • "Soft" and "hard" perceptions of technology
   • Systems accountability
   • Project risk
   • Prior experience with IT
   • Prior experience with development methods
   • Faith in technology
   • Skills, attitude to risk

All the studies note that during the past two decades, investment in information technology and information systems has increased significantly in organizations, but the rate of failure remains quite high. Therefore an attempt is made here to prepare an HR model for the prediction of the success or failure of the organization.

                 III. OBJECTIVES OF STUDY

   (i)   To design an HR model of factors affecting success and failure in Indian organisations of the telecom sector.
   (ii)  To propose a diagnostic ANN based model of the prevailing HR success/failure factors in these organizations.

    A model depicting the important human resource factors has been designed on the basis of the literature survey and the researchers' experience in the industry under study; it is shown in Fig. 1.

                 Fig. 1 Exhaustive View of HR Model

                 IV. EXPERIMENTAL SCHEME

A. Using ANN

   Neural networks differ from the conventional approach to problem solving in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem, and it learns by example. The differences between ANNs and conventional systems are given below in Table I.

                                                                                              ISSN 1947-5500
                                                            (IJCSIS) International Journal of Computer Science and Information Security,
                                                            Vol. 8, No. 1, April 2010

                        TABLE I
        COMPARISON OF ANN AND CONVENTIONAL SYSTEMS

S.No  ANN                                          Conventional Systems
1     Learns by examples                           Solves problems by an algorithmic approach
2     Unpredictable                                Highly predictable and well defined
3     Better decision making due to                No decision making
      human-like intelligence
4     Trial-and-error method of learning           No learning method
5     Combination of IT and human-like             Deals only with IT
      intelligence
6     Cannot be programmed                         Can be programmed

   From Table I it can be seen that ANNs are better suited to problems that are not so well defined and predictable; a further advantage of ANNs is their clustering ability, unlike other conventional systems.
   The application of ANN to the HR factors is shown graphically in Fig. 2.

                 Fig. 2 Levels of HR of IS with ANN

B. Sampling scheme
   The research involves the collection of data from managers working at various levels within the selected enterprises. The total number of respondents in these enterprises, the sample size selection and the application of the neural network approaches have been followed. The study comprises a survey of the employees of two telecommunication companies. With this aim, two prestigious companies (Reliance Communications, Chandigarh and Puncom, Mohali) have been considered in this study. The details of the research methodology adopted in this research are given below.
    1) For the organisation:
      a) Universe of study: The telecommunication industry, comprising Reliance InfoCom, Vodafone, Essar, Idea and Bharti-Airtel.
      b) Sample selection: Reliance InfoCom, Chandigarh and Punjab Communication Ltd. (Puncom), Mohali.
    2) For the respondents:
      a) Universe of study: All managers working at the three levels of the selected organizations.
      b) Sample selection: Respondents were selected from all of these organizations on the basis of proportional stratified sampling, identified from the various levels in each organization. The sample size from a stratum was determined on the basis of the following criterion:
         50% of the population where the sample size > 5;
         100% of the population where the sample size < 5.

C. Data collection tools
    Primary data has been collected through a questionnaire-cum-interview method from the selected respondents (Appendix C). The questionnaire was designed on the basis of the literature survey and detailed discussions with many academicians, professionals and industry experts. The detailed sampling plan for both organizations is shown in Table II.

                            TABLE II
                 DETAILS OF THE SAMPLING PLAN
             PUNCOM, MOHALI AND RELIANCE, CHANDIGARH

Level  Designation                 Universe  Sample  %age  Total sample
I      Executive Director               2       2    100
       General Manager                  7       7    100        17
       Dy. General Manager             16       8     50
II     Assistant General Manager       10       7     70
       Senior Manager                  10       5     50        17
       Manager                         10       5     50
III    Deputy Manager                  30      15     50
       Senior Officer                  90      45     50       135
       Officer                        150      75     50

D. Processing of data
    The responses of the 169 managers of the selected organizations under study were recorded on a five-point Likert scale with scores ranging from 1 to 5. The mean scores of the managers of the two organizations, and those over the whole industry considering all the managers included in the study, were then computed.
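The stratum-size criterion above can be expressed as a small helper (a hypothetical illustration; the function name is an assumption, and the table itself contains a few departures from the rule, e.g. the 70% stratum):

```python
def stratum_sample_size(population: int) -> int:
    """Sample size for one stratum under the paper's criterion:
    take the whole stratum when it is small (< 5), else half of it."""
    if population < 5:
        return population              # 100% of the population
    return round(0.5 * population)     # 50% of the population

# e.g. a stratum of 2 executive directors is fully sampled,
# while a stratum of 30 deputy managers yields 15 respondents.
```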


     The valid responses were entered in Microsoft Excel                        data(Appendix B). The N/W used was
software. Thus, this data formed has been the basis for the                     backpropagtion with training function,traingda
corresponding files on the ANN software. The 70%                                and adaptation learing function, learngdm. The
responses of total inputs scores along with their known                         mean square error MSE was found to be
target from MS-Excel sheet were fed for training the neural                     0.096841. The accuracy of connectionist model
network. The remaining scores of 30% responses were fed                         for the prediction of success and/or failure of
during the testing. Then the error_min of testing found to be                   company results out to be 99.90%
less than the error_min of training data. The accuracy of
99.90% is shown in Table III.
                                                                      Before analysis it is important to define:
     VI.       EXPERIMENT RESULT AND DISCUSSION                       HL: Hidden Layer (e.g. HL1: first Hidden Layer; HL2:
                                                                              second hidden Layer)
A.       Dataset                                                      Epoch: During iterative training of Neural Network, an
    The investigations have been carried out on the data                      epoch is a single pass through the entire training
obtained from telecommunication sector industry. This                         set, Followed by the testing of the verification
industry comprises of Reliance Communication, Vodafone,                       set.
Essay, Idea, and Bharti-Airtel. But the data has been
                                                                      MSE: Mean Square Error Learning Rate Coefficient η -It
undertaken at Reliance Communication, Chandigarh and
                                                                              determines the size of the weight adjustments
Punjab Communication Ltd. (Puncom) Mohali. The specific
                                                                              made at each iteration which influence the rate
choice has been made because:
                                                                              of convergence.
       •     The Telecom sector is very dynamic and fast           The description of the Simulation Results of the Table III
             growing. India is the second largest country of the           has been explained as
             world in mobile usage.
       •     The first industry is the early adopters of IT and    Col-1      It includes the configuration of the network having
             has by now, gained a lot of growth and experience                hidden layers 1(HL1) with 1 neuron and training
             in IS development and whereas the other one lag                  function tansigmoidal, which remain same from
             behind and leads to its failure.                                 35-1000 epochs. Then 2/logsig tried for HL1 in the
                                                                              network, it has 2 neurons and HL2 i.e. hidden layer
                                                                              2 having training faction tansigmodal tried for 35-
   One industry is considered for the study because of the                    1000 epoch. In this way the no. of neurons, training
fact that the working constraints of various organizations                    functions and hidden layers have been changed
under one industry are similar and hence adds to the                          during trial and error method.
reliability of the study finding. The input and output             Col-2     No. of epoch (defined earlier) varies from 35-1000
variables, considered in the study, include strategic                         per cycle
parameter (x1), tactical parameter (x2), operational
parameter (x3), and employee outcome (y). Of the dataset,
52 patterns have been used for training the ANN and the
remaining 23 patterns for testing the network.

B.       Connectionist model
   The well-known ‘trial and error’ approach has been used
throughout this study. The Neural Network Toolbox under
MATLAB R2007a is used for all training as well as
simulation experiments.

         1) Collected scores for both the input data and the
            known target were entered in MS-Excel as input.
         2) The input data (70% of the total data) were
            imported into MATLAB’s workspace for training
            the ANN, as depicted in Appendix A.
         3) The known target data were also imported into
            MATLAB’s workspace.
         4) Both the input data and the target data were then
            entered in the ANN toolbox, and the network was
            created using a back propagation neural network.
         5) Training was done using 70% of the input data,
            and testing (simulation) was then done on the
            remaining 30% of the available data.

Col-3    Error goal, predefined for its tolerance.
Col-4    Learning coefficient.
Col-5    Mean square error for which the network is trained.

     A supervised feed-forward back propagation
connectionist model based on the gradient descent algorithm
has been used. The network was investigated empirically
with a single hidden layer containing different numbers of
hidden neurons, and more layers were gradually added, as
depicted in Table III. Several combinations of parameters,
such as the data partitioning strategy, the number of epochs,
the performance goal, and the transfer functions in the hidden
layers, were explored on a trial and error basis so as to reach
the optimum combination.

     The performance of the models developed in this study
is evaluated in terms of mean square error (MSE) for the
connectionist model using the neural toolkit. The mean
square error indicates that the accuracy of predicting the
success and/or failure of the organization comes out to be
99.90% through this model. The experimental results of
simulating the company success or failure data through this
model are summarized in Table III.

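The five-step toolbox workflow above (70/30 split, network creation, back propagation training, simulation on the held-out data) can be sketched outside MATLAB as well. The following is a minimal illustrative sketch, not the authors' code: the data, the 4-neuron hidden layer, and the synthetic target rule are placeholders, with only the 52/23 pattern split, the `tansig` (tanh) transfer function, the 0.01 learning rate, and the 0.01 error goal taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-3: scores for the three HR parameters (x1..x3) and a known
# target y. Here both are synthetic placeholders for illustration.
X = rng.uniform(0, 5, size=(75, 3))                      # 75 patterns
y = (X.mean(axis=1, keepdims=True) > 2.5).astype(float)  # synthetic target

# 70/30 split: 52 patterns for training, 23 for testing.
split = 52
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Step 4: create the network - one hidden layer of 4 tansig neurons.
W1 = rng.normal(scale=0.5, size=(3, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

lr, goal = 0.01, 0.01           # learning rate and error goal (Table III)
for epoch in range(1000):
    h = np.tanh(X_train @ W1 + b1)          # forward pass (tansig)
    out = h @ W2 + b2
    err = out - y_train
    mse = float((err ** 2).mean())
    if mse <= goal:                         # stop once the goal is met
        break
    # backward pass: gradient descent on the mean square error
    g_out = 2 * err / len(X_train)
    g_h = (g_out @ W2.T) * (1 - h ** 2)     # tanh derivative
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X_train.T @ g_h; b1 -= lr * g_h.sum(axis=0)

# Step 5: simulate (test) on the remaining 30%.
test_mse = float(((np.tanh(X_test @ W1 + b1) @ W2 + b2 - y_test) ** 2).mean())
```

In the study itself this loop is handled by the toolbox's training routine; the sketch only makes the data flow of steps 1-5 explicit.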
                                                                                                ISSN 1947-5500
                                                        (IJCSIS) International Journal of Computer Science and Information Security,
                                                        Vol. 8, No. 1, April 2010

                     TABLE-III
                 USING ANN TOOLKIT

  Network Configuration    Epochs   Error   Learning   MSE
  HL1         HL2                   Goal    rate
  1/ tansig   -            35                          0.738111
  1/ tansig   -            40       0.01    0.01       0.617595
  1/ tansig   -            45       0.01    0.01       0.634038
  1/ tansig   -            50       0.01    0.01       1.33051
  1/ tansig   -            80       0.01    0.01       0.580224
  1/ tansig   -            400      0.01    0.01       0.466721
  1/ tansig   -            1000     0.01    0.01       0.421348
  2/ logsig   1/ tansig    35       0.01    0.01       1.22608
  2/ logsig   1/ tansig    100      0.01    0.01       0.735229
  2/ logsig   1/ tansig    200      0.01    0.01       0.402351
  2/ logsig   1/ tansig    500      0.01    0.01       0.282909
  2/ logsig   1/ tansig    1000     0.01    0.01       0.138904
  3/ logsig   1/ tansig    35       0.01    0.01       0.81653
  3/ logsig   1/ tansig    1000     0.01    0.01       0.143183
  4/ logsig   1/ tansig    1000     0.01    0.01       0.096841

HL1: First hidden layer
HL2: Second hidden layer

     Table III shows that when the first hidden layer has 4
neurons and the second hidden layer has 1 neuron, with 1000
epochs, error goal 0.01 and learning rate 0.01, the mean
square error is 0.096841; the accuracy of the connectionist
model for predicting company failure therefore becomes
99.90%.
     For further improvement, the back propagation approach
has been deployed to reach better results. BPNN code was
written that generates the error value for 1 to 2000 epochs
and shows the change in the mean square error value.

C. Back Propagation Algorithm
    For each input pattern do the following steps.
    Step 1. Set the parameters eta η (…), emax (maximum
            error value) and e (error between output and
            desired output).
    Step 2. Generate weights for the hidden-to-output and
            input-to-hidden layers.
    Step 3. Compute the input to the output nodes.
    Step 4. Compute the error between the output and the
            desired output.
    Step 5. Modify the weights from hidden to output and
            from input to hidden nodes.

Result: This gives an error value of error=0.014289 at no. of
epochs=1405 during training. It showed the error minimum at
epoch 1405, and its weights have been saved for feeding to
the testing algorithm.

     The algorithm has been tested with the 30% of the data
selected randomly from the given data, which results in
error=0.009174 at no. of epochs=13, by programming the
BPNN algorithm in MATLAB as shown in Table IV. The
results from the programming code have been obtained
through MATLAB.

                      TABLE-IV
             BPNN CODE TESTING RESULTS

                        MSE
           error=1.664782  no. of epochs=1
           error=1.496816  no. of epochs=2
           error=1.093136  no. of epochs=3
           error=0.547380  no. of epochs=4
           error=0.476718  no. of epochs=5
           error=0.429089  no. of epochs=6
           error=0.370989  no. of epochs=7
           error=0.303513  no. of epochs=8
           error=0.225575  no. of epochs=9
           error=0.142591  no. of epochs=10
           error=0.071286  no. of epochs=11
           error=0.027775  no. of epochs=12
           error=0.009174  no. of epochs=13

      During testing of the BPNN code, the error minimum
(error=0.009174 at no. of epochs=13) was found to be less
than the error minimum of training, which validates the
algorithm. The accuracy of the BPNN algorithm comes out
to be 99.99%, whereas it was 99.90% in the connectionist
model. This result is therefore better than the one obtained
through the trial and error method (connectionist model)
using the neural network toolkit: the coded BPNN algorithm
has faster performance and better results, i.e. better
prediction within a low number of epochs at testing time.

                   VII CONCLUSION

     HR factors have a strong influence on company success
and failure. Earlier, HR factors were measured through
variance estimation and statistical software. Due to the
inherent advantages of artificial neural networks, they are
being used to replace the existing statistical models. Here, an
ANN-based model has been proposed that can be used for
testing the success and/or failure of human factors in any
communication industry. Results have been obtained on the
basis of the connectionist model, which has been further
refined by BPNN to an accuracy of 99.99%. On the basis of
this model, any company can diagnose failure due to HR
factors by directly deploying it. The limitation of the study is
that it only suggests a diagnostic model of success/failure of
HR factors; it does not pinpoint them.

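As a concrete illustration of the five-step back propagation procedure of section C, the following sketch records the error after each epoch in the style of Table IV. It is not the authors' MATLAB code: the network size, the synthetic data, and the thresholds `eta`, `emax` and `max_epochs` are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: set parameters - learning rate eta, maximum tolerated error
# emax, and a cap on the number of epochs (placeholders).
eta, emax, max_epochs = 0.05, 0.01, 2000

# Synthetic stand-in for the 30% test patterns (3 inputs, 1 target).
X = rng.uniform(0, 5, size=(23, 3))
t = (X.sum(axis=1, keepdims=True) > 7.5).astype(float)

# Step 2: generate weights for input->hidden and hidden->output layers.
W_ih = rng.normal(scale=0.5, size=(3, 4))
W_ho = rng.normal(scale=0.5, size=(4, 1))

history = []
for epoch in range(1, max_epochs + 1):
    # Step 3: compute input -> output for every pattern (forward pass).
    h = np.tanh(X @ W_ih)
    y = np.tanh(h @ W_ho)
    # Step 4: compute the error between output and desired output.
    e = float(((t - y) ** 2).mean())
    history.append((epoch, e))
    if e <= emax:                        # stop at the error minimum
        break
    # Step 5: modify hidden->output and input->hidden weights.
    d_out = (y - t) * (1 - y ** 2)
    d_hid = (d_out @ W_ho.T) * (1 - h ** 2)
    W_ho -= eta * h.T @ d_out
    W_ih -= eta * X.T @ d_hid

for epoch, e in history[:3]:             # Table IV-style output lines
    print(f"error={e:.6f} no. of epochs={epoch}")
```

The per-epoch `error=… no. of epochs=…` lines mirror the format of Table IV; actual values depend entirely on the data and initial weights.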

                       REFERENCES
   [1]  OASIG Report: The Organizational Aspects of Information
        Technology (1996), report entitled “The Performance of
        Information Technology and Role of Organizational
        Factors”.
   [2]  Millan Aikem, University of Mississippi, USA, “Using a
        Neural Network to forecast inflation”, Industrial
        Management & Data Systems, 99/7, 1999, pp. 296-301.
   [3]  G. Bellandi, R. Dulmin and V. Mininno, “Failure rate neural
        analysis in the transport sector”, University of Pisa, Italy,
        International Journal of Operations & Production
        Management, Vol. 18 No. 8, 1998, pp. 778-793, © MCB
        University Press, 0144-3577, New York, NY.
   [4]  Sharma, A. K., Sharma, R. K., Kasana, H. S., 2006,
        “Empirical comparisons of feed-forward connectionist and
        conventional regression approaches for prediction of first
        lactation 305-day milk yield in Karan Fries dairy cows”,
        Neural Computing and Applications 15(3–4), 359–365.
   [5]  Sharma, A. K., Sharma, R. K., Kasana, H. S., 2007,
        “Prediction of first lactation 305-day milk yield in Karan
        Fries dairy cattle using ANN approach”, Applied Soft
        Computing 7(3), 1112–1120.
   [6]  Jay Liebowitz, “A look at why information systems fail”,
        Department of Information Systems, Kybernetes, Vol. 28
        No. 1, 1999, pp. 61-67, © MCB University Press,
        0368-492X, University of Maryland-Baltimore County,
        Rockville, Maryland, USA.
   [7]  Flowers, S. (1997), “Information systems failure: identifying
        the critical failure factors”, Failure and Lessons Learned in
        Information Technology Management: An International
        Journal, Cognizant Communication Corp., Elmsford, New
        York, NY, Vol. 1 No. 1, pp. 19-30.
   [8]  DeLone, W. H., and McLean, E. R., 2004, “Measuring
        E-Commerce Success: Applying the DeLone & McLean
        Information Systems Success Model”, International Journal
        of Electronic Commerce (9:1), Fall, pp. 31-47.
   [9]  Bruce Curry and Luiz Moutinho, “Neural networks in
        marketing: Approaching consumer responses to advertising
        stimuli”, European Journal of Marketing, Vol. 27 No. 7,
        1993, pp. 5-20.
   [10] Demuth, H. B., Beale, M., 2004, User’s Guide for Neural
        Network Toolbox (version 4) for use with MATLAB 6.1,
        The MathWorks Inc., Natick, MA.
   [11] Kweku Ewusi-Mensah, “Critical issues in the abandoned
        information system development projects”, Loyola
        Marymount University, Los Angeles, CA, Volume 40, Issue
        9 (September 1997), pp. 74-80, 1997, ISSN: 0001-7082.
   [12] Angeliki Poulymenakou and Vasilis Serafeimidis, “Failure &
        Lessons Learned in Information Technology Management”,
        Vol. 1, No. 3, 1997, pp. 167-177.


                    AUTHORS PROFILE

[1] Bikram Pal Kaur is an Assistant Professor in the Deptt. of
Computer Science & Information Technology and is also heading the
Deptt. of Computer Application in Chandigarh Engineering College,
Landran, Mohali. She holds the degrees of M.Tech. and M.Phil. and is
currently pursuing research in the field of Information Systems at
Punjabi University, Patiala. She has more than 11 years of teaching
experience and has served many academic institutions. She is an
active researcher who has supervised many B.Tech. projects and
MCA dissertations and has also contributed 12 research papers to
various national & international conferences. Her areas of interest are
Information Systems and ERP.

[2] Dr. Himanshu Aggarwal is Associate Professor (Reader) in
Computer Engineering at University College of Engineering, Punjabi
University, Patiala. He completed his Bachelor’s degree in Computer
Science from Punjabi University, Patiala in 1993. He did his M.E. in
Computer Science in 1999 from Thapar Institute of Engineering &
Technology, Patiala. He completed his Ph.D. in Computer
Engineering from Punjabi University, Patiala in 2007. He has more
than 16 years of teaching experience. He is an active researcher who
has supervised 15 M.Tech. dissertations, is guiding seven Ph.D.
scholars, and has contributed more than 40 articles to international
and national conferences and 22 papers to research journals. Dr.
Aggarwal is a member of many professional societies, such as ICGST
and IAENG. His areas of interest are Information Systems, ERP and
Parallel Computing. He is on the review and editorial boards of
several refereed research journals.


                      APPENDIX A
Table for training data is as follows (70% of data, used for TRAINING):

          Strategic    Tactical    Operational
  Emp1    1            2           1
  Emp2    2            3           1.9
  Emp3    4            1.5         1.5
  Emp4    2            3           4
  Emp5    1.7          1.6         2.5
  Emp6    1            1           1
  Emp7    1.2          1.3         1.4
  Emp8    1.7          1.8         3
  Emp9    1.8          2           4
  Emp10   4            1.8         2
  Emp11   2            5           1
  Emp12   2.5          2.2         2
  Emp13   2.5          2           1.6
  Emp14   1.6          2           2.5
  Emp15   1            -1          1
  Emp16   -1           -1          1
  Emp17   -1           1           -1
  Emp18   1            1           -1
  Emp19   1.2          -1          1
  Emp20   -1           1.2         1.5
  Emp21   3.6          1.2         4
  Emp22   3.6          3.6         3.6
  Emp23   4            4           5
  Emp24   5            4           2
  Emp25   5            5           1
  Emp26   4            5           -1
  Emp27   3            2           -1
  Emp28   0.5          1.5         0.5
  Emp29   2.1          3.1         4.1
  Emp30   5            5           5
  Emp31   0.1          0.2         0.5
  Emp32   0.5          0.7         1.5
  Emp33   4.1          4.2         4.3
  Emp34   5            0.1         0.2
  Emp35   0.1          2           0.1
  Emp36   1.4          5           -1
  Emp37   1.5          4           1
  Emp38   1.6          3           2
  Emp39   2.1          2           3
  Emp40   2.1          1           4
  Emp41   2.3          5           5
  Emp42   2.5          4           -1
  Emp43   3.3          3           1
  Emp44   3.5          2           2


  Emp45   4            1           3
  Emp46   4.9          5           4
  Emp47   4.1          4           5
  Emp48   4.3          3           -1
  Emp49   3.01         2           -1
  Emp50   2.01         -1          1
  Emp51   2.03         5           1
  Emp52   5            4           1


                      APPENDIX B
Table shows the data used for testing the neural network (30% of
data, used for TESTING):

           Strategic    Tactical    Operational
  Empt1    1.3          1.2         1.1
  Empt2    1.5          1.5         1.5
  Empt3    1.7          1.5         1.6
  Empt4    2            1           0
  Empt5    3            2           2
  Empt6    1.6          1.6         1.6
  Empt7    4            1           2
  Empt8    1            4           1.6
  Empt9    2            4           4
  Empt10   3.3          3.1         3.4
  Empt11   2.5          3.5         2
  Empt12   4.1          3.5         2.1
  Empt13   1            1           1
  Empt14   1.3          1.1         1.9
  Empt15   1.8          2.3         2.1
  Empt16   0            0           0
  Empt17   0            1           0
  Empt18   0            0           1
  Empt19   3.5          4.5         5
  Empt20   1.8          1.6         2.9
  Empt21   1            0           0
  Empt22   2.5          1           1
  Empt23   3.5          1.6         1.7

                    APPENDIX C
Questionnaire used for the survey (containing scores from 1-5):
1 - not important, 2 - slightly important, 3 - moderately
important, 4 - fairly important, 5 - most important

                           Factors                         Score
                       Strategic Factors
Support of the top management
Working relationship in a team (users & staff)
Project goals clearly defined to the team
Thorough understanding of the business environment
User involvement in development issues
Attitude towards risk (changes in the job profile due to the
introduction of the computers)
Adequacy of the computer facility to meet functional
requirements (both quality and quantity)
Company technology focused
Over-commitment in the projects
                       Tactical Factors
Organizational politics
Priority of the organizational units to allocate resources
Organizational culture
Skilled resources (ease in the use of the system by users)
The consistency and reliability of information
To obtain the highest returns on investment through system usage
Realization of user requirements
Security of data and models from illegal users
Documentation (formal instructions for the usage of IS)
The balance between cost and benefit of computer-based
information products/services
User training
The flexibility in the system for corrective action in case of
problematic output
Testing of the system before implementation
                       Operational Factors
Professional standard maintenance (H/W, S/W, O.S., user
accounts, maintenance of the system)
The response of staff to changes in the existing system
Trust of staff in the change for the betterment of the system
The way users input data and receive output
The accuracy (correctness) of the output
The completeness (comprehensiveness) of the information
A well-defined language for interaction with computers
The volume of output generated by the system for a user
Faith in the technology/system by the user
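The 1-5 scores collected with this questionnaire are grouped by factor category to form the strategic, tactical and operational inputs of Appendices A and B. One plausible aggregation (the paper does not state its exact scheme, so this is an assumption) is a simple per-category mean:

```python
# Illustrative sketch: aggregate 1-5 questionnaire item scores into one
# score per factor category. The item values below are hypothetical
# responses for a single employee, not data from the paper.

def factor_scores(responses):
    """responses: dict mapping factor category -> list of 1-5 item scores.
    Returns the mean score per category (e.g. the strategic/tactical/
    operational triple fed to the ANN for one respondent)."""
    return {cat: sum(items) / len(items) for cat, items in responses.items()}

emp = {
    "strategic":   [4, 3, 5, 2, 4, 3, 4, 2, 3],   # 9 strategic items
    "tactical":    [2, 3, 3, 4, 2, 3, 4, 3, 2],   # tactical items
    "operational": [3, 4, 2, 3, 4, 3, 2, 4, 3],   # operational items
}
scores = factor_scores(emp)
```

Other aggregations (weighted sums, normalization to a different range) would work equally well; only the grouping into the three categories is taken from the appendix.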


            Detecting Security threats in the Router using
                     Computational Intelligence
                           J.Visumathi                                                    Dr. K. L. Shunmuganathan
                         Research Scholar                                             Professor & Head, Department of CSE
                      Sathyabama University,                                               R.M.K. Engineering College
                         Chennai-600 119                                                        Chennai-601 206

   Information security is an issue of global concern. While the
Internet is delivering great convenience and benefits to modern
society, the rapidly increasing connectivity and accessibility to
the Internet is also posing a serious threat to security and
privacy, to individuals, organizations, and nations alike. Finding
effective ways to detect, prevent, and respond to intrusions and
hacker attacks on networked computers and information systems
is therefore essential. This paper presents a knowledge discovery
framework to detect DoS attacks at the boundary controllers
(routers). The idea is to use a machine learning approach to
discover network features that can depict the state of the
network connection. Using important network data (DoS-
relevant features), we have developed kernel machine based and
soft computing detection mechanisms that achieve high detection
accuracies. We also present our work of identifying DoS-
pertinent features and evaluating the applicability of these
features in detecting novel DoS attacks. An architecture for
detecting DoS attacks at the router is presented. We demonstrate
that highly efficient and accurate signature-based classifiers can
be constructed by using important network features and machine
learning techniques to detect DoS attacks at the boundary
controllers.

Keywords: Denial of service attacks, information assurance,
intrusion detection, machine learning, feature ranking, data
reduction

1    Introduction
       By nature the Internet is public, connected, distributed,
open, and dynamic. The phenomenal growth of computing
devices, connectivity speed, and the number of applications
running on networked systems has posed an increasing risk to
the Internet. Malicious usage, attacks, and sabotage have been
on the rise as more and more computing devices are put into
use. Connecting information systems to networks such as the
Internet and public telephone systems further magnifies the
potential for exposure through a variety of attack channels.
These attacks take advantage of the flaws or omissions that
exist within the various information systems and software that
run on many hosts in the network.

     In DoS attacks the adversary mainly targets a few
services like network bandwidth, router or server CPU cycles,
system storage, operating system data structures, protocol
data structures and software vulnerabilities. A DoS attack can
be a single-source attack, originating at a single host, or a
multi-source attack, where multiple hosts and networks are
involved. DoS attacks can take advantage of the distributed
nature of the Internet by launching a multiplicative effect,
resulting in distributed DoS. Due to the use of dynamic
protocols and address spoofing, detecting distributed and
automated attacks still remains a challenge.

     Efforts to define and characterize denial of service
attacks through a collection of different perspectives, such as
bandwidth, process information, system information, user
information and IP address, have been made by several
researchers [1,6]. Using the defined characteristics, a few
signature-based and anomaly-based detection techniques have
been proposed [2,9]. Recent malware and distributed DoS
attacks proved that there exists no effective means to detect,
respond to and mitigate availability attacks.

     In this paper we propose a router-based approach to
detect denial of service attacks using intelligent systems. A
comparative study of support vector machines (SVMs),
multivariate adaptive regression splines (MARS) and linear
genetic programs (LGPs) for detecting denial of service
attacks is performed through a variety of experiments on a
well-known Lincoln Labs data set that consists of more than
80% of the different denial of service attacks described in
section 2. We address the use of a machine learning approach
to discover network features that can depict the state of the
network connection. We also present our work of identifying
DoS-pertinent features from a publicly available intrusion
detection data set and evaluating the applicability of these
features in detecting novel DoS attacks on a live performance
network. An architecture for detecting DoS attacks at the
routers is also presented.

     In the rest of the paper, a brief introduction to the data
used and DoS attacks is given in section 2. An overview of
the soft computing paradigms used is given in section 3.
Experiments for detecting DoS attacks using MARS, SVMs
and LGPs are given in section 4. Significant feature
identification techniques are presented in section 5. In section


6 we present the architecture and the applicability of the DoS-
significant features in detecting DoS attacks at the routers.
Conclusions are presented in section 7.

2     Intrusion detection data
      A subset of the DARPA intrusion detection data set is
used for off-line analysis. In the DARPA intrusion detection
evaluation program, an environment was set up to acquire raw
TCP/IP dump data for a network by simulating a typical U.S.
Air Force LAN. The LAN was operated like a real
environment, but was blasted with multiple attacks [5,11].
For each TCP/IP connection, 41 various quantitative and
qualitative features were extracted [16]. The 41 features
extracted fall into three categories: “intrinsic” features, which
describe the individual TCP/IP connections and can be
obtained from network audit trails; “content-based” features,
which describe the payload of the network packet and can be
obtained from the data portion of the network packet; and
“traffic-based” features, which are computed using a specific
window.

2.1    Denial of service attacks
      Attacks designed to make a host or network incapable of
providing normal services are known as denial of service
attacks. There are different types of DoS attacks: a few of
them abuse the computers’ legitimate features; a few target
implementation bugs; and a few exploit misconfigurations.
DoS attacks are classified based on the services that an
adversary makes unavailable to legitimate users. A few
examples include preventing legitimate network

(Table of DoS attacks, continued: Udpstorm – Echo – Slows
down the network.)

3     Soft computing paradigms
      Soft computing was first proposed by Zadeh to construct
new-generation computationally intelligent hybrid systems
consisting of neural networks, fuzzy inference systems,
approximate reasoning and derivative-free optimization
techniques. It is well known that intelligent systems, which
can provide human-like expertise such as domain knowledge,
uncertain reasoning, and adaptation to a noisy and time-
varying environment, are important in tackling practical
computing problems. In contrast with conventional Artificial
Intelligence (AI) techniques, which only deal with precision,
certainty and rigor, the guiding principle of hybrid systems is
to exploit the tolerance for imprecision, uncertainty, low
solution cost, robustness and partial truth to achieve
tractability and better rapport with reality.

3.1    Support vector machines
      The SVM approach transforms data into a feature space
F that usually has a huge dimension. It is interesting to note
that SVM generalization depends on the geometrical
characteristics of the training data, not on the dimensions of
the input space [3,4]. Training a support vector machine
(SVM) leads to a quadratic optimization problem with bound
constraints and one linear equality constraint. Vapnik shows
how training a SVM for the pattern recognition problem leads
traffic, preventing access to services for a group or                     to the following quadratic optimization problem .
individuals. DoS attacks used for offline experiments and                 Minimize:
identifying significant features are presented in table 1 [5,11].                       l                 l      l
                                                                                       ∑                 ∑ ∑ y i y jα iα j k ( xi , x j ) (1)
                                                                          W (α ) = −          αi +
                                                                                       i =1              i =1   j =1
            TABLE 1: DoS Attack Description                                              l
           Attack Type
                                  Effect of the
                                      attack                              Subject to   ∑ y iα i                 (2)
                                                                                       i =1
            Apache2       http    Crashes httpd                                      ∀i : 0 ≤ α i ≤ C
                                   Freezes the
              Land        http
                                    machine                               Where l is the number of training examples α is a vector of l
           Mail bomb      N/A      Annoyance                              variables and each component α i corresponds to a training
                                                                          example (xi, yi). The solution of (1) is the vector α for which
                                 Denies service                                                                                *
           SYN Flood      TCP       on one or
                                                                          (1) is minimized and (2) is fulfilled.
                                   more ports
          Ping of Death        Icmp          None
                                                                          3.2    Linear genetic programs
                                          Denies new
           Process table       TCP                                             LGP is a variant of the Genetic Programming (GP)
                                          Slows down                      technique that acts on linear genomes . The linear genetic
              Smurf            Icmp                                       programming technique used for our current experiment is
                                          the network
                                            Kills the                     based on machine code level manipulation and evaluation of
              Syslogd         Syslog                                      programs. Its main characteristics in comparison to tree-
                                                                          based GP lies is that the evolvable units are not the
                                          Reboots the
             Teardrop           N/A                                       expressions of a functional programming language (like
                                                                          LISP), but the programs of an imperative language (like C)

                                                                                                                ISSN 1947-5500
                                                        (IJCSIS) International Journal of Computer Science and Information Security,
                                                        Vol. 8, No. 1, April 2010

are evolved. In the Automatic Induction of Machine Code by
Genetic Programming, individuals are manipulated directly as
binary code in memory and executed directly without passing
an interpreter during fitness calculation. The LGP tournament
selection procedure puts the lowest selection pressure on the                TABLE 2: Classifier Evaluation for Offline DoS Data
individuals by allowing only two individuals to participate in                                    Classifier Accuracy (%)
a tournament. A copy of the winner replaces the loser of each                    Class
tournament. The crossover points only occur between                                           SVM          LGP         MARS
instructions. Inside instructions the mutation operation
randomly replaces the instruction identifier, a variable or the                   Normal         98.42           99.64            99.71
constant from valid ranges. In LGP the maximum size of the
program is usually restricted to prevent programs without
                                                                                   DoS           99.45           99.90              96
bounds. As LGP could be implemented at machine code
level, it will be fast to detect intrusions in a near real time
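As a concrete illustration of the SVM dual problem in (1)-(2) from section 3.1, the following sketch evaluates W(α) with a linear kernel on a two-point toy data set; the data points and the α values are hypothetical, chosen only so that the equality constraint (2) holds.

```python
# Toy evaluation of the SVM dual objective of Eq. (1) with a linear
# kernel k(x, z) = x . z. Data and alpha values are hypothetical.

def linear_kernel(x, z):
    # Dot product between two feature vectors.
    return sum(a * b for a, b in zip(x, z))

def dual_objective(alpha, X, y, kernel=linear_kernel):
    """W(alpha) = -sum_i alpha_i + 1/2 sum_i sum_j y_i y_j alpha_i alpha_j k(x_i, x_j)."""
    l = len(X)
    quad = sum(y[i] * y[j] * alpha[i] * alpha[j] * kernel(X[i], X[j])
               for i in range(l) for j in range(l))
    return -sum(alpha) + 0.5 * quad

# Two linearly separable points, one per class.
X = [(1.0, 1.0), (-1.0, -1.0)]
y = [+1, -1]
alpha = [0.5, 0.5]  # chosen so that sum_i y_i alpha_i = 0, as Eq. (2) requires

print(sum(yi * ai for yi, ai in zip(y, alpha)))  # → 0.0  (constraint (2) holds)
print(dual_objective(alpha, X, y))               # → 0.0
```

An actual SVM trainer would minimize this objective over α subject to (2); the sketch only shows how the objective is assembled from the kernel matrix.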
3.3   Multi adaptive regression splines

      Splines can be considered an innovative mathematical process for complicated curve drawing and function approximation. To develop a spline, the X-axis is broken into a convenient number of regions. The boundary between regions is also known as a knot. With a sufficiently large number of knots, virtually any shape can be well approximated. While it is easy to draw a spline in two dimensions by keying on knot locations (approximating using linear, quadratic or cubic polynomials, etc.), manipulating the mathematics in higher dimensions is best accomplished using basis functions. The MARS model is a regression model that uses basis functions as predictors in place of the original data. The basis function transform makes it possible to selectively blank out certain regions of a variable by making them zero, and allows MARS to focus on specific sub-regions of the data. It excels at finding optimal variable transformations and interactions, and the complex data structure that often hides in high-dimensional data.

4     Offline evaluation

      We partition the data into the two classes of "Normal" and "DoS" patterns, where the DoS class is a collection of six different attacks (back, neptune, ping of death, land, smurf, and teardrop). The objective is to separate normal and DoS patterns. The (training and testing) data set contains 11,982 records randomly generated from the data described in section 2, with the number of records from each class proportional to its size, except that the smallest class is completely included. A different randomly selected set of 6,890 points of the total data set (11,982) is used for testing the different soft computing paradigms. Results of the SVM, MARS and LGP classifications are given in Table 2.

5     Significant feature identification

      Feature selection and ranking is an important issue in intrusion detection. Of the large number of features that can be monitored for intrusion detection purposes, which are truly useful, which are less significant, and which may be useless? The question is relevant because the elimination of useless features enhances the accuracy of detection while speeding up the computation, thus improving the overall performance of an IDS. In cases where there are no useless features, by concentrating on the most important ones we may well improve the time performance of an IDS without affecting the accuracy of detection in statistically significant ways. The feature ranking problem is characterized by:

•     Having a large number of input variables x = (x1, x2, …, xn) of varying degrees of importance to the output y; i.e., some elements of x are essential, some are less important, some of them may not be mutually independent, and some may be useless or irrelevant (in determining the value of y)

•     Lacking an analytical model that provides the basis for a mathematical formula that precisely describes the input-output relationship, y = F(x)

•     Having available a finite set of experimental data, based on which a model (e.g. neural networks) can be built for simulation and prediction purposes

5.1   Support vector decision function ranking

      Information about the features and their contribution towards classification is hidden in the support vector decision function. Using this information one can rank their significance, i.e., in the equation

      F(X) = Σ_i W_i X_i + b        (3)
      The point X belongs to the positive class if F(X) is a positive value, and to the negative class if F(X) is negative. The value of F(X) depends on the contribution of each value of X and W_i. The absolute value of W_i measures the strength of the classification: if W_i is a large positive value, then the ith feature is a key factor for the positive class; if W_i is a large negative value, then the ith feature is a key factor for the negative class; and if W_i is close to zero on either the positive or the negative side, then the ith feature does not contribute significantly to the classification. Based on this idea, a ranking can be done by considering the support vector decision function.

                TABLE 3: Most significant features as ranked by SVDF, LGP and MARS

      Ranking method   Feature Description
      SVDF             count; service count; dst_host_srv_serror_rate;
                       dst_host_same_src_port_rate; dst_host_serror_rate
      LGP              count; compromised conditions; wrong fragments;
                       logged in; hot
      MARS             count; service count; source bytes; destination bytes

5.2   Linear genetic ranking algorithm

      In the feature selection problem the interest is in the representation of the space of all possible subsets of the given input set. An individual of length d corresponds to a d-dimensional binary feature vector Y, where each bit represents the elimination or inclusion of the associated feature: y_i = 0 represents elimination and y_i = 1 indicates inclusion of the ith feature. The fitness F of an individual program p is calculated as the mean square error (MSE) between the predicted output O_ij and the desired output O_ij^des for all n training samples and m outputs, plus a weighted classification-error term:

      F(p) = (1/(n·m)) Σ_{i=1..n} Σ_{j=1..m} (O_ij − O_ij^des)² + w·MCE = MSE + w·MCE        (4)
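The SVDF idea of section 5.1 (order features by the magnitude |W_i| of the weights in equation (3)) can be sketched as follows; the weights and feature names below are hypothetical illustrations, not values taken from the experiments.

```python
# Sketch of support-vector-decision-function (SVDF) feature ranking:
# given weights W_i of a linear decision function F(X) = sum_i W_i X_i + b,
# rank features by decreasing |W_i|. Weights and names are hypothetical.

def rank_features(weights, names):
    """Return feature names sorted by decreasing |W_i|."""
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    return [names[i] for i in order]

names = ["duration", "src_bytes", "dst_bytes", "count"]
weights = [0.02, -1.4, 0.3, 2.1]  # hypothetical trained weights

print(rank_features(weights, names))
# → ['count', 'src_bytes', 'dst_bytes', 'duration']
```

Features whose weights land near zero (here "duration") contribute little to F(X) and are candidates for elimination, which is exactly the pruning rationale described above.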

      Classification Error (CE) is computed as the number of misclassifications. The Mean Classification Error (MCE) is added to the fitness function, with its contribution controlled by the absolute value of the weight w.

5.3   Multi adaptive regression splines ranking

      Generalized cross-validation (GCV) is an estimate of the actual cross-validation that involves more computationally intensive goodness-of-fit measures. Along with the MARS procedure, a generalized cross-validation procedure is used to determine the significant input features; non-contributing input variables are thereby eliminated.

      GCV = (1/N) Σ_{i=1..N} [ (y_i − f(x_i)) / (1 − k/N) ]²        (5)

where N is the number of records, and x and y are the independent and dependent variables respectively. k is the effective number of degrees of freedom, whereby the GCV adds a penalty for adding more input variables to the model.

5.4   Significant feature off line evaluation

      Descriptions of the most important features as ranked by the three feature-ranking algorithms (SVDF, LGP, and MARS) are given in Table 3. Classifier performance using all 41 features and using the most important 6 features as inputs to the classifier is given in Table 4.

                TABLE 4: Significant feature evaluation

                        Classification accuracy (%)
                     41 features              6 features
      Classifier     Normal      DoS          Normal      DoS
      SVM            98.42       99.45        99.23       99.16
      LGP            99.64       99.90        99.77       99.14
      MARS           99.71       96           99.80       95.47

6     Real time router based DoS detection

      A passive sniffer can be placed at the router to collect data for detecting DoS attacks.

      The architecture comprises three components: a packet parser, a classifier and a response module. The network packet parser uses the WINPCAP library to capture packets and extracts the relevant features required for DoS detection. The output of the parser includes the twelve DoS-relevant features as selected by our ranking algorithm [7,8].

      The output summary of the parser includes the eleven features of (1) duration of the connection to the target machine, (2) protocol used to connect, (3) service type, (4) status of the connection (normal or error), (5) number of source bytes, (6) number of destination bytes, (7) number of connections to the


same host as the current one during a specified time window (in our case 0.01 seconds), (8) number of connections to the same host as the current one using the same service during the past 0.01 seconds, (9) percentage of connections that have SYN errors during the past 0.01 seconds, (10) percentage of connections that have SYN errors while using the same service during the past 0.01 seconds, and (11) percentage of connections to the same service during the past 0.01 seconds.

      We experimented with more than 24 types of DoS attacks, including the 6 types of DoS described in section 4 and 17 additional types. In the experiments performed we used different types of DoS attacks: SYN flood, SYN full, MISFRAG, SYNK, Orgasm, IGMP flood, UDP flood, Teardrop, Jolt, Overdrop, ICMP flood, FIN flood, and Wingate crash, with different service and port options. Normal data included multiple sessions of http, ftp, telnet, SSH, SMTP, pop3 and imap. Network data originating from a host to the server, which included both normal and DoS traffic, was collected for analysis; for proper labeling of the data for training the classifier, the normal data and the DoS data were collected at different times.

      Figure 2. Architecture for detecting DoS attacks at the routers (the DoS monitor is placed on the communication link between the router and the servers: mail server, Apache web server).

                TABLE 5: Router based detection accuracies

      Actual class    Classifier    Detected Normal    Detected DoS    Accuracy (%)
      Normal          SVM           2692               14              99.48
                      LGP           2578               128             95.26
                      MARS          1730               976             63.9
      DoS             SVM           538                2141            79.91
                      LGP           153                2526            94.28
                      MARS          0                  2679            100
      Accuracy (%)    SVM           83.77              80.44
                      LGP           99.08              99.06
                      MARS          63.9               73.2

      The top-left entries of Table 5 show that 2692, 2578, and 1730 of the actual "normal" test set were detected to be normal by SVM, LGP and MARS respectively; the last column indicates that 99.48, 95.26 and 63.9 % of the actual "normal" data points were detected correctly. In the same way, for "DoS", 2141, 2526 and 2679 of the actual "attack" test set were correctly detected; the last column indicates that 79.91, 94.28 and 100 % of the actual "DoS" data points were detected correctly. The bottom rows show that 83.77, 99.08 and 63.9 % of the test set said to be "normal" indeed were "normal", and that 80.44, 99.06 and 73.2 % of the test set classified as "DoS" indeed belong to DoS, as classified by SVM, LGP and MARS respectively.

7     Conclusions

      A number of observations and conclusions are drawn from the results reported:

•     A comparison of different soft computing techniques is given. The linear genetic programming technique outperformed SVM and MARS with a 94.28 % detection rate on the real time test dataset.

      Regarding significant feature identification, we observe that

•     The three feature ranking methods produce largely consistent results.

•     The most significant features for the two classes of "Normal" and "DoS" heavily overlap.

•     Using the 6 most important features for each class gives remarkable performance.

      Regarding real time router based DoS detection, we observe that

•     DoS attacks can be detected at the router, thus pushing the detection as far out as possible in the network.

•     "Third generation worms" can be detected by tuning the time based features.

•     "Low and slow" DoS attacks can be detected by judiciously selecting the time based and connection based features.
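The time-window traffic features used by the router-based monitor in section 6 (counts of recent connections to the same host, and to the same host and service, within a 0.01 s window) can be sketched as follows; the record format is a simplified hypothetical stand-in for the parser output, not the actual WINPCAP-based implementation.

```python
# Sketch of the time-window traffic features from section 6: for each new
# connection, count connections to the same host (and to the same host with
# the same service) whose timestamps fall within the past 0.01 s. The record
# format is a hypothetical stand-in for the packet parser's output.

WINDOW = 0.01  # seconds

def window_features(records, t, host, service):
    """Return (same-host count, same-host-and-service count) among
    records with timestamp in (t - WINDOW, t]."""
    recent = [r for r in records if t - WINDOW < r["t"] <= t]
    same_host = [r for r in recent if r["host"] == host]
    same_srv = [r for r in same_host if r["service"] == service]
    return len(same_host), len(same_srv)

log = [
    {"t": 0.000, "host": "10.0.0.5", "service": "http"},
    {"t": 0.004, "host": "10.0.0.5", "service": "http"},
    {"t": 0.006, "host": "10.0.0.5", "service": "telnet"},
    {"t": 0.030, "host": "10.0.0.5", "service": "http"},
]
print(window_features(log, 0.008, "10.0.0.5", "http"))  # → (3, 2)
```

A flood-style attack drives these counters up sharply within the window, which is what lets the classifier flag it; a "low and slow" attack requires widening the window or adding connection-based features, as noted in the conclusions.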


      We take immense pleasure in thanking our Chairman Dr. Jeppiaar M.A, B.L, Ph.D, the Directors of Jeppiaar Engineering College Mr. Marie Wilson, B.Tech, MBA, (Ph.D) and Mrs. Regeena Wilson, B.Tech, MBA, (Ph.D), and the Principal Dr. Sushil Lal Das M.Sc(Engg.), Ph.D for their continual support and guidance. We would like to extend our thanks to our guide, our friends and family members, without whose inspiration and support our efforts would not have come true. Above all, we would like to thank God for making all our efforts a success.

References

[1] W. J. Blackert, D. C. Furnanage, and Y. A. Koukoulas, "Analysis of Denial of Service Attacks Using an Address Resolution Protocol Attack", Proc. of the 2002 IEEE Workshop on Information Assurance, US Military Academy, pp. 17-22, 2002.

[2] D. W. Gresty, Q. Shi, and M. Merabti, "Requirements for a General Framework for Response to Distributed Denial of Service", Proc. of the Seventeenth Annual Computer Security Applications Conference, pp. 422-229, 2001.

[3] T. Joachims, "Making Large-Scale SVM Learning Practical", LS8-Report, University of Dortmund, LS VIII-Report, 2000.

[4] T. Joachims, "SVMlight is an Implementation of Support Vector Machines (SVMs) in C", University of Dortmund, Collaborative Research Center on Complexity Reduction in Multivariate Data (SFB475), 2000.

[5] K. Kendall, "A Database of Computer Attacks for the Evaluation of Intrusion Detection Systems", Master's Thesis, Massachusetts Institute of Technology, 1998.

[6] J. Mirkovic, J. Martin, and P. Reiher, "A Taxonomy of DDoS Attacks and DDoS Defense Mechanisms", Technical Report #020017, Department of Computer Science, UCLA.

[7] S. Mukkamala and A. H. Sung, "Identifying Key Features for Intrusion Detection Using Neural Networks", Proc. ICCC International Conference on Computer Communications, pp. 1132-1138, 2002.

[8] S. Mukkamala and A. H. Sung, "Feature Selection for Intrusion Detection Using Neural Networks and Support Vector Machines", Journal of the Transportation Research Board of the National Academies, Transportation Research Record No. 1822, pp. 33-39, 2003.

[9] C. Shields, "What Do We Mean by Network Denial of Service?", Proc. of the 2002 IEEE Workshop on Information Assurance, US Military Academy, pp. 196-203, 2002.

[10] J. Stolfo, F. Wei, W. Lee, A. Prodromidis, and P. K. Chan, "Cost-based Modeling and Evaluation for Data Mining with Application to Fraud and Intrusion Detection: Results from the JAM Project", 1999.

[11] S. E. Webster, "The Development and Analysis of Intrusion Detection Algorithms", S.M. Thesis, Massachusetts Institute of Technology, 1998.

      J. Visumathi, B.E., M.E., (Ph.D), works as Assistant Professor in Jeppiaar Engineering College, Chennai. She has more than 10 years of teaching experience, and her areas of specialization are Networks, Artificial Intelligence, and DBMS.

      Dr. K. L. Shunmuganathan, B.E., M.E., M.S., Ph.D, works as the Professor & Head of the CSE Department of RMK Engineering College, Chennai, TamilNadu, India. He has more than 18 years of teaching experience, and his areas of specialization are Networks, Artificial Intelligence, and DBMS.


  A Novel Algorithm for Informative Meta
    Similarity Clusters Using Minimum
               Spanning Tree
                S. John Peter                                                          S. P. Victor
             Assistant Professor                                                   Associate Professor
    Department of Computer Science and                                    Department of Computer Science and
              Research Center                                                        Research Center
         St. Xavier’s College, Palayamkottai                          St. Xavier’s College, Palayamkottai
             Tamil Nadu, India.                                                    Tamil Nadu, India.

ABSTRACT - The minimum spanning tree clustering algorithm is capable of detecting clusters with irregular boundaries. In this paper we propose two minimum spanning tree based clustering algorithms. The first algorithm produces k clusters with centers and guaranteed intra-cluster similarity. The radius and diameter of the k clusters are computed to find the tightness of the k clusters. The variance of the k clusters is also computed to find the compactness of the clusters. The second algorithm creates a dendrogram using the k clusters as objects, with guaranteed inter-cluster similarity. The algorithm also finds the central cluster among the k clusters. The first algorithm uses a divisive approach, whereas the second algorithm uses an agglomerative approach. In this paper we use both approaches to find Informative Meta similarity clusters.

Key Words: Euclidean minimum spanning tree, Subtree, Eccentricity, Center, Tightness, Hierarchical clustering, Dendrogram, Radius, Diameter, Central clusters, Compactness.

I. INTRODUCTION

Consider a connected, undirected graph G = (V, E), where V is the set of nodes, E is the set of edges between pairs of nodes, and a weight w(u, v) specifies the weight of each edge (u, v) ∈ E. A spanning tree is an acyclic subgraph of a graph G which contains all the vertices of G. The Minimum Spanning Tree (MST) of a weighted graph is the minimum-weight spanning tree of that graph. Several well-established algorithms exist to solve the minimum spanning tree problem [21, 15, 17]. The cost of constructing a minimum spanning tree is O(m log n), where m is the number of edges in the graph and n is the number of vertices. More efficient algorithms for constructing MSTs have also been extensively researched [13, 5, 8]. These algorithms promise close to linear time complexity under different assumptions. A Euclidean minimum spanning tree (EMST) is a spanning tree of a set of n points in a metric space (E^n), where the length of an edge is the Euclidean distance between a pair of points in the point set.

The hierarchical clustering approaches are related to graph-theoretic clustering. Clustering algorithms using a minimal spanning tree take advantage of the MST. The MST ignores many possible connections between the data patterns, so the cost of clustering can be decreased. The MST-based clustering algorithm is known to be capable of detecting clusters of various shapes and sizes [24]. Unlike traditional clustering algorithms, the MST clustering algorithm does not assume a spherical structure of the underlying data. The EMST clustering algorithm [20, 24] uses the Euclidean minimum spanning tree of a graph to produce the structure of point clusters in the n-dimensional Euclidean space. Clusters are detected so as to achieve some measure of optimality, such as minimum intra-cluster distance or maximum inter-cluster distance [2]. The EMST algorithm has been widely used in practice.

Clustering by minimal spanning tree can be viewed as a hierarchical clustering algorithm which follows a divisive approach. In this method an MST is first constructed for the given input. There are different methods to produce groups of clusters. If the number of clusters k is given in advance, the simplest way to obtain k clusters is to sort the edges of the minimum spanning

tree in descending order of their weights and remove the k-1 edges with the heaviest weights [2, 23].

Geometric notions of centrality are closely linked to the facility location problem. The distance matrix D can be computed rather efficiently using Dijkstra’s algorithm with time complexity O(|V|^2 ln |V|) [22].

The eccentricity of a vertex x in G and the radius ρ(G) are defined, respectively, as

    e(x) = max { d(x, y) : y ∈ V }    and    ρ(G) = min { e(x) : x ∈ V }

The center of G is the set

    C(G) = { x ∈ V | e(x) = ρ(G) }

C(G) is the solution to the “emergency facility location problem”, and it is always contained in a single block of G. The length of the longest path in the graph is called the diameter of the graph G. We can define the diameter D(G) as

    D(G) = max { e(x) : x ∈ V }

The diameter set of G is

    Dia(G) = { x ∈ V | e(x) = D(G) }

All existing clustering algorithms require a number of parameters as their inputs, and these parameters can significantly affect the cluster quality. In this paper we want to avoid experimental methods and advocate the idea of need-specific as opposed to case-specific, because users always know the needs of their applications. We believe it is a good idea to allow users to define their desired similarity within a cluster and to allow them some flexibility to adjust the similarity if the adjustment is needed. Our algorithm produces clusters of n-dimensional points with a given cluster number and a naturally approximate intra-cluster distance.

Hierarchical clustering is a sequence of partitions in which each partition is nested into the next in the sequence. An agglomerative algorithm for hierarchical clustering starts with a disjoint clustering, which places each of the n objects in an individual cluster [1]. The hierarchical clustering algorithm being employed dictates how the proximity matrix or proximity graph should be interpreted to merge two or more of these trivial clusters, thus nesting the trivial clusters into a second partition. The process is repeated to form a sequence of nested clusterings in which the number of clusters decreases as the sequence progresses, until a single cluster containing all n objects, called the conjoint clustering, remains [1].

An important objective of hierarchical cluster analysis is to provide a picture of the data that can easily be interpreted. A picture of a hierarchical clustering is much easier for a human being to comprehend than a list of abstract symbols. A dendrogram is a special type of tree structure that provides a convenient way to represent hierarchical clustering. A dendrogram consists of layers of nodes, each representing a cluster.

In this paper we propose two EMST based clustering algorithms to address the issues of undesired clustering structure and an unnecessarily large number of clusters. Our first algorithm assumes the number of clusters is given. The algorithm constructs an EMST of a point set and removes the inconsistent edges that satisfy the inconsistency measure. The process is repeated to create a hierarchy of clusters until k clusters are obtained. In Section 2 we review some of the existing work on graph based clustering algorithms and central trees in minimum spanning trees. In Section 3 we propose the EMSTRD algorithm, which produces k clusters with center, radius, diameter and variance. We also propose another algorithm, called EMSTUCC, for finding the cluster of clusters using the k clusters obtained from the EMSTRD algorithm. The algorithm also finds the central cluster. Hence we name these new clusters Informative Meta similarity clusters. The radius, diameter and variance of each subtree (cluster) are used to find the tightness and compactness of clusters. Finally, in the conclusion we summarize the strengths of our methods and possible improvements.

II. RELATED WORK

Clustering by minimal spanning tree can be viewed as a hierarchical clustering algorithm which follows the divisive approach. Clustering algorithms based on minimum and maximum spanning trees have been extensively studied. Avis [3] found an O(n^2 log^2 n) algorithm for the min-max diameter-2 clustering problem. Asano, Bhattacharya, Keil and Yao [2] later gave an optimal O(n log n) algorithm using maximum spanning trees for minimizing the maximum diameter of a bipartition. The problem becomes NP-complete when the number of partitions is beyond two [12]. Asano, Bhattacharya, Keil and Yao also considered the clustering problem in which the goal is to maximize the minimum inter-cluster distance. They gave a k-partition of a point set by removing the k-1 longest edges from the

minimum spanning tree constructed from that point set [2]. The identification of inconsistent edges causes problems in the MST clustering algorithm. There exist numerous ways to divide clusters successively, but there is no suitable choice for all cases.

Zahn [24] proposes to construct the MST of a point set and delete the inconsistent edges, that is, the edges whose weights are significantly larger than the average weight of the nearby edges in the tree. Zahn’s inconsistency measure is defined as follows. Let e denote an edge in the MST of the point set, let v1 and v2 be the end nodes of e, and let w be the weight of e. A depth-d neighborhood N of an end node v of an edge e is defined as the set of all edges that belong to all paths of length d originating from v, excluding the paths that include the edge e. Let N1 and N2 be the depth-d neighborhoods of the nodes v1 and v2. Let ŴN1 be the average weight of edges in N1 and σN1 be its standard deviation. Similarly, let ŴN2 be the average weight of edges in N2 and σN2 be its standard deviation. The inconsistency measure requires that one of the following three conditions hold:

1. w > ŴN1 + c × σN1  or  w > ŴN2 + c × σN2

2. w > max(ŴN1 + c × σN1, ŴN2 + c × σN2)

3. w / max(c × σN1, c × σN2) > f

where c and f are preset constants. All the edges of the tree that satisfy the inconsistency measure are considered inconsistent and are removed from the tree. This results in a set of disjoint subtrees, each representing a separate cluster. Paivinen [19] proposed a Scale Free Minimum Spanning Tree (SFMST) clustering algorithm which constructs scale free networks and outputs clusters containing highly connected vertices and those connected to them.

The MST clustering algorithm has been widely used in practice. Xu (Ying), Olman and Xu (Dong) [23] use an MST to represent multidimensional gene expression data. They point out that the MST-based clustering algorithm does not assume that data points are grouped around centers or separated by a regular geometric curve. Thus the shape of the cluster boundary has little impact on the performance of the algorithm. They described three objective functions and the corresponding cluster algorithms for computing a k-partition of a spanning tree for predefined k > 0. The first algorithm simply removes the k-1 longest edges so that the weight of the subtrees is minimized. The second objective function is defined to minimize the total distance between the center and each data point in the cluster. The algorithm removes the first k-1 edges from the tree, which creates a k-partition.

The clustering algorithm proposed by S. C. Johnson [11] uses a proximity matrix as input data. The algorithm is an agglomerative scheme that erases rows and columns in the proximity matrix as old clusters are merged into new ones. The algorithm is simplified by assuming no ties in the proximity matrix. A graph based algorithm was proposed by Hubert [7] using single link and complete link methods. He used a threshold graph for the formation of hierarchical clusterings. An algorithm for single-link hierarchical clustering that begins with the minimum spanning tree (MST) for G(∞), a proximity graph containing n(n-1)/2 edges, was proposed by Gower and Ross [9]. Later Hansen and DeLattre [6] proposed another hierarchical algorithm from graph coloring.

Given n d-dimensional data objects or points in a cluster, we can define the centroid x0, radius R, diameter D and variance of the cluster as

    x0 = (1/n) Σ_{i=1..n} xi

    R = ( (1/n) Σ_{i=1..n} (xi - x0)^2 )^(1/2)

    D = ( (1/(n(n-1))) Σ_{i=1..n} Σ_{j=1..n} (xi - xj)^2 )^(1/2)

where R is the average distance from member objects to the centroid, and D is the average pairwise distance within a cluster. Both R and D reflect the tightness of the cluster around the centroid [25].

The cospanning tree of a spanning tree T is the edge complement of T in G. Also, the rank ρ(G) of a graph G with n vertices and k connected components is n-k. A central tree [4] of a graph is a tree T0 such that the rank r of its cospanning tree is minimum:

    r = ρ(T0) ≤ ρ(T), for all T ∈ G.

Deo [4] pointed out that if r is the rank of the cospanning tree of T, then there is no tree in G at a distance greater than r from T, and there is at least one tree in G at distance exactly r from T. A direct consequence of this is the following characterization of a central tree. A spanning tree T0 is a central tree of G if and only if the largest distance from T0 to any other tree in G is minimum [4], i.e.,

    max d(T0, Ti) ≤ max d(T, Ti), for all Ti ∈ G

The maximally distant tree problem, for instance, which asks for a pair of spanning trees (T1, T2) such that d(T1, T2) ≥ d(Ti, Tj) for all Ti, Tj ∈ G, can be solved in polynomial time [14]. Also, as pointed out in [16], the distances between tree pairs in a graph G are in a one-to-one correspondence with the distances between vertex pairs in the tree graph of G. Thus finding a central tree in G is equivalent to finding a central vertex in the tree graph of G. However, while the central vertex problem is known to have a polynomial time algorithm (in the number of vertices), such an algorithm cannot be used to efficiently find a central tree, since the number of vertices in the tree graph of G can be exponential.

A tree is a simple structure for representing binary relationships, and any connected component of a tree is called a subtree. Through this MST representation, we can convert a multi-dimensional clustering problem to a tree partitioning problem, i.e., finding a particular set of tree edges and then cutting them. Representing a set of multi-dimensional data points as a simple tree structure will clearly lose some of the inter-data relationships. However, many clustering algorithms have shown that no essential information is lost for the purpose of clustering. This is achieved through a rigorous proof that each cluster corresponds to one subtree, which does not overlap the representing subtree of any other cluster. The clustering problem is therefore equivalent to the problem of identifying these subtrees by solving a tree partitioning problem. The inherent cluster structure of a point set in a metric space is closely related to how objects or concepts are embedded in the point set. In practice, the approximate number of embedded objects can sometimes be acquired with the help of domain experts. Other times this information is hidden and unavailable to the clustering algorithm. In this section we present a clustering algorithm which produces k clusters, with the center, radius, diameter and variance of each cluster. We also present another algorithm to find the hierarchy of the k clusters and the central cluster.

A. Cluster Tightness Measure and Compactness

In order to measure the efficacy of clustering, a measure based upon the radius and diameter of each subtree (cluster) is devised. The radius and diameter values of each cluster are expected to be low for a good cluster. If the values are large, the points (objects) are spread widely and may overlap. The cluster tightness measure is a within-cluster estimate of clustering effectiveness; however, it is possible to devise an inter-cluster measure also, to better measure the separation between the various clusters.

The cluster compactness measure is based on the variance of the data points distributed in the subtrees (clusters). The variance of cluster T is computed as

    v(T) = ( (1/n) Σ_{i=1..n} d(xi, x0)^2 )^(1/2)

where d(xi, xj) is the distance metric between two points (objects) xi and xj, n is the number of objects in the subtree Ti, and x0 is the mean of the subtree T. A smaller variance value indicates a higher homogeneity of the objects in the data set, in terms of the distance measure d( ). Since d( ) is the Euclidean distance, v(Ti) becomes the statistical variance of the data set, σ(Ti). The cluster compactness for the output clusters generated by the algorithm is then defined as

    Cmp = (1/k) Σ_{i=1..k} v(Ti) / v(S)

where k is the number of clusters generated on the given data set S, v(Ti) is the variance of cluster Ti and v(S) is the variance of the data set S [10].

The cluster compactness measure evaluates how well the subtrees (clusters) of the input are redistributed in the clustering process, compared with the whole input set, in terms of the data homogeneity reflected by the Euclidean distance

metric used by the clustering process. A smaller cluster compactness value indicates a higher average compactness in the output clusters.

B. EMSTRD Clustering Algorithm

Given a point set S in E^n and the desired number of clusters k, the hierarchical method starts by constructing an MST from the points in S. The weight of an edge in the tree is the Euclidean distance between its two end points. Next, the average weight Ŵ of the edges in the entire EMST and its standard deviation σ are computed; any edge with (W > Ŵ + σ), or the current longest edge, is removed from the tree. This leads to a set of disjoint subtrees ST = {T1, T2, …} (divisive approach). Each of these subtrees Ti is treated as a cluster. Oleksandr Grygorash et al. proposed an algorithm [18] which generates k clusters. We modified the algorithm in order to generate k clusters with centers. The algorithm also finds the radius, diameter and variance of the subtrees, which is useful in finding the tightness and compactness of clusters. Hence we name the new algorithm Euclidean Minimum Spanning Tree for Radius and Diameter (EMSTRD). Each center point of the k clusters is a representative point for its subtree. A point ci is assigned to cluster i if ci ∈ Ti. The group of center points is represented as C = {c1, c2, …, ck}.

Algorithm: EMSTRD(k)

Input: S, the point set
Output: k clusters with C (the set of center points)

Let e be an edge in the EMST constructed from S
Let We be the weight of e
Let σ be the standard deviation of the edge weights
Let ST be the set of disjoint subtrees of the EMST
Let nc be the number of clusters

1. Construct an EMST from S
2. Compute the average weight Ŵ of all the edges
3. Compute the standard deviation σ of the edges
4. Compute the variance of the set S
5. ST = ø; nc = 1; C = ø
6. Repeat
7.   For each e ∈ EMST
8.     If (We > Ŵ + σ) or (e is the current longest edge)
9.       Remove e from EMST
10.      ST = ST ∪ {T'} // T' is the new disjoint subtree
11.      nc = nc + 1
12.      Compute the center Ci of Ti using the eccentricity of points
13.      Compute the diameter of Ti using the eccentricity of points
14.      Compute the variance of Ti
15.      C = ∪_{Ti ∈ ST} {Ci}
16. Until nc = k
17. Return k clusters with C

The Euclidean minimum spanning tree is constructed at line 1. The average edge weight and standard deviation are computed at lines 2-3. The variance of the input data set S is computed at line 4. Using the average weight and standard deviation, each inconsistent edge is identified and removed from the Euclidean minimum spanning tree (EMST) at lines 8-9. A subtree (cluster) is created at line 10. The radius, diameter and variance of the subtree are computed at lines 12-14. Lines 9-15 of the algorithm are repeated until k subtrees (clusters) are produced. The radius and diameter are good measures of the tightness of clusters. The radius and diameter values of each cluster are expected to be low for a good cluster. If the values are large, the points (objects) are spread widely. However, as the value of k increases, the radius and diameter decrease.

The variance of each subtree (cluster) is computed to find the compactness of the clusters. A smaller variance value indicates a higher homogeneity of the objects in the data set. The cluster compactness measure evaluates how well the subtrees (clusters) of the input are redistributed in the clustering process, compared with the whole input set, in terms of the data homogeneity reflected by the Euclidean distance metric used by the clustering process. A smaller cluster compactness value indicates a higher average compactness in the output clusters.

Figure 1 illustrates a typical example of cases in which simply removing the k-1 longest edges does not necessarily output the desired cluster structure. Our algorithm finds the center, radius, diameter and variance of each cluster, which will be useful in many applications. Our algorithm will find 7 cluster structures (k = 7). Figure 2 shows the possible distribution of the points in the two cluster structures with their radius and diameter, and their center points 5 and 3. Figure 3 shows a graph of the relationship

between radius and diameter for the subtrees (clusters). Lower radius and diameter values mean higher tightness. The compactness of the subtrees is shown in Figure 4. Lower variance values mean higher homogeneity of the objects in a subtree (cluster).

Figure 1. Clusters connected through a point

Figure 2. Two clusters with radius and diameter (5 and 3 as center points)

Figure 3. Tightness of clusters using radius and diameter

Figure 4. Compactness of clusters

Figure 5. EMST from 7 cluster center points, with central cluster 3

C. EMSTUCC Algorithm for the Central Cluster

The distance between two subtrees (clusters) of an EMST T is defined as the number of edges present in one subtree (cluster) but not present in the other:

    d(T1, T2) = |T1 - T2| = |T2 - T1|

Definition 1: A subtree (cluster) T0 is a central subtree (central cluster) of an EMST T if and only if the largest distance from T0 to any other subtree (cluster) in the EMST T is minimum.

The result of the EMSTRD algorithm consists of k clusters with their center, radius, diameter and variance. These center points c1, c2, …, ck are connected and again a minimum spanning tree is constructed, as shown in Figure 5. The Euclidean distance between a pair of clusters can be represented by a corresponding weighted edge. Our algorithm is also based on the minimum spanning tree, but it is not limited to two-dimensional points. There are two kinds of clustering problems: one that minimizes the maximum intra-cluster distance, and one that maximizes the minimum inter-cluster distances. Our algorithm

produces clusters with both intra-cluster and inter-cluster similarity. We propose the Euclidean Minimum Spanning Tree Updation Central Cluster algorithm (EMSTUCC), which converts the minimum spanning tree into a dendrogram that can be used to interpret inter-cluster distances. The new algorithm also finds the central cluster from the set of clusters.

The algorithm is neither the single-link clustering algorithm (SLCA) nor the complete-link clustering algorithm (CLCA) type of hierarchical clustering; instead, it is based on the distance between the centers of clusters. This approach leads to new developments in hierarchical clustering. The level function, L, records the proximity at which each clustering is formed. The levels in the dendrogram tell us the least amount of dissimilarity between points in different clusters. This piece of information can be very useful in several medical and image processing applications.

Algorithm: EMSTUCC

      Input: the point set C = {c1, c2, …, ck}
      Output: central cluster and dendrogram

     1. Construct an EMST T from C
     2. Compute the radius of T using the eccentricity of points // for central cluster
     3. Begin with disjoint clusters at level L(0) = 0 and sequence number m = 0
     4. If (T has some edge)
     5.    e = get-min-edge(T) // least dissimilar pair of clusters
     6.    (i, j) = get-vertices(e)
     7.    Increment the sequence number m = m + 1; merge clusters (i) and (j) into a single cluster to form the next clustering m, and set the level of this cluster to L(m) = e
     8.    Update T by forming a new vertex combining the vertices i and j; go to step 4
     9. Else stop.

We use the graph of Figure 5 as an example to illustrate the EMSTUCC algorithm. The algorithm constructs a minimum spanning tree T from the points c1, c2, c3, …, ck (line 1); the conversion of T into a dendrogram is shown in Figure 6. The radius of the tree T is computed at line 2; this radius value is useful in finding the central cluster. The algorithm places the entire set of disjoint clusters at level 0 (line 3). It then checks whether T still contains an edge (line 4). If so, it finds the minimum edge e (line 5) and the vertices i, j of the edge e (line 6). It then merges the vertices (clusters) and forms a new vertex (the agglomerative approach). At the same time, the sequence number is increased by one and the level of the new cluster is set to the edge weight (line 7). Finally, the minimum spanning tree is updated at line 8. Lines 4-8 of the algorithm are repeated until the minimum spanning tree T has no edge. The algorithm takes O(|E|²) time.

Figure 6. Dendrogram for Meta cluster

                     IV. CONCLUSION

Our EMSTRD clustering algorithm assumes a given cluster number. The algorithm gradually finds k clusters with a center for each cluster. These k clusters ensure guaranteed intra-cluster similarity. The algorithm finds the radius, diameter and variance of clusters using the eccentricity of points in a cluster. The radius and diameter values give information about the tightness of clusters, and the variance value of a cluster is useful in finding its compactness. Our algorithm does not require users to select and try various parameter combinations in order to get the desired output. All of this looks attractive from a theoretical point of view; from a practical point of view, however, there is still some room for improving the running time of the clustering algorithm, perhaps by using an appropriate data structure. Our EMSTUCC clustering algorithm generates a dendrogram which is used to find the relationship between the k clusters produced by the EMSTRD algorithm. The inter-cluster distances between the k clusters are shown in the dendrogram.


The algorithm also finds the central cluster. This information will be very useful in many applications. In the future we will explore and test our proposed clustering algorithm in various domains. The EMSTRD algorithm uses a divisive approach, whereas the EMSTUCC algorithm uses an agglomerative approach. In this paper we used both approaches to find informative Meta similarity clusters. We will further study the rich properties of EMST-based clustering methods in solving different clustering problems.
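The merge loop of the EMSTUCC algorithm (steps 4-8) can be sketched in Python. Repeatedly extracting the minimum edge of the EMST and merging the clusters at its endpoints is equivalent to processing the MST edges in ascending weight order with a union-find structure. This is a minimal illustration under the assumption that the EMST is given as a list of weighted edges; the names `emst_dendrogram` and `mst_edges` are ours, not the authors'.

```python
import heapq

def emst_dendrogram(mst_edges, n):
    """Merge MST clusters in order of increasing edge weight, recording the
    level L(m) at which each merge occurs (steps 4-8 of EMSTUCC).
    mst_edges: list of (i, j, weight) tuples; n: number of points."""
    parent = list(range(n))            # union-find over current clusters

    def find(x):
        # follow parent links to the cluster representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    heap = [(w, i, j) for (i, j, w) in mst_edges]
    heapq.heapify(heap)                # supports get-min-edge(T)
    merges = []                        # (cluster i, cluster j, level L(m))
    while heap:                        # while T has some edge (step 4)
        w, i, j = heapq.heappop(heap)  # minimum edge e (step 5)
        ri, rj = find(i), find(j)      # get-vertices(e) as cluster reps (step 6)
        merges.append((ri, rj, w))     # merge at level L(m) = edge weight (step 7)
        parent[rj] = ri                # update T: combine the two vertices (step 8)
    return merges
```

Because T is a tree, every popped edge joins two distinct clusters, so exactly n - 1 merges are recorded; the recorded levels, read in order, are the dendrogram heights.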
                    BIOGRAPHY OF AUTHORS

S. John Peter is working as Assistant Professor in Computer Science at St. Xavier's College (Autonomous), Palayamkottai, Tirunelveli. He earned his M.Sc. degree from Bharathidasan University, Tiruchirappalli, and his M.Phil. from the same university. He is now pursuing a Ph.D. in Computer Science at Manonmaniam Sundaranar University, Tirunelveli. He has published research papers on clustering algorithms in international journals.


Dr. S. P. Victor earned his M.C.A. degree from Bharathidasan University, Tiruchirappalli. The M.S. University, Tirunelveli, awarded him the Ph.D. degree in Computer Science for his research in Parallel Algorithms. He is the Head of the Department of Computer Science and the Director of the Computer Science Research Centre, St. Xavier's College (Autonomous), Palayamkottai, Tirunelveli. The M.S. University, Tirunelveli, and Bharathiar University, Coimbatore, have recognized him as a research guide. He has published research papers in international and national journals and in conference proceedings. He has organized conferences and seminars at the national and state level.


     Adaptive Tuning Algorithm for Performance tuning of
               Database Management System
                                                  S.F.Rodd1, Dr. U.P.Kulkrani2
                                Asst. Prof., Gogte Institute of Technology, Belgaum, Karnataka, INDIA
                                          Prof., SDMCET Dharwar, Karnataka, INDIA.
Abstract - Performance tuning of Database Management Systems (DBMS) is both complex and challenging, as it involves identifying and altering several key performance tuning parameters. The quality of tuning and the extent of performance enhancement achieved depend greatly on the skill and experience of the Database Administrator (DBA). Neural networks' ability to adapt to dynamically changing inputs, together with their ability to learn, makes them ideal candidates for the tuning task. In this paper, a novel tuning algorithm based on neural-network-estimated tuning parameters is presented. The key performance indicators are proactively monitored and fed as input to the neural network, and the trained network estimates suitable sizes for the buffer cache, shared pool and redo log buffer. The tuner alters these tuning parameters using the estimated values through a rate-change computing algorithm. Preliminary results show that the proposed method is effective in improving query response time for a variety of workload types.

Keywords: DBA, Buffer Miss Ratio, Data Miner, Neural Network, Buffer Cache.

                      I. INTRODUCTION

    Database Management Systems are an integral part of any corporate house, online systems, and e-commerce applications. For these systems to provide reliable services with quick query response times to their customers, the DBMS must function efficiently and should have built-in support for quick recovery in case of partial failure or system resource bottlenecks. The performance of these systems is affected by several factors, the important ones being database size (which grows with usage over a period of time), an increased user base, sudden increases in user processes, and an improperly tuned or un-tuned DBMS. All of these tend to degrade the system response time and hence call for a system that anticipates performance degradation by carefully monitoring the system performance indicators and auto-tuning itself.

    Maintaining the database of an enterprise involves considerable effort on the part of a Database Administrator (DBA): it is a continuous process and requires in-depth knowledge, experience and expertise. The DBA has to monitor several system parameters and fine-tune them to keep the system functioning smoothly in the event of reduced performance or partial failure. It is therefore desirable to build a system that can tune itself and relieve the DBA of the tedious and error-prone task of tuning. Oracle 9i and 10g have built-in support for tuning in the form of a tuning advisor. The tuning advisor estimates the optimal values of the tuning parameters and recommends them to the DBA. A similar advisor, based on what-if analysis, is available in SQL Server 2005. In this approach, the DBA provides a physical design as input and the Tuning Advisor performs the analysis without actually materializing the physical design. However, that advisor recommends changes at the physical level, such as creation of indexes on tables or views, restructuring of tables, and creation of clustered indexes, which are considered very expensive in terms of database server down time and effort on the part of the DBA.

                      II. RELATED WORK

    Several methods have been proposed that proactively monitor the system performance indicators, analyze the symptoms and auto-tune the DBMS to deliver enhanced performance. Use of materialized views and indexes, pruning table and column sets [1-2], use of self-healing techniques [3-4], and physical design tuning are among the proposed solutions. The classical control loop is modified into a three-stage control involving Monitor, Analyze and Tune [6] to ensure system stability. The


architecture presented in [5] for a self-healing database forms the basis for the new architecture presented here. That paper presents a DBMS architecture based on a modular approach, in which each functional module is monitored by a set of monitoring hooks. These monitoring hooks are responsible for saving the current status information, or a snapshot of the server, to the log. This architecture has a high monitoring overhead: when a large number of parameters is monitored, almost every module's status information has to be stored in the log, and doing so frequently may consume a lot of CPU time. Moreover, this architecture focuses on healing the system and does not consider tuning the DBMS for performance improvement.

    Ranking of various tuning parameters based on statistical analysis is presented in [6]. The ranking of parameters is based on the amount of impact they produce on system performance for a given workload. A formal knowledge framework for self-tuning database systems is presented in [7] that defines several knowledge components: Policy knowledge, Workload knowledge, Problem diagnosis knowledge, Problem resolution knowledge, Effector knowledge, and Dependency knowledge. The architecture presented in this paper involves extracting useful information from the system log and from the DBMS using system-related queries. This information, gathered over a period of time, is then used to train a neural network for a desired output response time. The neural network then estimates the extent of correction to be applied to the key system parameters that help scale up system performance.

    Calibrating the system for a desired response time is called performance tuning. The objective of this system is to analyze the DBMS system log file, apply information extraction techniques, and gather key system parameters such as the buffer miss ratio, the number of active processes and the tables showing signs of rapid growth. In the control architecture presented in this paper, only one parameter, namely the buffer cache, is tuned. The statistical information of these three parameters is used to train the neural network and generate an output that gives an estimate of the optimal system buffer size. Since a DBMS is dynamic and runs continuously around the clock, the above information must be extracted without causing any significant system overhead.

    Extracting information from the system log ensures that there is no overhead. The queries used to estimate the buffer miss ratio, table size and number of user processes are carefully timed, and their frequency is adjusted so that they do not add to the overhead of monitoring the system.

                      IV. NEURAL NETWORK

    As neural networks are best suited to handle complex systems and also have the ability to learn from a training data set, they are used in the proposed architecture. As shown in Fig. 1, the neural network has p inputs, a specified number of nodes in the hidden layer, and one or more output nodes. The neural network used in this control architecture is a feed-forward network. The activation function used is the sigmoid function; it is this function that gives the neural network the ability to learn and to produce an output for inputs on which it was not trained. However, neural networks need a well-defined training data set for their proper functioning.

    Figure 1. Basic Neural Network Structure (inputs include Buffer Hit Ratio and Avg_table_size; outputs are Db_cache_size and Shared_pool_size)

    The neural network works in phases. In the first phase, the network is trained using a well-defined training set for a desired output. In the second phase, a new input, which may or may not be part of the training data set, is presented to the network, and the network produces a meaningful output. For the proper working of the neural network, it is important to choose a suitable activation function, learning rate, number of training loops, and a sizeable number of nodes in the hidden layer.

                      V. PROPOSED ARCHITECTURE

    Fig. 2 shows the architecture employed for identifying the symptoms and altering key system parameters. The DBMS system log file is the primary source of information for checking the health of the system. Since the log file contains a huge amount of data, it is first compressed into a smaller information base using a data mining tool. The architecture has a Data Miner, a Neural Network aggregator and a Tuner as its basic building blocks. After extracting meaningful information, the next step is to estimate the extent of correction required.
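As a rough sketch of the forward pass just described (p inputs, one sigmoid hidden layer, one or more outputs), the following assumes hand-supplied weights in place of trained ones; the function and parameter names are illustrative, not from the paper:

```python
import math

def sigmoid(x):
    # sigmoid activation: squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    """One pass through a feed-forward network.
    w_hidden: one weight vector per hidden node (last entry is the bias).
    w_out:    one weight vector per output node (last entry is the bias)."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws[:-1], inputs)) + ws[-1])
              for ws in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(ws[:-1], hidden)) + ws[-1])
            for ws in w_out]
```

With three inputs (such as buffer hit ratio, average table size and number of users) and two outputs (estimates for the cache and pool sizes), this mirrors the 3-input, 2-output structure used later in the paper; training the weights is the part omitted here.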


   Figure 2. Neural Network based Tuning Architecture (inputs: No. of Users, Buff_Miss_ratio, Avg_Table_size; blocks: Neural Aggregator, Tuner, DBA Rules; outputs: New Shared Pool_Size, New Buff_Cache_Size)

     As suggested in [2], physical tuning should be avoided, as it is expensive. Instead, an internal corrective measure, altering the buffer size the DBMS uses in query processing, is explored in this architecture. However, several parameters can be altered simultaneously for better performance gain. The neural network estimates the required buffer size based on the current DBMS input parameters, and the tuner applies the necessary correction to the buffer size based on the tuning rules. The tuner triggers a control action to fine-tune the performance of the DBMS based on the following algorithm:

ALGORITHM dbTune(ESTMTD_DB_CACHE_SZ)
Begin
      Compute the change in response time since the last modification (ΔRtime)
      If (ΔRtime > 0 and ΔRtime > Rth)
              Increase the buffer size to the next higher granule size
              Issue a command to alter the db cache size to the new value
      If (ΔRtime < 0 and |ΔRtime| > Rth)
              Decrease the buffer size to the next lower granule size
              Issue a command to alter the db cache size to the new value
End

    Table I shows the sample training data. A training data set of size 100 was used to train the neural network. As can be seen from the table, the buffer size is adjusted for increased table size, number of user processes and buffer miss ratio so that query execution time is reduced and the memory is used efficiently.

      Tab. Size      Buff. Miss    Shared Pool     Buff. Size
      (in no. of …)  Ratio         size (in MB)    (in MB)
      1000           0.9824        32              4
      1000           0.9895        32              4
      1000           0.9894        32              8
      1000           0.9505        32              8
      2000           0.947         32              8
      2000           0.9053        40              8
      2000           0.8917        40              16
      2500           0.875         40              16

                 Table I. Sample Training Data Set

    The experiment was carried out on Oracle 9i with a 3-input, 2-output feed-forward neural network with 100 internal nodes. Training was carried out with an epoch value of 100, a learning rate of 0.4, and a training data set of size 100. The estimated buffer size generated by the neural network is based on the dynamic values of the above three parameters as input; the tuner takes this estimate and alters the buffer size. The results obtained are promising. As can be seen from the output in Fig. 4, the execution time is significantly lower for increasing values of the buffer size. The query used takes a join of three tables and generates a huge dataset.

    Fig. 3 shows the effect of buffer cache size on the query response time. A TPC-C type benchmark load, which represents an OLTP type load, was used. As the number of users grows beyond 12, the query response time starts to rise rapidly. This is sensed by the neural network, which calculates an appropriate new buffer size. The tuner uses the tuning rules to apply the required correction; the tuning rules indicate when and at what interval of the buffer size the correction is to be applied. Tuning the DBMS too frequently may affect performance and also lead to instability.

  Figure 3. Effect of Buffer size on Query Response Time (response time in msec vs. number of users, with and without tuning)
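The dbTune rate-change rule can be sketched as follows. The threshold value, the granule sizes and the function names here are illustrative assumptions rather than values from the paper, and issuing the actual alter-cache command is left as a comment:

```python
GRANULES_MB = [4, 8, 16, 32, 64]   # assumed buffer-cache granule sizes

def db_tune(delta_rtime, current_mb, r_th=5.0, granules=GRANULES_MB):
    """Apply the dbTune rule: grow the buffer cache by one granule when the
    response time worsened beyond the threshold Rth, shrink it by one granule
    when it improved beyond the threshold, otherwise leave it unchanged."""
    i = granules.index(current_mb)
    if delta_rtime > 0 and delta_rtime > r_th and i + 1 < len(granules):
        # response time rose past the threshold: move to the next higher granule
        # (here the DBMS would be told to alter its db cache size)
        return granules[i + 1]
    if delta_rtime < 0 and -delta_rtime > r_th and i > 0:
        # response time fell past the threshold: release one granule of memory
        return granules[i - 1]
    return current_mb              # change within threshold: no tuning action
```

Reading the second condition as a magnitude test (|ΔRtime| > Rth) keeps the shrink branch reachable; capping the index at the ends of the granule list prevents the tuner from requesting a cache size the system cannot grant.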


                      VII. CONCLUSION

     A new tuning algorithm is presented in this paper. The Neural Network estimates the buffer cache size based on the trained data set. The correction is applied in accordance with the tuning algorithm so as to scale up system performance. This architecture learns from a training set to fine-tune the system, thus relieving the DBA of the tedious process of tuning the DBMS and removing the need for an expert DBA. Monitoring the macroscopic performance indicators ensures that there is little monitoring overhead. However, the system needs further refinement to take into account sudden surges in workload, and the neural network training dataset must be derived based on proper database characterization. It is also important to ensure that the system remains stable and gives consistent performance over a long period of time.

                      ACKNOWLEDGMENT

     We would like to thank Prof. D. A. Kulkarni for scrutinizing the paper and for his valuable suggestions. Special thanks to Prof. Santosh Saraf for his help in learning Neural Network implementation in MATLAB. We extend our thanks to the Computer Center, GIT, for providing laboratory facilities. We thank our esteemed Management for their financial support.

                       REFERENCES

[1] S. Agarwal, "Automated selection of materialized views and indexes", VLDB, 2007.
[2] Surajit Chaudhuri, Vivek Narasayya, "Self tuning database systems: a decade of progress", Microsoft Research, 2007.
[3] Philip Koopman, "Elements of the Self-Healing System Problem Space", IEEE Data Engineering Bulletin.
[4] Peng Liu, "Design and Implementation of Self-healing Database System", IEEE Conference, 2005.
[5] Rimma V. Nehme, "Database, Heal Thyself", Data Engineering Workshop, April 2008.
[6] Debnath, B. K.; Lilja, D. J.; Mokbel, M. F., "SARD: A statistical approach for ranking database tuning parameters", Data Engineering Workshop (ICDEW 2008), IEEE 24th International Conference, April 2008.
[7] Wiese, David; Rabinovitch, Gennadi, "Knowledge Management in Autonomic Database Performance Tuning", 20-25 April 2009.
[8] B. Dageville and K. Dias, "Oracle's self tuning architecture and solutions", IEEE Data Engineering Bulletin, Vol. 29, 2006.
[9] S. Chaudhuri and G. Weikum, "Rethinking database system architecture: Towards a self tuning RISC style database system", VLDB, 2000, pp. 1-10.
[10] S. W. Cheng, D. Garlan et al., "Architecture based self adaptation in the presence of multiple objectives", International Journal of Computer Systems and Engineering, 2006.
[11] Benoit Dageville and Karl Dias, "Oracle's Self Tuning Architecture and Solutions", IEEE Bulletin, 2006.
[12] Sanjay Agarwal, Nicolas Bruno, Surajit Chaudhuri, "AutoAdmin: Self Tuning Database System Technology", IEEE Data Engineering Bulletin, 2006.
[13] Soror, A. A.; Aboulnaga, A.; Salem, K., "Database Virtualization: A New Frontier for Database Tuning".
[14] Gerhard Weikum, Axel Moenkeberg et al., "Self-tuning Database Technology and Information Services: From wishful thinking to viable engineering", Parallel and Distributed Information Systems, 1993.
[15] Satish, S. K.; Saraswatipura, M. K.; Shastry, S. C., "DB2 performance enhancements using Materialized Query Table for LUW Systems", ICONS '07, Second International Conference, April 2007.
[16] Chaudhuri, S.; Weikum, G., "Foundations of Automated Database Tuning", Data Engineering, April 2006.
[17] Gennadi Rabinovitch, David Wiese, "Non-linear Optimization of Performance Functions for Autonomic Database Performance Tuning", IEEE Conference.
[19] Weikum, G.; Moenkeberg, A., "Self-tuning database technology: from wishful thinking to viable engineering", VLDB Conference, pp. 20-22.


           A Survey of Mobile WiMAX IEEE 802.16m
              Mr. Jha Rakesh                           Mr. Wankhede Vishal A.                            Prof. Dr. Upena Dalal
            Dept. of E & T.C.                            Dept. of E & T.C.                                 Dept. of E & T.C.
                  SVNIT                                        SVNIT                                             SVNIT
                Surat, India                                Surat, India                                      Surat, India

Abstract— IEEE 802.16m amends the IEEE 802.16 Wireless MAN-OFDMA specification to provide an advanced air interface for operation in licensed bands. It will meet the cellular layer requirements of IMT-Advanced next-generation mobile networks. It will be designed to provide significantly improved performance compared to other high-rate broadband cellular network systems. For the next generation of mobile networks, it is important to consider increasing peak and sustained data rates, corresponding spectral efficiencies, system capacity and cell coverage as well as decreasing latency and providing QoS while carefully considering overall system complexity. In this paper we provide an overview of the state-of-the-art mobile WiMAX technology and its development. We focus our discussion on the Physical Layer, MAC Layer, Scheduler, QoS provisioning and the mobile WiMAX specification.

   Keywords- Mobile WiMAX; Physical Layer; MAC Layer; Scheduler; Scalable OFDM.

                     I. INTRODUCTION
         IEEE 802.16, a solution to broadband wireless access (BWA) commonly known as Worldwide Interoperability for Microwave Access (WiMAX), is a recent wireless broadband standard that has promised high bandwidth over long-range transmission. The standard specifies the air interface, including the medium access control (MAC) and physical (PHY) layers, of BWA. The key development in the PHY layer includes orthogonal frequency-division multiplexing (OFDM), in which multiple access is achieved by assigning a subset of subcarriers to each individual user [1]. This resembles code-division multiple access (CDMA) spread spectrum in that it can provide different quality of service (QoS) for each user; users achieve different data rates by assigning different code spreading factors or different numbers of spreading codes. In an OFDM system, the data is divided into multiple parallel substreams at a reduced data rate, and each is modulated and transmitted on a separate orthogonal subcarrier. T

earlier this year has added mobility support. This is generally referred to as mobile WiMAX [1].

    Mobile WiMAX adds significant enhancements:
    • It improves NLOS coverage by utilizing advanced antenna diversity schemes and hybrid automatic repeat request (HARQ).
    • It adopts dense subchannelization, thus increasing system gain and improving indoor penetration.
    • It uses adaptive antenna system (AAS) and multiple-input multiple-output (MIMO) technologies to improve coverage [2].
    • It introduces a downlink subchannelization scheme, enabling better coverage and capacity trade-off [3-4].

         This paper provides an overview of Mobile WiMAX standards and highlights potential problems arising from applications. Our main focus is on the PHY layer and MAC layer specifications of mobile WiMAX. We give an overview of the MAC specification in the IEEE 802.16j and IEEE 802.16m standards, specifically focusing the discussion on scheduling mechanisms and QoS provisioning. We review the new features in mobile WiMAX, including mobility support, handoff, and multicast services. We discuss technical challenges in mobile WiMAX deployment. We then conclude the paper.

              II. PHYSICAL LAYER OF IEEE 802.16M
    This section contains an overview of some Physical Layer enhancements that are currently being considered for inclusion in future systems. Because the development of the 802.16m standard is still in a relatively early stage, the focus is on presenting the concepts and the principles on which the proposed enhancements will be based, rather than on providing specific implementation details. Although the