					     IJCSIS Vol. 9 No. 3, March 2011
           ISSN 1947-5500




International Journal of
    Computer Science
      & Information Security




    © IJCSIS PUBLICATION 2011
                               Editorial
                     Message from Managing Editor

International Journal of Computer Science and Information Security (IJCSIS) is a peer-reviewed
monthly journal devoted to research and applications of computer science and security. The
journal papers follow IEEE format guidelines and publication standards.


The IJCSIS editorial board consists of several internationally recognized experts and guest editors.
Wide circulation is assured because libraries and individuals worldwide subscribe to and reference
IJCSIS. The journal has grown rapidly to its current level of about 1,000 articles published
and indexed.


Coverage also includes: security infrastructures, network security, Internet security,
content protection, cryptography, steganography and formal methods in information security;
multimedia systems, software, information systems, intelligent systems, web services, data
mining, wireless communication, networking and technologies, and innovation and technology
management. (See the monthly Call for Papers.)


IJCSIS is published using an open access publication model, meaning that all interested readers
will be able to freely access the journal online without the need for a subscription.



On behalf of the Editorial Board and the IJCSIS members, we would like to express our gratitude
to all authors and reviewers for their hard work and high-quality contributions.




Available at http://sites.google.com/site/ijcsis/
IJCSIS Vol. 9, No. 3, March 2011 Edition
ISSN 1947-5500 © IJCSIS, USA.


                 IJCSIS EDITORIAL BOARD


Dr. M. Emre Celebi,
Assistant Professor, Department of Computer Science, Louisiana State University
in Shreveport, USA

Dr. Yong Li
School of Electronic and Information Engineering, Beijing Jiaotong University,
P. R. China

Prof. Hamid Reza Naji
Department of Computer Engineering, Shahid Beheshti University, Tehran, Iran

Dr. Sanjay Jasola
Professor and Dean, School of Information and Communication Technology,
Gautam Buddha University

Dr Riktesh Srivastava
Assistant Professor, Information Systems, Skyline University College, University
City of Sharjah, Sharjah, PO 1797, UAE

Dr. Siddhivinayak Kulkarni
University of Ballarat, Ballarat, Victoria, Australia

Professor (Dr) Mokhtar Beldjehem
Sainte-Anne University, Halifax, NS, Canada

Dr. Alex Pappachen James, (Research Fellow)
Queensland Micro-nanotechnology center, Griffith University, Australia

Dr. T.C. Manjunath,
ATRIA Institute of Tech, India.
                                 TABLE OF CONTENTS


1. Paper 28021153: Addressing Vulnerability of Mobile Computing: A Managerial Perspective (pp.
1-5)

Arben Asllani and Amjad Ali
Center for Security Studies, University of Maryland University College, Adelphi, Maryland, USA

2. Paper 28021166: Reduction of PAPR for OFDM Downlink and IFDMA Uplink Wireless
Transmissions (pp. 6-13)

Bader Hamad Alhasson , Department of Electrical and Computer Engineering, University of Denver,
Denver, USA
Mohammad A. Matin, Department of Electrical and Computer Engineering, University of Denver, Denver,
USA

3. Paper 28021123: Evolution Prediction of the Aortic Diameter Based on the Thrombus Signal from
MR Images on Small Abdominal Aortic Aneurysms (pp. 14-19)

A. Suhendra, C.M. Karyati, A.Muslim, A.B. Mutiara
Faculty of Computer Science and Information Technology, Gunadarma University, Jl. Margonda Raya
No.100, Depok 16424, Indonesia

4. Paper 28021145: Empirical Evaluation of the Shaped Variable Bit Rate Algorithm for Video
Transmission (pp. 20-29)

A. Suki M. Arif, Suhaidi Hassan, Osman Ghazali, Mohammed M. Kadhum
InterNetWorks Research Group, UUM College of Arts and Sciences Universiti Utara Malaysia, 06010
UUM Sintok, Kedah, Malaysia

5. Paper 28021146: An Efficient Self-Organized Authentication and Key Management Scheme for
Distributed Multihop Relay-Based IEEE 802.16 Networks (pp. 30-38)

Adnan Shahid Khan, Norsheila Fisal, Sharifah Kamilah, Sharifah Hafizah, Mazlina Esa, Zurkarmawan
Abu Bakar
UTM-MIMOS Center of Excellence in Telecommunication Technology, Faculty of Electrical Engineering,
Universiti Teknologi Malaysia 81310 Skudai, Johor, Malaysia
M. Abbas, Wireless Communication Cluster, MIMOS Berhad, Technology Park Malaysia, 57000 Kuala
Lumpur, Malaysia

6. Paper 28021157: A Digital Image Encryption Algorithm Based On Chaotic Logistic Maps Using A
Fuzzy Controller (pp. 39-44)

Mouad HAMRI & Jilali Mikram, Mathematics and computer science department, Science University of
Rabat-Agdal, 4 Avenue Ibn Battouta Rabat Morocco
Fouad Zinoun, Economical sciences and management department, University of Meknes Morocco

7. Paper 28021159: Performance Analysis of Connection Admission Control Scheme in IEEE 802.16
OFDMA Networks (pp. 45-51)

Abdelali El Bouchti, Said El Kafhali and Abdelkrim Haqiq
Computer, Networks, Mobility And Modeling Laboratory, e- NGN Research Group, Africa And Middle
East, FST, Hassan 1st University, Settat, Morocco
8. Paper 23021105: Enhancement and Minutiae Extraction Of Touchless Fingerprint Image Using
Gabor And Pyramidal Method (pp. 52-57)

A. John Christopher, Associate Professor, Department of Computer Science, S.T. Hindu College, Nagercoil,
Dr. T. Jebarajan, Principal, V.V. College of Engineering, Tisayanvilai

9. Paper 23021108: Automatic Parsing For Arabic Sentences (pp. 58-63)

Zainab Ali Khalaf, School of computer science, (USM), Penang, Malaysia
Dr. Tan Tien Ping, School of computer science, Universiti Sains Malaysia (USM), Penang, Malaysia

10. Paper 28021129: Amelioration of Walsh-Hadamard Texture Patterns based Image Retrieval
using HSV Color Space (pp. 64-69)

Dr. H.B.Kekre, Sudeep D. Thepade, Varun K. Banura
Computer Engineering Department, MPSTME, SVKM’s NMIMS (Deemed-to-be University), Mumbai,
India

11. Paper 27021119: Analysis and Comparison of Medical Image Fusion Techniques: Wavelet based
Transform and Contourlet based Transform (pp. 70-75)

C G Ravichandran, RVS College of Engg. & Tech, Dindigul
R. Rubesh Selvakumar, Research Scholar, Anna University of Technology, Tricirappalli
S. Goutham, Surya Engineering College, Erode

12. Paper 28021135: Performance Comparison of Texture Pattern Based Image Retrieval Methods
using Walsh, Haar and Kekre Transforms with Assorted Thresholding Methods (pp. 76-83)

Dr. H. B. Kekre, Sudeep D. Thepade, Varun K. Banura
Computer Engineering Department, MPSTME, SVKM’s NMIMS (Deemed-to-be University), Mumbai,
India

13. Paper 28021139: A Generic Rule-Based Agent for Monitoring Temporal Data Processing (pp. 84-
89)

S. Laban, International Data Centre (IDC), Comprehensive Nuclear Test-Ban Treaty Organization (CTBT),
Vienna, Austria
A.I. El-Desouky, Computer and Systems Department, Faculty of Engineering, Mansoura University,
Mansoura, Egypt
A. S. ElHefnawy, Information Technology, Department, Faculty of Computer, & Information, Mansoura
University, Mansoura, Egypt

14. Paper 28021141: A New Approach for Model based Gait Signature Extraction (pp. 90-94)

Mohamed Rafi, Dept. of CS&E, HMS Institute of Tech., Tumkur, Karnataka, India
Shanawaz Ahmded J, College of Computer Science, King Khalid University, Abha, Kingdom of Saudi Arabia
Md. Ekramul Hamid, College of Computer Science, King Khalid University, Abha, Kingdom of Saudi Arabia
R.S.D Wahidabanu, Dept. of E&C, Government college of Engg Salem, Tamil Nadu, India.

15. Paper 28021142: Mining Fuzzy Cyclic Patterns (pp. 95-99)

A. Mazarbhuiya, M. A. Khaleel, P. R. Khan
Department of Computer Science, College of Computer Science, King Khalid University, Abha, Kingdom of
Saudi Arabia
16. Paper 28021144: Robust Color Image Watermarking Using Nonsubsampled Contourlet
Transform (pp. 100-111)

C. Venkata Narasimhulu, Professor, Dept of ECE, Hasvita Institute of Engg & Tech, Hyderabad, India
K. Satya Prasad, Professor, Dept of ECE, JNTU Kakinada, India

17. Paper 28021150: Parallel Implementation of Compressed Sensing Algorithm on CUDA- GPU (pp.
112-119)

Kuldeep Yadav & Ankush Mittal, Computer Science and Engineering, College of Engineering
Roorkee, Roorkee-247667, India
M. A. Ansar & Avi Srivastava, Galgotia College of Engineering, Gr. Noida, India

18. Paper 28021151: Fuzzy HRRN CPU Scheduling Algorithm (pp. 120-124)

Bashir Alam, M.N. Doja, Department of Computer Engineering, Jamia Millia Islamia, New Delhi, India
R. Biswas, Department of Computer Science and Engineering, Manav Rachna University, Faridabad, India
M. Alam, Department of computer Science, Jamia millia Islamia,New Delhi India

19. Paper 28021152: Experiences and Comparison Study of EPC & UML For Business Process & IS
Modeling (pp. 125-133)

Md. Rashedul Islam, School of Business and Informatics, Högskolan i Borås, Borås, Sweden
Md. Rofiqul Islam, School of Business and Informatics, Högskolan i Borås, Borås, Sweden
Md. Shariful Alam, School of Business and Informatics, Högskolan i Borås, Borås, Sweden
Md. Shafiul Azam Dept. of Computer Science and Engineering Science and Technology University, Pabna
Pabna, Bangladesh

20. Paper 28021155: Facial Tracking Using Radial Basis Function (pp. 134-138)

P. Mayilvahanan, Research scholar, Dept. of MCA, Vel's University, Pallavaram, Chennai, India
Dr. S. Purushothaman, Principal, Sun College of Engineering & Technology, Kanyakumari – 629902,
Tamil Nadu, India
Dr. A. Jothi, Dean, School of Computing Sciences, Vels University, Pallavaram, Chennai, India

21. Paper 28021156: Performance Comparison of Speaker Identification using circular DFT and
WHT Sectors (pp. 139-143)

Dr. H B Kekre, Vaishali Kulkarni, Indraneal Balasubramanian, Abhimanyu Gehlot, Rasik Srinath
MPSTME, NMIMS University.

22. Paper 28021158: Reliability and Security in MDRTS: A Combine Colossal Expression (pp. 144-
153)

Gyanendra Kumar Gupta, Computer Science & Engineering Department, Kanpur Institute of Technology,
Kanpur, UP, India, 208 001
A. K Sharma, Computer Sc. & Engg. Deptt, M.M.M. Engineering College, Gorakhpur, UP, India, 273010
Vishnu Swaroop, Computer Science & Engineering Department, M.M.M. Engineering College, Gorakhpur,
UP, India, 273010
23. Paper 28021161: Implementation in Java of a Cryptosystem using a Dynamic Huffman Coding
and Encryption Methods (pp. 154-159)

Eugène C. Ezin
Institut de Mathématiques et de Sciences Physiques, Unité de Recherche en Informatique et Sciences
Appliquées, Université d’Abomey-Calavi, République du Bénin

24. Paper 28021164: Towards Generating a Rulebase to Provide Feedback at Design Level for
Improving Early Software Design (pp. 160-164)

B. Bharathi, Research Scholar, Sathyabama University, Chennai-119
G. Kulanthaivel , Assistant Professor, NITTTR, Chennai

25. Paper 28021165: Performance Comparison of TCP Variants in Mobile Ad- Hoc Networks (pp.
165-170)

Mandakini Tayade, School of Information Technology, Rajiv Gandhi Prodyogiki Vishwavidyalaya, Bhopal
(M.P.) India
Sanjeev Sharma, Head of School of Information Technology, Rajiv Gandhi Prodyogiki Vishwavidyalaya,
Bhopal (M.P.) India

26. Paper 28021168: Analysis on Robust Adaptive Beamformers (pp. 171-178)

T. S. Jeyali Laseetha, Professor, Department of Electronics and Communication Engineering, Anna
University of Technology, Tirunelveli, Tamil Nadu, India
Dr. (Mrs) R.Sukanesh, Professor, Department of Electronics and Communication Engineering, Madurai,
Tamil Nadu, India

27. Paper 28021169: A Review On Distance Measurement And Localization In Wireless Sensor
Network (pp. 179-184)

Kavindra Kumar Ahirwar, School Of Information Technology, Rajiv Gandhi Technical University, Bhopal
(MP), India
Dr. Sanjeev Sharma (Head of department), School Of Information Technology, Rajiv Gandhi Technical
University, Bhopal (MP), India

28. Paper 27021117: An Improved Visual Cryptography Scheme for Secret Hiding (pp. 185-197)

G. Prasanna Lakshmi, Computer Science, IBSAR, Karjat, India
Dr. J.A.Chandulal, Professor and HOD, IBSAR, Computer Science, India
Dr. KTV Reddy, Professor & Principal, Electronics & Telecommunications Dept., Computer Science, India

29. Paper 17021103: Adaptive MIMO-OFDM Scheme with Reduced Computational Complexity and
Improved Capacity (pp. 198-205)

L. C. Siddanna Gowd, A. R. Ranjini and M. Kanthimathi,
Faculty of ECE Dept, SriSairam Engineering College, Chennai, T.N., India
30. Paper 24021111: An Efficient Fair Queuing Model for Data Communication Networks (pp. 206-
216)

M. A. Mabayoje 1*, S. O. Olabiyisi 2, A. O. Ameen 1, R. Muhammed 1, O. C. Abikoye 1
1 Department of Computer Science, Faculty of Communication and Information Sciences, University of
Ilorin, PMB 1515, Ilorin, Kwara, Nigeria
2 Department of Computer Science and Engineering, Ladoke Akintola University of Technology,
Ogbomosho, Oyo, Nigeria

31. Paper 24021112: Implementation of Audio Wave Steganography By Replacing 4th Bit LSB of
Audio Wave File (pp. 217-219)

Mr. Vijay B. Gadicha, Department of Computer Science & Engg, P.R.Patil College of Engg & Tech,
Amravati (MH),India.
Mr. Ajay. B. Gadicha, Department of Computer Science & Engg, P.R.Pote(Patil) College of Engg & Tech,
Amravati (MH),India.

32. Paper 24021114: Modeling of Aluminium – Flyash Particulate Metal Matrix Composites using
Fuzzy Logic (pp. 220-225)

R. Elangovan, Research Scholar, Department of Mechanical Engineering, Vinayaka Missions University,
Salem, India-636 308
Dr. S. Purushothaman, Principal, Sun College of Engineering and Technology, Sun Nagar, Erachakulum,
Kanyakumari District – 629902, India

33. Paper 27021116: Compression Techniques and Water Marking of Digital Image using Wavelet
Transform and SPIHT Coding (pp. 226-258)

G. Prasanna Lakshmi, Computer Science, IBSAR, Karjat, India
Dr. J. A. Chandulal, Professor and HOD, IBSAR, Computer Science, India
Dr. KTV Reddy, Professor & Principal, Electronics & Telecommunications Dept., Computer Science, India

34. Paper 28021171: An analytical survey on Network Security Enhancement Services (pp. 259-262)

Deshraj Ahirwar , PG Scholar, CSE, SATI, Vidisha
Manish K. Ahirwar , CSE, UIT, RGPV
Piyush K. Shukla, CSE, UIT, RGPV
Pankaj Richharia, CSE, BITS, Bhopal

35. Paper 28021199: An Empirical Study of Software Project Management among Some Selected
Software Houses in Nigeria (pp. 263-271)

Olalekan Akinola, Funmilayo Ajao, Opeoluwa B. Akinkunmi
Computer Science Department, University of Ibadan, Ibadan, Nigeria

36. Paper 30011123: New Codes for Spectral Amplitude Coding Optical CDMA Systems (pp. 272-279)

Hassan Yousif Ahmed, Communication & Networking Engineering Department, Computer Science
College, King Khalid University, Abha, Kingdom of Saudi Arabia
Elmaleeh, M. A, Electronics Engineering Dept, Faculty of Engineering and Technology, University of
Gezira, Wad Madni, Sudan
Hilal Adnan Fadhil, School of Computer and Communication Engineering, Universiti Malaysia Perlis,
Malaysia
S.A. Aljunid, School of Computer and Communication Engineering, Universiti Malaysia Perlis, Malaysia
37. Paper 31011135: Image Retrieval with Image Tile Energy Averaging using Assorted Color Spaces
(pp. 280-286)

Dr. H.B.Kekre, Sudeep D. Thepade, Varun Lodha, Pooja Luthra, Ajoy Joseph, Chitrangada Nemani
Information Technology Department, MPSTME, SVKM's NMIMS (Deemed-to-be University), Mumbai,
India

38. Paper 27021118: Improvement of Distributed Virtual Environment (DVE) performance (pp. 287-
295)

Olfat I. EL-Mahi, Computer Graphics department, IRI institute- MuCSAT, Borg EL-Arabe, Egypt
Hanan Ali, Computer Graphics department, IRI institute- MuCSAT, Borg EL-Arab, Egypt
Walaa M. Sheta, Computer Graphics department, IRI institute- MuCSAT, Borg EL-Arab, Egypt
Salwa Nassar, Electronic Research Institute, Cairo, Egypt
        Addressing Vulnerability of Mobile Computing
                                                 A Managerial Perspective

                                                   Arben Asllani and Amjad Ali
                                                    Center for Security Studies
                                             University of Maryland University College
                                                      Adelphi, Maryland, USA



Abstract— Popularity of mobile computing in organizations has risen significantly over the past few years. Notebooks and laptop computers provide the necessary computing power and mobility for executives, managers, and other professionals. Such advantages come with a price for the security of the organizational networks: increased vulnerability. The paper discusses three types of mobile computing vulnerability: physical, system, and network access vulnerability. Using a managerial approach, the paper offers a framework to deal with such vulnerabilities. The framework suggests specific courses of action for two possible scenarios. When there is no present threat, a proactive approach is suggested. When one or more threats are present, a reactive, matrix-based approach is suggested. Both approaches offer a systematic methodology to address laptop vulnerabilities. A similar framework can be extended to address security vulnerabilities of other mobile computing devices in addition to notebooks and laptop computers. A real case scenario from a network in a university college in the southeastern U.S. is used to illustrate the proposed framework.

Keywords - mobile computing; cybersecurity; vulnerability; managerial approach

I. INTRODUCTION

Recent trends of globalization, outsourcing, off-shoring, and cloud computing have changed the structure of organizations and cyberspace. Information is no longer confined within the walls of an organization. Today's organizations are constantly allowing their suppliers to access their supply chain management systems, customers to retrieve product information from their electronic commerce systems, and their own employees to log on to the organizations' intranet. Organizations use remote access to information systems to streamline their business processes, become operationally efficient, and gain competitive advantage. However, the global reach of information systems has raised concerns over security and has made organizations more vulnerable to security threats.

Organizations must pay special attention to cybersecurity vulnerabilities and ensure that their notebooks, laptops, and other mobile devices and networks are not compromised as a result of this increase in mobility [1]. A recent study about software vendors indicated that organizations lose approximately 0.6 percent in stock price when a vulnerability is reported, and the impact is more severe when the vulnerability flaws are not addressed in advance [2]. However, while most organizations consider vulnerability management critical to their operations, fewer than 25 percent have vulnerability management as an integrated part of their operations [3]. This paper offers a managerial framework to address the issues of information systems vulnerabilities with a special focus on laptop computers and their use for remote access to organizational networks.

The proposed framework can help system administrators to assess the vulnerabilities associated with using mobile laptops to remotely access local area networks (LAN) or wireless local area networks (WLAN). Once an assessment is made, the network administrator can address such vulnerabilities in a systematic and efficient manner. Also, the framework suggests a step-by-step procedure to address such vulnerabilities when the system is under attack, or when one or more threats are present.

The paper is organized as follows. First, a brief discussion of vulnerabilities of mobile laptops and their use for remotely accessing a given network is provided. The next section discusses the modeling framework and presents the practical recommendations for system administrators. The framework includes a proactive systematic approach to continuously evaluate the set of vulnerabilities and a reactive approach for dealing with vulnerabilities when one or more threats are present. Finally, conclusions and several practical recommendations are provided.

II. VULNERABILITIES OF MOBILE COMPUTING

During the last two decades the popularity of notebooks and laptops has increased significantly. They have been and will continue to be the computers of choice for individuals and organizations. Forrester Research recently reported that laptop sales in the U.S. overtook desktop sales 44 percent to 38 percent in 2009 and 44 percent to 32 percent in 2010 [4]. The same report predicts that laptop sales will remain unchanged in the 42-44 percent range for the next few years while desktop sales will gradually decline to 18 percent in 2015. Laptops have become popular because they allow professionals and




knowledge workers to access their networks when they are travelling or from home offices, and at the same time they offer storage and processing capabilities similar to, or even better than, desktops.

The shift toward mobile computing is associated with a new set of vulnerabilities for information systems. Mobile laptops are considered by most organizations as the greatest security threat and the most difficult to maintain [3]. A survey published in 2006 indicated that in 27 percent of the cases, it took longer than 10 days to deploy critical patches to mobile laptops [3]. A timely and efficient response to laptop vulnerabilities must be a major concern for organizations and their system administrators.

Mobile computing vulnerabilities can be classified into three major categories: physical vulnerability, system vulnerability, and network access vulnerability. A brief discussion of these categories is provided below, along with suggested courses of action.

A. Physical Vulnerability

Laptops are mobile computers and they travel with their owners or users. There is a greater chance for laptops to be lost or stolen in airports, hotels, and meeting auditoriums. Physical vulnerability is not only associated with the loss of hardware; it is also associated with the loss of valuable data and sensitive information. Another form of physical vulnerability occurs when laptops are left open and unattended, which leads to exposure of sensitive information and documents and the ability for network access.

System administrators must continuously raise awareness about the importance of physical security and remind laptop users of the consequences of this vulnerability. In some cases it is necessary to secure the rooms or offices where the laptop is located; other times it is necessary to fasten the laptop to a non-movable object.

B. System Vulnerability

Laptop computer systems are as vulnerable as any other computer system in the organization. A recent survey on laptop vulnerability assessment indicates that the most significant types of vulnerabilities are missing security patches and updates, misapplied and outdated patches, outdated virus and spyware definition files, configuration weaknesses that create exposures, and missing or deficient security applications, topologies, and processes [5]. Remote laptops can be physically accessed more easily than desktops. As such, non-secure laptop systems pose greater vulnerability than desktop systems.

System administrators must prepare a schedule of updates for security patches, antivirus programs, and other security programs. It is very important to follow the schedule and allow users to update their systems as soon as a new update becomes available.

C. Network Access Vulnerability

The need to access LAN and WLAN using mobile laptops creates the single most significant set of vulnerabilities for the organizational cyberspace. Laptops are used to provide e-mail access, Internet access, and file transfer protocol (FTP) access. Such actions create an environment for opening potentially harmful attachments, allowing potential unauthorized access to important files, and creating potential for sniffing, session hijacking, IP address spoofing, and denial of service attacks. In general, using a laptop to access a WLAN is more susceptible to attacks because the WLAN includes both the organization's internal network and the general public network segments. For example, WLANs can be susceptible to attacks such as traffic analysis, eavesdropping, brute force attacks, renegade access points, and masquerading attacks.

System administrators and laptop users can address network access vulnerabilities through several courses of action. They can formulate and implement network access security policies, require periodic change of login information and enforce a policy for strong passwords, clearly define user privileges (read, write, delete) and user access, and enforce secure access settings and avoid access from open networks.
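For illustration only, the short Python sketch below shows one way two of the controls listed above, a strong-password policy and periodic password rotation, could be checked programmatically. The thresholds (10-character minimum, 90-day maximum age) are assumptions for the example and are not specified in the paper.

import re
from datetime import date, timedelta

# Illustrative policy values; the paper does not prescribe specific thresholds.
MIN_LENGTH = 10
MAX_PASSWORD_AGE = timedelta(days=90)   # "periodic change of login information"

def password_is_strong(password):
    """Check a candidate password against a simple strength policy."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

def password_expired(last_changed, today=None):
    """Flag accounts whose password is older than the allowed age."""
    today = today or date.today()
    return today - last_changed > MAX_PASSWORD_AGE

if __name__ == "__main__":
    print(password_is_strong("Blackboard#2011"))                   # True
    print(password_expired(date(2010, 11, 1), date(2011, 3, 1)))   # True -> request a change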



III. MANAGING VULNERABILITIES OF LAPTOP COMPUTERS AND NETWORK ACCESS

The identification of physical, system, and network access vulnerabilities allows the system administrator to prepare a course of action to address these vulnerabilities. It is very important that a continuous improvement plan is in place and vulnerabilities are dealt with in a timely manner, preferably before a threat occurs. Such an approach requires that the security perspective be shifted from technical to managerial. The main goal of addressing vulnerabilities will be to improve business resiliency and continuity [6].

A. Managing Vulnerabilities: No Present Threat

System administrators must continuously work to reduce the number of vulnerabilities present at any time during normal business operations. Even when there is no immediate threat, a systematic, process-based, proactive approach must be followed. This approach has three major steps:

1. Identify present vulnerabilities in the IT security area
2. Rate vulnerabilities based on the potential damage and likelihood of attack
3. Address vulnerabilities with a specific course of action

1) Identification of Vulnerabilities

During normal business operations of the organizational cyberspace, when there is no threat to the system, system administrators must evaluate potential vulnerabilities of the system and, among them, vulnerabilities of laptop computers and their access to the organizational network. The literature review and practical experience have identified a series of vulnerabilities for any particular information system. Reference [7] suggests a series of vulnerability categories related to network access, as shown in the first column of Table I.

System administrators must identify which vulnerabilities from this list are present in his or her network. For those vulnerabilities which are present, the administrator must specify any symptom(s), rating, and required action(s). This process is illustrated with a real case scenario as described below.

TABLE I. LIST OF VULNERABILITIES
(Columns: Vulnerability | Present? | Symptoms | Rating | Action Required)

- Password cracking | Yes | Several faculty members use the same password to access several services such as Blackboard, Banner, and a shared server with sensitive research documents | High | Send a memo with guidelines for strong passwords and request password changes
- Network and system information gathering | - | - | - | -
- User enumeration | - | - | - | -
- Backdoors, Trojans and remote controlling | - | - | - | -
- Gaining access to remote connections and services | Yes | Students are using their laptops to access student records using the unsecured wireless network | High | Enforce secure wired or wireless connection to sensitive data
- Privilege and user escalation | - | - | - | -
- Spoofing | - | - | - | -
- Misconfigurations | - | - | - | -
- Denial-of-service (DoS) and buffer overflows | - | - | - | -
- Viruses and worms | Yes | Several laptops and desktops are infected | High | Update antivirus programs and scan and clean the infected computers
- Hardware specific | - | - | - | -
- Software specific and updates | Yes | Several new programs need to be updated in the faculty laptops and desktops | Low | Update and install new patches to improve security
- Security policy violations | Yes | Students use classroom and laboratory computers to access gaming websites; some faculty members leave open laptops in unlocked offices | Moderate | Send a memo and remind students and faculty of security policies related to this vulnerability

Timothy Parker is a systems administrator at the College of Business, an AACSB-accredited institution in a regional university in the southeastern U.S. The college has two computer laboratories, four computer classrooms, and many lecturing podiums equipped with workstations and projectors. The college has an inventory of 78 laptops that are distributed to faculty members for their research and teaching needs. The college has several LANs, a secure WLAN, and an open wireless network. Faculty members use their laptops to access student information, classroom information, and research files that are stored in several drives around the college's LAN. Students also use their own laptops and mobile devices to access classroom information and other files located in the network.

Mr. Parker is aware that many faculty members use the same password to access several services, including Blackboard, Banner, and servers with sensitive information. Students also use their laptops to access their records using an unsecured wireless network. Several laptops and desktops are infected due to students downloading harmful documents via the Internet. Several new programs on the faculty laptops and desktops need to be updated. Students use classroom and laboratory computers to access gaming Web sites. As Mr. Parker was walking through the building he noticed that some faculty members had left their offices open or unlocked with laptops already logged onto the network.

2) Vulnerability Priority Ratings

A system's vulnerability rating represents a combination of the potential damage a certain attack poses on the vulnerability and the attractiveness of the vulnerability in the eyes of an intruder. The following three vulnerability ratings are suggested:

• High: This vulnerability is very attractive to the intruder and has high consequences if it is exploited. Mr. Parker has rated password cracking, gaining access to remote connections, and the presence of viruses and worms in this category.

• Moderate: This vulnerability is somewhat attractive to the intruder and the consequences if it is exploited are moderate. Mr. Parker has rated security policy violations in this category.

• Low: This vulnerability is not very attractive to the intruder and has low consequences if it is exploited. Mr. Parker has rated software specific and updates in this category.

3) Course of Actions

Using the priority ratings identified in the previous step, Mr. Parker generates a working plan to address the vulnerabilities in the College of Business. Specifically, Mr. Parker must immediately send a memo with guidelines for strong passwords and request password changes, enforce secure wired or wireless connection to sensitive data, update antivirus programs, scan and clean the infected computers, send a memo to remind students and faculty of relevant security policies, and update and install new patches.
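To make the identify-rate-act cycle concrete, the short Python sketch below encodes a few rows of the Table I register as data and sorts the present vulnerabilities by rating to produce a working plan such as Mr. Parker's. The data structure, rating order, and function names are illustrative assumptions rather than part of the paper's framework.

from dataclasses import dataclass

# Rank order used for sorting; mirrors the High/Moderate/Low ratings above.
RATING_ORDER = {"High": 0, "Moderate": 1, "Low": 2}

@dataclass
class Vulnerability:
    name: str
    present: bool
    symptoms: str = ""
    rating: str = ""      # "High", "Moderate", or "Low" when present
    action: str = ""      # required course of action when present

# A few rows of Table I encoded as data (abridged).
REGISTER = [
    Vulnerability("Password cracking", True,
                  "Faculty reuse one password across Blackboard, Banner, and a shared server",
                  "High", "Send a memo with password guidelines and request password changes"),
    Vulnerability("Spoofing", False),
    Vulnerability("Viruses and worms", True,
                  "Several laptops and desktops are infected",
                  "High", "Update antivirus programs; scan and clean infected computers"),
    Vulnerability("Software specific and updates", True,
                  "Several new programs need updating on faculty machines",
                  "Low", "Update and install new patches"),
]

def working_plan(register):
    """Return (rating, name, action) for present vulnerabilities, most severe first."""
    present = [v for v in register if v.present]
    present.sort(key=lambda v: RATING_ORDER.get(v.rating, len(RATING_ORDER)))
    return [(v.rating, v.name, v.action) for v in present]

if __name__ == "__main__":
    for rating, name, action in working_plan(REGISTER):
        print(f"[{rating}] {name}: {action}")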
                                                                                  antivirus programs, scan, and clean the infected computers,




B. Managing Vulnerabilities: Present Threat

When one or more threats are present, system administrators must change the mode of operation from proactive to reactive. When the system is under attack, a quick evaluation of the threats and a quick reaction to these threats is necessary. The reaction is immediate but still systematic, and the following steps must be followed:

1. Create a vulnerability-threat matrix
2. Evaluate the severity of each threat for each vulnerability
3. Address each vulnerability-threat combination with a specific course of action

1) Create a vulnerability-threat matrix

The vulnerability-threat assessment matrix can be utilized with any information system or part of it. The matrix approach is often suggested in the literature [8] [9]. The matrix is used to map the severity of a given threat with a given vulnerability and to systematically generate an emergent and effective response. Table II is an illustration of this matrix from the College of Business case.

TABLE II. VULNERABILITY-THREAT MATRIX
(Columns: Unaddressed Vulnerabilities | Threat 1: Spoofing Attack | Threat 2: New Virus is Spreading at a High Rate | Action Required. Cell severities were color coded in the original; see the text below.)

- Gaining access to remote connections and services | | | Enforce secure wired or wireless connection to sensitive data
- Viruses and worms | | | Update antivirus programs and scan and clean the infected computers
- Software specific and updates | | | Update and install new patches to improve security

Mr. Parker has addressed several vulnerabilities but is still working on enforcing secure connections, performing the latest update to the antivirus programs, and scanning and cleaning the several infected computers. Suddenly, Mr. Parker is made aware of two security threats. First, a spoofing e-mail is circulating among the faculty members' and students' electronic mailboxes. The e-mail asks recipients to log in to a Web site and verify their login information or their e-mail service will be interrupted. Second, several faculty members are reporting that many computers in the computer lab have stopped responding due to what seems to be a Trojan attack. As the first step, Mr. Parker builds the vulnerability-threat matrix as shown in Table II. Only the unaddressed vulnerabilities are listed in this table along with their typical courses of action.

2) Evaluate the severity of each threat for each vulnerability

Each cell in Table II represents the severity (or risk) of a given threat to a still existing vulnerability. High severity or risk combinations are designated in red, moderate severity combinations are designated in yellow, and low severity combinations in green. The interpretations of the severity ratings are provided below:

• (Red) Severity of this combination is high. The course of action recommended to mitigate these threats/vulnerabilities should be implemented immediately.
• (Yellow) Severity of this combination is moderate. The course of action recommended to mitigate these threats/vulnerabilities should be implemented as soon as possible.
• (Green) Severity of this combination is low. The course of action recommended to mitigate these threats/vulnerabilities will improve security, but is of less urgency.

As shown in Table II, the spoofing attack currently presents a moderate level of severity with regard to gaining remote access to the network. In general, spoofing can be very devastating for the organization (college), and the use of laptop computers to access the network is a weakness for the system. However, Mr. Parker is happy to see that his last memo on security policy, the importance of strong passwords, and his action to request password changes have transformed this potentially high-risk threat-vulnerability combination into a moderate one. On the other hand, the spread of new viruses is causing significant damage to the laptops and other machines that are already infected or which do not have up-to-date antivirus protection.

3) Address each vulnerability-threat combination with a specific course of action

Based on the findings from the previous step, system administrators need to identify the immediate course of action to address the most severe vulnerability-threat combination. Specifically, Mr. Parker must update antivirus programs and scan and clean all the infected laptop and desktop computers. Simultaneously, he needs to install new patches to improve security for the rest of the network. Additionally, Mr. Parker must address the moderate vulnerability-threat combination by enhancing the security of the wired and wireless networks.
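The reactive procedure can likewise be viewed as a small data structure: a matrix keyed by (vulnerability, threat) pairs whose cells hold a severity, from which mitigation actions are emitted most severe first. The Python sketch below is one assumed encoding of Table II for illustration; the severities shown follow the narrative above, and the code is not taken from the paper.

# Severity levels ordered from most to least urgent, mirroring the red/yellow/green coding.
SEVERITY_ORDER = ["high", "moderate", "low"]

# Typical course of action per unaddressed vulnerability (from Table II).
ACTIONS = {
    "remote connections and services": "Enforce secure wired or wireless connection to sensitive data",
    "viruses and worms": "Update antivirus programs; scan and clean infected computers",
    "software specific and updates": "Update and install new patches to improve security",
}

# Cells of the vulnerability-threat matrix: (vulnerability, threat) -> severity.
# Severities follow the narrative (spoofing x remote access = moderate; virus spread = high).
MATRIX = {
    ("remote connections and services", "spoofing attack"): "moderate",
    ("viruses and worms", "new virus spreading"): "high",
}

def response_plan(matrix, actions):
    """List (severity, vulnerability, threat, action), most severe combination first."""
    cells = sorted(matrix.items(), key=lambda kv: SEVERITY_ORDER.index(kv[1]))
    return [(sev, vuln, threat, actions[vuln]) for (vuln, threat), sev in cells]

if __name__ == "__main__":
    for sev, vuln, threat, action in response_plan(MATRIX, ACTIONS):
        print(f"{sev.upper():8} {threat} -> {vuln}: {action}")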




IV. SUMMARY AND RECOMMENDATIONS

Notebooks and laptops have become the computers of choice for professionals and managers who want to access their organizational networks while traveling or while working from home. With this popularity they also pose the greatest security challenges for system administrators. Laptops and their use to access organizational networks produce three major vulnerability categories: physical, system, and network access. The paper discusses these vulnerabilities and offers a framework for addressing them.

In general, there are two scenarios under which a system administrator can address the vulnerabilities. The first scenario assumes no presence of a given threat and is designed to provide a systematic and proactive course of action to continuously improve the security of the laptops and their use to access organizational LANs or WLANs. The scenario suggests a course of action based on a vulnerability rating system. The vulnerabilities are rated based on two factors: the degree of attractiveness to a potential intruder and the consequences/impact of the vulnerability for the organization.

The second scenario assumes the presence of one or more security threats. This scenario is designed to offer a reactive, but systematic, course of action. A matrix is designed, and in each cell of the matrix, the severity of a vulnerability-threat combination is represented with a color-coded sign. Again, a course of action is suggested starting with the most severe combinations, followed by moderate combinations, and ending with the low-risk combinations.

V. CONCLUSIONS

This paper offers a managerial framework for addressing laptop physical, system, and network access vulnerabilities. The purpose of the framework is to assist system administrators to create effective action plans to deal with such vulnerabilities. A proactive approach to eliminating vulnerabilities is suggested and a step-by-step methodology is offered. When security threats are present, a matrix-based approach is suggested. The matrix can help the system administrator identify the most severe attack/vulnerability combination and mitigate the risk of such threats. The matrix-based approach is a reactive approach, but it is necessary to guide the system administrator when the networks or laptop computers are under attack. A real case scenario from a university college is used to illustrate the framework. The suggested framework is not limited to the use of laptop computers; it can be used by organizations to monitor vulnerabilities in other areas of organizational cyberspace.

REFERENCES

[1] CDW-G (White Paper), "Mobile computing security: protecting data on devices roaming on the perimeter," retrieved March 7, 2011, from http://www.edtechmag.com/higher/docs/2008/09/mobile-computing-security.pdf.
[2] R. Telang and S. Wattal, "An empirical analysis of the impact of software vulnerability announcements on firm stock price," IEEE Transactions on Software Engineering, Vol. 33 (8), pp. 544-557, 2007.
[3] B. Bosen, "Vulnerability management survey," Trusted Strategies, 2006, retrieved February 7, 2011, from http://www.trustedstrategies.com/papers/vulnerability_management_survey.pdf.
[4] E. Schonfeld, "Forrester projects tablets will outsell netbooks by 2012, desktops by 2013," June 2010, retrieved February 9, 2011, from http://techcrunch.com/2010/06/17/forrester-tablets-outsell-netbooks/
[5] Fiberlink, "Laptop vulnerability assessment service," 2011, retrieved February 8, 2011, from http://feeneywireless.com/fetchdoc.php?docID=90856300.
[6] J. Allen, "The art of information security governance," Qatar Information Security Forum, 2008, Software Engineering Institute, retrieved February 8, 2011, from http://www.cert.org/archive/pdf/QISF_Allen_022408.pdf.
[7] H. S. Venter and J. H. Eloff, "Vulnerabilities categories for intrusion detection systems," Computers & Security, Vol. 21 (7), pp. 617-619, 2002.
[8] S. Goel and V. Chen, "Information security risk analysis - a matrix-based approach," 2005, retrieved February 7, 2011, from http://www.albany.edu/~goel/publications/goelchen2005.pdf.
[9] N. A. Renfroe and J. L. Smith, "Threat/vulnerability assessments and risk analysis," November 2010, retrieved February 7, 2011, from http://www.wbdg.org/resources/riskanalysis.php.

AUTHORS PROFILE

Arben Asllani is a Post Doctoral Fellow in Cybersecurity at the Center for Security Studies at the University of Maryland University College (UMUC) and a UC Foundation Professor of Management at the University of Tennessee at Chattanooga. He has published over 24 journal articles and presented and published over twenty conference proceedings. His most recent research has been published in such journals as Omega, European Journal of Operational Research, Knowledge Management, and Computers & Industrial Engineering.

Amjad Ali is the Director of the Center for Security Studies and a Professor of Cybersecurity at University of Maryland University College. He played a significant role in the design and launch of UMUC's cybersecurity programs. He teaches graduate level courses in the area of cybersecurity and technology management. He has served as a panelist and a presenter in major conferences and seminars on the topics of cybersecurity and innovation management. He is a member of the Maryland Higher Education Commission (MHEC) Cybersecurity Advisory Council, providing advice and help on how MHEC can respond best to the higher education needs of the growing cybersecurity workforce.








             Reduction of PAPR for OFDM Downlink
            and IFDMA Uplink Wireless Transmissions
Bader Hamad Alhasson, Department of Electrical and Computer Engineering, University of Denver, Denver, USA
Mohammad A. Matin, Department of Electrical and Computer Engineering, University of Denver, Denver, USA


Abstract-- One of the major drawbacks of OFDM is the high peak-to-average power ratio (PAPR) of the transmitted signals. In this paper, we propose a novel low-complexity clipping scheme applicable to Interleaved-FDMA uplink and OFDM downlink systems for PAPR reduction. We show that the PAPR performance of the proposed Interleaved-FDMA scheme is better than that of traditional OFDMA for the uplink transmission system. Our reduction of PAPR is 53% when IFDMA is used instead of OFDMA in the uplink transmission. We also examine an important trade-off relationship between clipping distortion and quantization noise when the clipping scheme is used for OFDM downlink systems. Our results show that we were able to reduce the PAP ratio by 50% and reduce the out-of-band radiation caused by clipping for the OFDM downlink transmission system.

Keywords-- Signal to quantization noise ratio (SQNR); localized frequency division multiple access (LFDMA); interleaved frequency division multiple access (IFDMA); peak-to-average power ratio (PAPR); clipping ratio (CR); single carrier frequency division multiple access (SC-FDMA).

I. INTRODUCTION

Wireless communication has experienced an incredible growth in the last decade. Two decades ago the number of mobile subscribers was less than 1% of the world's population [1]. In 2001, the number of mobile subscribers was 16% of the world's population [1]. By the end of 2001 the number of countries worldwide having a mobile network had increased tremendously from just 3% to over 90% [2]. In fact, the number of mobile subscribers worldwide exceeded the number of fixed-line subscribers in 2002 [2]. As of 2010 the number of mobile subscribers was around 73% of the world's population, which amounts to around 5 billion mobile subscribers [1].

In addition to mobile phones, WLAN has experienced a rapid growth during the last decade. IEEE 802.11 a/b/g/n is a set of standards that specify the physical and data link layers in ad-hoc mode or access-point mode for current wide use. In 1997 the WLAN standard IEEE 802.11, also known as Wi-Fi, was first developed with speeds of up to 2 Mbps [2]. At present, WLANs are capable of offering speeds of up to 600 Mbps for IEEE 802.11n, utilizing OFDM as a modulation technique in the 2.4 GHz and 5 GHz license-free industrial, scientific and medical (ISM) bands. It is important to note that WLANs do not offer the type of mobility that mobile systems offer. In our previous work, we modeled a mix of low mobility (1.8 mph) and high mobility (75 mph) with a delay spread that is consistently smaller than the guard time of the OFDM symbol, in order to predict complex channel gains per user by means of reserved pilot subcarriers [3].

Orthogonal frequency division multiplexing (OFDM) is a broadband multicarrier modulation scheme, and multi-carrier transmission has become an active research area [4-6]. The OFDM modulation scheme leads to better performance than a single-carrier scheme over wireless channels, since OFDM uses a large number of orthogonal, narrowband sub-carriers that are transmitted simultaneously in parallel. We investigated the channel capacity and bit error rate of MIMO-OFDM [7]. The use of the OFDM scheme is a solution to the increasing demand of future bandwidth-hungry wireless applications [8]. Wireless technologies using OFDM include Long-Term Evolution (LTE), the standard for 4G cellular technology; ARIB MMAC in Japan, which has adopted OFDM as the physical layer for future broadband WLAN systems; ETSI BRAN in Europe; and wireless local-area networks (LANs) such as Wi-Fi. Due to the robustness of OFDM systems against multipath fading, the integration of OFDM technology and radio over fiber (RoF) technology made it possible to transform the high-speed RF signal into an optical signal utilizing optical fibers with broad bandwidth [9]. Nevertheless, OFDM suffers from a high peak-to-average power ratio (PAPR) in both the uplink and downlink, which results in making the OFDM signal a complex signal [10].
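Since the rest of the paper revolves around this quantity, it is worth recalling the usual definition of the PAPR over one symbol interval; the notation below is generic and is not taken from the paper itself:

    \mathrm{PAPR}\{x(t)\} = \frac{\max_{0 \le t < T_u} |x(t)|^{2}}{\frac{1}{T_u}\int_{0}^{T_u} |x(t)|^{2}\, dt}

i.e. the ratio of the peak instantaneous power of the symbol to its average power, normally quoted in dB as 10 log10 of this ratio.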








                                                                                                  N C 1                     N C 1
                                                                                     x (t )       
                                                                                                   K 0
                                                                                                            x k (t )        a
                                                                                                                             K 0
                                                                                                                                         m
                                                                                                                                                e j 2  k  ft               (1)


                   Max linear limit
                                                                                  Where xk (t)         is the       kth        modulated subcarrier at a
                                                                                  frequency f k  k .f . The modulation symbol                                              a k ) is
                                                                                                                                                                               (m

          Figure.1 Fresnel diagram illustrating the PAPR issue
                                                                                  applied to the kth subcarrier during the mth OFDM
Figure.1 shows a constructive addition of subcarriers on a                        interval which is mTu  t  ( m  1)TU . Therefore, during
random basis which causes the peak-to-average power ratio
problem. The outcome of high PAPR on the transmitted                              each OFDM symbol interval transmission, N C modulation
OFDM symbols results in two disadvantages high bit error                          symbols are transmitted in parallel. The modulation symbols
rate and inference between adjacent channels. This would                          are dependent on the use of this technology and can be any
imply the need for linear amplification. The consequence of                       form of modulation such as 16QAM, 64QAM or QPSK. The
linear amplification is more power consumption. This has                          choice of which modulation scheme to implement varies
been an obstacle that limits the optimal use of OFDM as a
modulation and demodulation technique [11-14]. The problem                        depending on the environment and application.
of PARP affects the uplink and downlink channels differently.
On the downlink, it’s simple to overcome this problem by the
                                                                                                                                m
                                                                                                                               a0        e j 2f 0t x (t )
use of power amplifiers and distinguished PAPR reduction                                                                                             0

methods. These reduction methods can’t be applied to the                             m     m       m
                                                                                   a0 , a1 ,..., a NC 1
                                                                                                                               a1m       e j 2f1t x (t )
uplink due to their difficulty in low processing power devices                                                   Serial to                          1

such as mobile devices. On the uplink, it is important to                                                        parallel                                                          x (t )
                                                                                                                                                                         +
reduce the cost of power amplifiers as well.
                                                                                                                                             j 2f N C 1t
PAPR reduction schemes have been studied for years [15-18].                                                                     m        e
                                                                                                                              a N C 1                   x N C 1 (t )
Some of the PAPR reduction techniques are: Coding
techniques which can reduce PAPR at the expense of                                                                      f k  kf
bandwidth efficiency and increase in complexity [19-20]. The
probabilistic technique which includes SLM, PTS, TR and TI
can also reduce PAPR; however; suffers from complexity and                                      Figure 3 OFDM modulation valid for time interval
spectral efficiency for large number of subcarriers [21-22].                                              mT u  t  ( m  1 ) T U .

We perform an analysis on a low complexity clipping and                           Subcarriers spacing range hundreds of kHz to a small number
filtering scheme to reduce both the PAPR and the out-of-band-                     of kHz depending on the environment of operation. Once the
radiation caused by the clipping distortion in downlink                           spacing between subcarriers has been specified, then the
systems. It was also shown that a SC-FDMA system with                             choice of how many subcarriers to be transmitted in parallel
Interleaved-FDMA or Localized-FDMA performs better than                           has to be done. It is important to note that allocation of the
Orthogonal-FDMA in the uplink transmission.                                       number of subcarriers is dependent on the transmission
                                                                                  bandwidth. For instance, LTE uses 15 kHz as the basic
                    II.        SYSTEM MODEL                                       spacing with a 600 subcarriers assuming the operation is in the
                                                                                  10 MHZ spectrum.
                                                                                  Let us consider two modulated OFDM subcarriers x k 1 (t ) and
                                                                                  x k 2 (t ) . The two signals are orthogonal over the time period
      IFFT                    Clipping                 Filtering
                                                                                  mTu  t  (m  1)TU
                                                                                               ( m 1)Tu

                                                                                                   
Figure 2 Clipping and Filtering at the Transmitter of OFDM system                                                        *
                                                                                                           x k 1 ( t ) x k 2 ( t ) dt
In complex baseband, an OFDM signal                    x(t ) during time                         mT    u

interval mTu  t  ( m  1)TU can be expressed as
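As an illustration of (1), the short Python sketch below (not part of the paper; it assumes NumPy and an arbitrarily chosen number of subcarriers) builds one baseband OFDM symbol by mapping QPSK symbols onto N_C subcarriers with an IFFT, which is the discrete-time equivalent of the sum in (1), and then measures its PAPR:

    import numpy as np

    # Hypothetical illustration of Eq. (1): one OFDM symbol built from QPSK symbols.
    Nc = 256                                   # number of subcarriers (assumed value)
    bits = np.random.randint(0, 2, size=2 * Nc)
    # QPSK mapping: (b0, b1) -> (+-1 +- j)/sqrt(2)
    qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

    # The IDFT plays the role of the sum of a_k^(m) * exp(j*2*pi*k*df*t),
    # sampled at t = n*Tu/Nc for n = 0..Nc-1.
    x = np.fft.ifft(qpsk) * np.sqrt(Nc)        # unitary scaling

    papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
    print(f"PAPR of this symbol: {papr_db:.2f} dB")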






              ( m 1) Tu                                                                   add up coherently with identical phases. The largest PAPR

                  a
                                   j 2 k1 f        j 2 k 2  f
                          *
                            a e
                       k1 k 2                   e                    dt  0                happens randomly with a very low probability. The main
                                                                                           interest is actually in the probability of the occurrence of high
                mTu                                                                        signal power. This high signal power is out of the linear range
                                                                                           of high power amplifiers. The probability PAPR is below a
                   for k1  k 2                                                  (2)       certain threshold can be expressed as:

Therefore, OFDM transmission can be expressed as the
                                                                                                      P(PAPR z)  F ( z) N  (1  exp(z))N                                    (6)
modulation of a set of orthogonal functions  k (t ) , where
                             j 2  k  ft                                                  Equation (6) holds for samples that are mutually uncorrelated;
    k (t )       =   e                             0  t  Tu           ,                 however; when over sampling is applied then it doesn’t hold.
                                                                                           This is due to the fact that a sampled signal doesn’t certainly
                                                     0 otherwise                 (3)
                                                                                           include the maximum point of the original continuous time
                                                                                           signal. Nevertheless, it is important to note that it is difficult to
       Pilot                                                                               derive the exact cumulative distribution function for the peak
                                                                                           power distribution. The following simplified proposed PAP
                                    User A                      User B                     distribution will be used:

                                                                                                                F ( z ) N  (1  exp( z 2 ))N                                 (7)

                                                                                           Where  has been found by fitting the theoretical CDF into
                                                                                           the actual one. From our simulation, it was shown that  =2.8
                                   Frequency                                               is suitable for adequately a large number of subcarriers.
                                  Guard Band

                                                                                           The theoretical and simulated curves are plotted in Figure 5
                                                                                           for different number of subcarriers. As N decreases, the
  Figure 4. OFDM available bandwidth is divided into subcarriers that are                  deviation between the obtained simulation and theoretical
                mathematically orthogonal to each other
                                                                                           results increases, which indicates that equation (7) is quite
                                                                                           accurate for N>256. It is worth noting that equation (6) is
       III.           DISTRIBUTION of THE PAP RARIO                                        more accurate for large CDF values as shown in Figure 5.

                                                                                                       0
                                                                                                      10
The complex baseband signal for one OFDM symbol can be                                                                                                            Theoretical
rewritten as:                                                                                                                                                     Simulated



                                                                                                                             N=16
                           1 N
          x(t )              an exp( j n t )                                                                                                                   N=1024

                           N n1                                                 (4)                                      N=32
                                                                                               CCDF




                                                                                                       -1
                                                                                                      10
                                                                                                                                 N=64
Where N is the number of subcarriers and                              an are the
modulating symbols. From the central limit theorem, we can                                                                               N=128
assume that the real and imaginary parts of the time domain
complex OFDM signal x(t ) have a Gaussian distribution for                                                                                    N=512

a large number of subcarriers. Therefore, the amplitude of the
OFDM signal x(t ) follows a Rayleigh distribution, whereas                                             -2
                                                                                                      10
                                                                                                            2       3    4          5     6      7     8      9      10         11
power follows a central chi-square distribution with the                                                                                  PAPR[dB]
cumulative distribution expressed as:
                                                                                           Figure 5 OFDM system with N-point FFT. CCDFs of signal PAP ratio with
                                                                                             N=16, 32, 64, 128 and 1024. Solid lines are calculated; dotted lines are
                       F ( z)  1  e z                                          (5)                                      simulated.


OFDM system with a certain number of subcarriers suffers
from maximum power which arises when all of the subcarriers
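The CCDF curves of Figure 5 can be reproduced in a few lines. The sketch below is an illustrative NumPy implementation rather than the authors' simulation code; it assumes QPSK modulation, Nyquist-rate sampling, 10,000 trial symbols and a dB grid chosen for convenience, and it compares the empirical CCDF against the complement of the baseline approximation (6):

    import numpy as np

    def papr_db(x):
        # Peak-to-average power ratio of one complex baseband symbol, in dB.
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    def ccdf_papr(N, num_symbols=10000):
        # Empirical CCDF of PAPR for Nyquist-sampled, QPSK-modulated OFDM symbols.
        vals = np.empty(num_symbols)
        for i in range(num_symbols):
            qpsk = (np.random.choice([-1, 1], N) + 1j * np.random.choice([-1, 1], N)) / np.sqrt(2)
            vals[i] = papr_db(np.fft.ifft(qpsk) * np.sqrt(N))
        z_db = np.arange(4, 12, 0.25)
        return z_db, np.array([(vals > z).mean() for z in z_db])

    N = 128
    z_db, ccdf_sim = ccdf_papr(N)
    z_lin = 10 ** (z_db / 10)
    ccdf_theory = 1 - (1 - np.exp(-z_lin)) ** N       # complement of Eq. (6)
    for z, s, t in zip(z_db, ccdf_sim, ccdf_theory):
        print(f"{z:5.2f} dB   simulated {s:.4f}   theoretical {t:.4f}")

Under these assumptions the simulated curve should track the prediction of (6) reasonably well for moderate CCDF values, in line with the behaviour described for Figure 5.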






IV. CLIPPING AND SIGNAL TO QUANTIZATION NOISE RATIO

An OFDM signal tends to have a large peak-to-average power ratio when the subcarriers by chance reach their highest amplitudes with identical phases at the same time. The likelihood of such an event is rare, yet it does occur. As the number of subcarriers increases, the maximum possible power increases, as shown in Figure 5, while the probability of actually reaching that maximum power decreases as N increases. This is due to the statistical magnitude distribution of the time-domain OFDM signal.

The simplest approach to reduce the PAP ratio is to clip the amplitude of the signal to a desired maximum level. Although clipping is the simplest method, in our method it also improves the signal-to-quantization-noise ratio (SQNR) in the analog-to-digital conversion. As the clipping threshold increases, clipping distortion decreases at the expense of PAPR and quantization noise. On the other hand, as the clipping threshold decreases, PAPR and quantization noise decrease at the expense of clipping distortion. Therefore, it is important to take this trade-off between clipping distortion and quantization noise into consideration when picking the number of quantization bits and the clipping threshold.

Figure 6 shows the SQNR values of an OFDM signal quantized with 5, 6, 7 and 8 bits against the clipping threshold, for N = 128. The optimal clipping threshold that maximizes the SQNR fluctuates with the quantization level; however, we can see that the maximum points occur at a normalized clipping threshold of approximately 3.5. Clipping distortion is more significant to the left of the maximum points, because of the low clipping threshold, whereas it is less significant to the right of the maximum points, where the clipping threshold is higher.

Figure 6. Clipping threshold against SQNR of the quantized OFDM signal, N = 128.

The performance of any PAPR reduction scheme is evaluated based on out-of-band radiation, in-band ripple, the distribution of PAPR and the BER performance [23].

V. SIMULATION AND RESULTS

To evaluate the performance of the clipping and filtering method, the following parameters were used in the simulation.

Table I. Simulation parameters

    N                        256
    Clipping ratio           1.4
    Carrier frequency        5 MHz
    Modulation               QPSK
    Sampling frequency       10 MHz
    Bandwidth                1 MHz
    Guard interval samples   32

Figure 7. Baseband signal: magnitude |x'[m]| versus time and power spectral density of the oversampled IFFT output.

Figure 7 shows the power spectral density of the oversampled baseband signal, i.e. the output of the IFFT. Let x'(s) be the output of the IFFT. Then it can be expressed mathematically as

    x'(s) = \frac{1}{L \cdot N} \sum_{k=0}^{L \cdot N - 1} X'(k)\, e^{j 2\pi s f_k /(L \cdot N)}, \quad s = 0, 1, \ldots, NL - 1,
    with X'(k) = X(k) for 0 ≤ k < N/2 and NL - N/2 < k < NL, and X'(k) = 0 otherwise        (8)






where L, Δf, N and X(k) represent the oversampling factor, the subcarrier spacing, the number of subcarriers and the symbol carried by subcarrier k, respectively.

Figure 8. Baseband signal without clipping and filtering: histogram (Gaussian distribution) and power spectral density.

Figure 8 shows the power spectral density and a histogram of the baseband signal without clipping and filtering. We can see that the probability density function shows a Gaussian distribution of the signal.

Figure 9. Clipped passband signal (clipping ratio = 1.4): histogram and power spectral density, showing the out-of-band radiation due to clipping.

Clipping and filtering of OFDM has been studied [23]; however, these techniques reduce PAPR at the expense of increased system complexity and a high peak re-growth. Figure 9 shows that the level of out-of-band radiation increases as the OFDM signal passes through a nonlinear device. An OFDM transmitter emits out-of-band radiation when a set of subcarriers is modulated. Our results show less out-of-band power emission compared to traditional OFDM through the use of the low-complexity clipping and filtering technique.

Figure 10. Clipped and filtered passband signal (clipping ratio = 1.4): histogram and power spectral density, showing the reduction of out-of-band radiation after filtering.

The out-of-band radiation can be seen in Figures 9 and 10. It is clear that the out-of-band radiation increases after clipping; however, it decreases after filtering, and the signal shows a peak value beyond the clipping threshold, implying a slight peak re-growth in PAPR after filtering, as shown in Figure 10. To complete the evaluation of clipping and filtering, we have to look at the BER performance as the clipping ratio varies.

Figure 11. (a) PAPR distribution (CCDFs of the unclipped, clipped, and clipped-and-filtered signals); (b) BER performance versus Eb/No.
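For concreteness, the clipping-and-filtering chain of Figure 2 can be sketched as follows. This is an illustrative NumPy implementation under assumptions drawn from Table I (N = 256, QPSK, clipping ratio 1.4); it is not the authors' code, the oversampling factor L = 4 and the signal scaling are arbitrary choices, and the filtering step simply zeroes the out-of-band bins of the oversampled spectrum as in (8):

    import numpy as np

    N, L, CR = 256, 4, 1.4                      # subcarriers, oversampling factor, clipping ratio

    # QPSK symbols on N subcarriers
    X = (np.random.choice([-1, 1], N) + 1j * np.random.choice([-1, 1], N)) / np.sqrt(2)

    # Oversampled IFFT, cf. Eq. (8): place the N used bins at the edges of an LN-point spectrum
    Xp = np.zeros(L * N, dtype=complex)
    Xp[: N // 2] = X[: N // 2]
    Xp[-N // 2:] = X[N // 2:]
    x = np.fft.ifft(Xp) * L * np.sqrt(N)        # scaling is irrelevant for PAPR

    # Clip the magnitude at CR times the RMS amplitude (phase preserved)
    sigma = np.sqrt(np.mean(np.abs(x) ** 2))
    A = CR * sigma
    mag = np.abs(x)
    scale = np.minimum(1.0, A / np.maximum(mag, 1e-12))
    clipped = x * scale

    # Filter: remove the out-of-band components created by clipping
    C = np.fft.fft(clipped)
    C[N // 2: L * N - N // 2] = 0
    filtered = np.fft.ifft(C)

    papr = lambda s: 10 * np.log10(np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2))
    print(f"PAPR original {papr(x):.2f} dB, clipped {papr(clipped):.2f} dB, "
          f"clipped+filtered {papr(filtered):.2f} dB")

With these assumptions the clipped PAPR is capped at roughly 10 log10(CR^2), about 2.9 dB for CR = 1.4, and filtering restores a slight peak re-growth, matching the behaviour described around Figures 9-11.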






It can be seen from Figure 11(a) that, as the clipping ratio increases from right to left, the PAPR decreases dramatically after clipping and increases slightly after filtering. The simulation result in Figure 11(b) shows that the BER performance improves as the clipping ratio increases. Unlike OFDM, which is used for downlink transmission, SC-FDMA is utilized in the uplink transmission, where subcarriers are separated and designated for several mobile units. Each unit utilizes a number of subcarriers; let N_unit denote the number of subcarriers assigned to each unit for uplink transmission. The effectiveness of the reduction in PAPR is greatly influenced by the method used to assign N_unit to each unit [24].

The Discrete Fourier Transform (DFT) spreading technique is a promising solution to reduce PAPR because of its superiority in PAPR reduction performance compared to block coding, Selective Mapping (SLM), Partial Transmit Sequence (PTS) and Tone Reservation (TR) [25-26]. SC-FDMA and OFDMA are both multiple-access versions of OFDM. There are two subcarrier mapping schemes in single carrier frequency division multiple access (SC-FDMA) to allocate subcarriers between units: Distributed FDMA and Localized FDMA.

Figure 12 (a) QPSK: CCDF Pr(PAPR > PAPR0) for Orthogonal-FDMA, Localized-FDMA and Interleaved-FDMA.

Figure 12 shows the performance of PAPR when the number of subcarriers is 128 and the number of subcarriers assigned to each unit or mobile device is 32. This simulation helps in evaluating the PAPR performance with different mapping schemes and modulation techniques. In LFDMA each user transmission is localized in the frequency domain, whereas in DFDMA each user transmission is spread over the entire frequency band, making it less sensitive to frequency errors and providing frequency diversity.

Figure 12 (b) 16 QAM: CCDF Pr(PAPR > PAPR0) for Orthogonal-FDMA, Localized-FDMA and Interleaved-FDMA.

The three plots of Figure 12 show that when the single carrier is mapped either by LFDMA or DFDMA, it outperforms OFDMA, due to the fact that in an uplink transmission mobile terminals behave differently than a base station in terms of power amplification. In the uplink transmission PAPR is a more significant problem than on the downlink, because of the type and capability of the amplifiers used in base stations and mobile devices. For instance, when a mobile circuit's amplifier operates in the non-linear region because of PAPR, the mobile device consumes more power and becomes less power efficient, whereas base stations do not suffer from this consequence. Therefore, OFDM works better in the downlink transmission in terms of PAPR.

Figure 12 (c) 64 QAM: CCDF Pr(PAPR > PAPR0) for Orthogonal-FDMA, Localized-FDMA and Interleaved-FDMA.

Our results show the effect of using the Discrete Fourier Transform spreading technique to reduce PAPR for OFDMA, LFDMA and IFDMA with N = 128 and N_unit = 32.
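The mapping schemes compared in Figure 12 can be illustrated with the sketch below. It is an assumption-laden NumPy illustration, not the authors' simulation: a length-N_unit DFT precodes each user's QPSK block, which is then placed on interleaved or localized subcarriers of an N-point IFFT, with a localized block without DFT precoding taken as the OFDMA baseline and 2000 trial symbols chosen arbitrarily:

    import numpy as np

    N, N_unit = 128, 32                  # total subcarriers and subcarriers per user (as in Figure 12)
    Q = N // N_unit                      # spreading factor / number of users

    def papr_db(s):
        p = np.abs(s) ** 2
        return 10 * np.log10(p.max() / p.mean())

    def scfdma_symbol(mapping):
        # One user's QPSK block, DFT-spread, then mapped onto the system subcarriers.
        d = (np.random.choice([-1, 1], N_unit) + 1j * np.random.choice([-1, 1], N_unit)) / np.sqrt(2)
        D = np.fft.fft(d)                # DFT spreading (precoding)
        X = np.zeros(N, dtype=complex)
        if mapping == "interleaved":     # IFDMA: every Q-th subcarrier
            X[::Q] = D
        elif mapping == "localized":     # LFDMA: one contiguous chunk
            X[:N_unit] = D
        else:                            # OFDMA baseline: localized chunk, no DFT spreading
            X[:N_unit] = d
        return np.fft.ifft(X)

    for m in ("ofdma", "localized", "interleaved"):
        worst = max(papr_db(scfdma_symbol(m)) for _ in range(2000))
        print(f"{m:12s} worst-case PAPR over 2000 symbols: {worst:.2f} dB")

Under these assumptions, interleaved mapping after DFT spreading reproduces a repeated single-carrier waveform and therefore yields the lowest PAPR, consistent with the ordering reported for Figure 12.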






is shown in Figure 12 a,b and c utilizing different modulation                                    REFERENCES
schemes. The reduction in PAPR is significant when DFT is
used. For instance, Figure 12(b) where Orthogonal-FDMA,                  [1]    ITU, “World Telecommunication Development Report 2002:
Localized-FDMA and Interleaved-FDMA have the values of                          Reinventing Telecoms”, March 2002.
5.9 dB, 9 dB and 11 dB, respectively. The reduction of PAPR              [2]      Anthony Ng’oma, “Radio-over-Fibre Technology for Broadband
                                                                                Wireless Communication Systems”, June 2005.
in IFDMA utilizing the DFT-spreading technique compared to               [3]      Bader Alhasson, and M. Matin “The challenge of scheduling user
OFDMA without the use of DFT is about 53 percent. Such                            transmissions on the downlink of a long-term evolution (LTE)
reduction is significant in the performance of PAPR. Based on                     cellular communication system”, Proc. SPIE, Vol. 7797, 779719,
the simulation results in Figure 12 we can see that single                        Sep 2010.
                                                                         [4]     H. Atarashi, S. Abeta, and M. Sawahashi, “Variable spreading
carrier frequency division multiple access systems with                          factor orthogonal frequency and code division multiplexing
Interleaved-FDMA and Localized-FDMA perform better than                            (VSF-OFCDM) for broadband packet wireless access,” IEICE
OFDMA in the uplink transmission. Although Interleaved-                          Trans. Commun., vol. E86-B, pp. 291-299, Jan. 2003.
FDMA performs better than OFDMA and LFDMA, LFDMA                         [5]         R. Kimura and F. Adachi, “Comparison of OFDM and multicode
is preferred because assigning subcarriers over the whole band, as IFDMA does, is complicated, while LFDMA does not require the insertion of pilots or guard bands.

VI. CONCLUSION

We have shown the importance of the trade-off between clipping distortion and quantization noise. Our results show that as the clipping threshold increases, clipping distortion decreases at the expense of a higher PAPR and more quantization noise; conversely, as the clipping threshold decreases, PAPR and quantization noise decrease at the cost of clipping distortion. It is therefore important to take this trade-off into account when choosing the number of quantization bits and the clipping threshold. We showed that clipping limits the amplitude to a desired maximum power level, and that the resulting signal suffers from out-of-band radiation caused by the clipping distortion; this radiation is removed by filtering, at the expense of a slight peak re-growth in amplitude. It was also shown that an SC-FDMA system with Interleaved-FDMA or Localized-FDMA performs better than Orthogonal-FDMA in uplink transmission, where transmitter power efficiency is of great importance.

LFDMA and IFDMA result in lower average power values because OFDM and OFDMA map their input bits directly to frequency symbols, whereas LFDMA and IFDMA map their input bits to time symbols. We conclude that single-carrier FDMA is a better choice for uplink transmission in cellular systems. This conclusion is based on its better power efficiency due to low PAPR and on its lower sensitivity to frequency offset, since SC-FDMA has a maximum of two adjacent users.
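To make the clipping trade-off concrete, the following Python sketch (an illustration only, not the authors' implementation; the modulation, oversampling factor and clipping ratio are assumed values) generates one oversampled OFDM symbol, measures its PAPR, and clips its envelope at a configurable multiple of the RMS level:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def clip_envelope(x, clip_ratio):
    """Limit the signal envelope to clip_ratio times its RMS value,
    preserving the phase of every sample."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    magnitude = np.abs(x)
    scale = np.minimum(1.0, clip_ratio * rms / np.maximum(magnitude, 1e-12))
    return x * scale

# One OFDM symbol with 256 QPSK subcarriers, 4x oversampled by zero-padding
# the IFFT input (assumed parameters, for illustration only).
rng = np.random.default_rng(0)
n_sub, oversample = 256, 4
qpsk = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)
spectrum = np.concatenate([qpsk[: n_sub // 2],
                           np.zeros(n_sub * (oversample - 1), dtype=complex),
                           qpsk[n_sub // 2:]])
ofdm_symbol = np.fft.ifft(spectrum) * np.sqrt(spectrum.size)

clipped = clip_envelope(ofdm_symbol, clip_ratio=1.5)
print(f"PAPR before clipping: {papr_db(ofdm_symbol):.2f} dB")
print(f"PAPR after  clipping: {papr_db(clipped):.2f} dB")
```

Lowering the clipping ratio reduces the PAPR further but increases the clipping distortion (and hence the out-of-band radiation that must be filtered away), which is exactly the trade-off discussed above.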
VII. FUTURE WORK

We would like to further investigate the effect of PAPR in MIMO-OFDM systems.

      MC-CDMA in a frequency selective fading channel," IEE Electronics Letters, vol. 39, no. 3, pp. 317-318, Feb. 2003.
[6]   Z. Wang and G. B. Giannakis, "Complex-field coding for OFDM over fading wireless channels," IEEE Trans. Inform. Theory, vol. 49, pp. 707-720, Mar. 2003.
[7]   B. Alhasson, A. Bloul, X. Li, and M. A. Matin, "LTE-advanced MIMO uplink for mobile system," Proc. SPIE, vol. 7797, 77971A, 2010.
[8]   L. Mehedy, M. Bakaul, and A. Nirmalathas, "115.2 Gb/s optical OFDM transmission with 4 bit/s/Hz spectral efficiency using IEEE 802.11a OFDM PHY," in Proc. 14th OptoElectronics and Communications Conference (OECC 2009), July 2009.
[9]   B. Alhasson, A. Bloul, and M. A. Matin, "Dispersion and nonlinear effects in OFDM-RoF system," Proc. SPIE, vol. 7797, 779704, 2010.
[10]  J. Tellado, "Multicarrier transmission with low PAR," Ph.D. dissertation, Stanford Univ., Stanford, CA, 1998.
[11]  Z.-Q. Luo and W. Yu, "An introduction to convex optimization for communications and signal processing," IEEE J. Sel. Areas Commun., vol. 24, no. 8, pp. 1426-1438, Aug. 2006.
[12]  J. Tellado, "Peak to average power reduction for multicarrier modulation," Ph.D. dissertation, Stanford University, Stanford, USA, 2000.
[13]  A. Aggarwal and T. Meng, "Minimizing the peak-to-average power ratio of OFDM signals using convex optimization," IEEE Trans. Signal Process., vol. 54, no. 8, pp. 3099-3110, Aug. 2006.
[14]  Y.-C. Wang and K.-C. Yi, "Convex optimization method for quasiconstant peak-to-average power ratio of OFDM signals," IEEE Signal Process. Lett., vol. 16, no. 6, pp. 509-512, June 2009.
[15]  S. H. Wang and C. P. Li, "A low-complexity PAPR reduction scheme for SFBC MIMO-OFDM systems," IEEE Signal Process. Lett., vol. 16, no. 11, pp. 941-944, Nov. 2009.
[16]  J. Hou, J. Ge, D. Zhai, and J. Li, "Peak-to-average power ratio reduction of OFDM signals with nonlinear companding scheme," IEEE Trans. Broadcast., vol. 56, no. 2, pp. 258-262, Jun. 2010.
[17]  T. Jiang, W. Xiang, P. C. Richardson, D. Qu, and G. Zhu, "On the nonlinear companding transform for reduction in PAPR of MCM," IEEE Trans. Wireless Commun., vol. 6, no. 6, pp. 2017-2021, Jun. 2007.
[18]  S. H. Han and J. H. Lee, "An overview of peak-to-average power ratio reduction techniques for multicarrier transmission," IEEE Wireless Commun., vol. 12, pp. 56-65, Apr. 2005.
[19]  T. A. Wilkinson and A. E. Jones, "Minimization of the peak-to-mean envelope power ratio of multicarrier transmission scheme by block coding," in Proc. IEEE VTC'95, Chicago, vol. 2, pp. 825-829, July 1995.
[20]  M. H. Park, "PAPR reduction in OFDM transmission using Hadamard transform," in Proc. IEEE ICC'00, vol. 1, pp. 430-433, 2000.
[21]  R. W. Bauml, R. F. H. Fischer, and J. B. Huber, "Reducing the peak-to-average power ratio of multicarrier modulation by selective mapping," Electron. Lett., vol. 32, no. 22, pp. 2056-2057, 1996.
[22]  S. H. Muller and J. B. Huber, "A novel peak power reduction scheme for OFDM," in Proc. PIMRC, vol. 3, pp. 1090-1094, 1997.
[23]  X. Li and L. J. Cimini, "Effects of clipping and filtering on the performance of OFDM," IEEE Commun. Lett., vol. 2, no. 5, pp. 131-133, 1998.
[24]  H. G. Myung, J. Lim, and D. J. Goodman, "Single carrier FDMA for uplink wireless transmission," IEEE Veh. Technol. Mag., vol. 1, no. 3, pp. 30-38, 2006.
[25]  H. G. Myung, J. Lim, and D. J. Goodman, "Single carrier FDMA for uplink wireless transmission," IEEE Vehicular Technology Mag., vol. 1, no. 3, pp. 30-38, Sep. 2006.
[26]  H. G. Myung and D. J. Goodman, Single Carrier FDMA. Wiley, 2008.
[27]  Cho, Kim, Yang, and Kang, MIMO-OFDM Wireless Communications with MATLAB. IEEE Press, 2010.
[28]  A. Bloul, S. Mohseni, B. Alhasson, M. Ayad, and M. A. Matin, "Simulation of OFDM technique for wireless communication systems," Proc. SPIE, vol. 7797, 77971B, 2010.
                       AUTHORS PROFILE




Bader Hamad Alhasson is a PhD candidate at the University of Denver. He received a bachelor's degree in Electrical Engineering (EE) in 2003 from the University of Colorado at Denver (UCD) in the United States, and a Master of Science in EE and a Master of Business Administration (MBA) in 2007 from UCD. His primary research interest is in the transmission and reception of radio over fiber (RoF) utilizing OFDM.



Dr. Mohammad Abdul Matin is an Associate Professor of Electrical and Computer Engineering in the School of Engineering and Computer Science, University of Denver. He is a Senior Member of IEEE and a member of SPIE, OSA, ASEE and Sigma Xi. His research interests are in optoelectronic devices (such as sensors and photovoltaics), RoF, URoF, digital, optical and bio-medical signal and image processing, engineering management, and pedagogy in engineering education.

Evolution Prediction of the Aortic Diameter Based on
  the Thrombus Signal from MR Images on Small
           Abdominal Aortic Aneurysms
                                     A. Suhendra1, C.M. Karyati2, A.Muslim3, A.B. Mutiara4
                            Faculty of Computer Science and Information Technology, Gunadarma University
                                             Jl. Margonda Raya No.100, Depok 16424, Indonesia
{adang,csyarah,amuslim,amutiara}@staff.gunadarma.ac.id


Abstract—This paper studies T1- and T2-weighted Magnetic Resonance (MR) images from examinations of patients with Small Abdominal Aortic Aneurysms (SAAA) in which a thrombus is present, in order to determine whether the thrombus signal is correlated with the evolution of the aortic diameter and can therefore be used to predict the risk of rupture of the aortic wall. Data were derived from 16 patients with SAAA; MR images were obtained on a 3T imager (Trio TIM, Siemens Medical Solution, Germany) and comprised anatomical studies, cine-MR images, T1/T2 images, blood flow images, and images acquired after injection of contrast agents. The surface areas of the aorta and of the lumen were determined by manual tracing and used to derive the surface area of the thrombus. The maximum diameter of the aorta was obtained automatically from the manual tracing on the T1 images. The parameters used to study the thrombus signal are the mean, median, standard deviation, skewness and kurtosis. Each parameter is calculated over the thrombus area and normalized with the signal measured in the muscle. All parameters are compared with the evolution of the aortic diameter. We found that 13 out of 16 patients with SAAA have a thrombus, but there is no correlation between the thrombus signal and the maximum diameter (mean r = 0.318, median r = 0.318, skewness r = 0.304) or the evolution of the maximum diameter (mean r = 0.512). We conclude that the mathematical and visual classifications of the thrombus categories agree in 81% of cases, but that the thrombus signal itself cannot be used to predict the evolution of the aortic diameter.

Keywords—Thrombus signal; evolution of aortic diameter; T1- and T2-weighted images; Small Abdominal Aortic Aneurysms.

I. INTRODUCTION

The aorta is the largest artery; it delivers blood from the heart throughout the body. Along its course the blood flow passes through several branches, for example those leading to the arms (subclavian arteries), the head (carotid arteries) and the chest (thoracic aorta), and then, below the diaphragm, the abdomen (abdominal aorta). In the region around the stomach there is further branching, including towards the liver, intestines and kidneys, and finally the branching continues in the direction of the legs (iliac arteries).

Blood is pumped by the heart into the aorta and then flows through the arteries and their ramifications to all parts of the body. Blood pressure refers to the pressure in the arteries that carry blood to all the cells of the body through the finest vessels (capillaries); the blood then returns to the heart through the veins and takes up oxygen in the lungs. Only a brief description of the aorta is given here, since it is discussed further in this study. It is easy to imagine that any damage to the human aorta would result in abnormalities of the blood flow. The anatomy of the aorta and its arteries is shown in Figure 1.

Figure 1. Anatomy of the aorta [1]

Studies of the human aorta have been conducted and have successfully detected abnormalities of the aortic wall, both in the thoracic and in the abdominal aorta [1,2]. The aortic wall is normally very elastic; once it swells, it can no longer shrink back, and it may rupture without it being possible to predict when, which puts the patient's life at risk.

An Abdominal Aortic Aneurysm, also called AAA, is a bulging area in the wall of the aorta caused by an abnormal widening or ballooning to more than 50 percent of the normal diameter. The swelling of the aortic wall can be favoured by age (over 60), male sex (four to five times more frequent than in females), family history (first-degree relatives such as a father or brother), genetic factors, hyperlipidemia (elevated fats in the blood), hypertension (high blood pressure), smoking and diabetes.



Asymptomatic aneurysms may not require surgical intervention until they reach a certain size or are noted to be increasing in size over a certain period of time. The parameters for surgical decisions include, but are not limited to, the following [1,2]:
•   aneurysm size greater than 5 centimeters (about two inches);
•   aneurysm growth rate of around 0.5 centimeters (slightly less than one-fourth inch) over a period of six months to one year;
•   the patient's ability to tolerate the procedure.

II. THROMBUS SIGNAL

Thrombosis refers to the formation of a blood clot (thrombus) in the blood vessels or the cavities of the heart. Abdominal Aortic Aneurysms are often associated with a thrombus (clot). This has been studied and demonstrated by pathological, surgical and clinical examination based on computed tomography (CT), ultrasound imaging, angiography, traditional spin-echo (SE) MRI and cine-MRI. Many methods have been created or modified to prove the existence of an intact thrombus signal in the aorta, but until now, with the disorders that occur in the aorta, it remains difficult to detect or properly evaluate the thrombus signal [2, 3].

Figure 2. Aneurysm with a formation of thrombus [4]

The selection of images for analyzing the thrombus formation is very important. Images are selected from the examinations acquired during relaxation (as shown in Figure 3 for the T1 and T2 images) [5].

This work analyses the T1 and T2 images of the thrombus from the examinations of SAAA patients to determine whether the thrombus signal is correlated with the enlargement of the aortic diameter, and to predict the rupture risk of the aortic wall.

III. MATERIALS AND METHODS

A. Data
Data were obtained from 16 patients with Small Abdominal Aortic Aneurysms (SAAA) who were examined between July 2006 and January 2010. Each patient was examined 1 to 4 times, with an interval of 6 to 12 months between examinations (depending on the patient). MR images were acquired on a 3T imager (Trio TIM, Siemens Medical Solution, Germany).

According to the clinical data, the patients have different characteristics in terms of status (smoking or ex-smoking, elevated blood fats (dyslipidemia), and hypertension), as shown in Table I.

Figure 3. (a) T1 image and (b) T2 image at the level of the Abdominal Aortic Aneurysm

TABLE I. PATIENTS' CHARACTERISTICS

Patient      Sex     Age (years)   Characteristics
Patient 1    Male    65            Smoking
Patient 2    Female  68            Dyslipidemia
Patient 3    Male    62            Smoking, Hypertension, Dyslipidemia
Patient 4    Male    82            Ex-smoking
Patient 5    Male    83            -
Patient 6    Male    59            Ex-smoking
Patient 7    Male    53            -
Patient 8    Male    79            Ex-smoking, Hypertension, Dyslipidemia
Patient 9    Male    77            Ex-smoking, Hypertension, Dyslipidemia
Patient 10   Male    71            Smoking, Dyslipidemia
Patient 11   Female  74            Ex-smoking
Patient 12   Male    69            -
Patient 13   Male    55            Ex-smoking, Hypertension, Dyslipidemia
Patient 14   Male    51            Ex-smoking, Dyslipidemia
Patient 15   Male    73            Ex-smoking, Hypertension, Dyslipidemia
Patient 16   Male    59            Smoking
Figure 5. (a) Anterior-posterior diameter, (b) transversal diameter, (c) maximum diameter

B. Protocol for Small Abdominal Aortic Aneurysms
In this study protocol, images originating from the anatomical study, cine-MR images for 3D/4D modeling, T1/T2 images, blood flow images, and images acquired after injection of contrast agents were used to study the aspects of inflammation. For each patient, the images are located in the same position from one examination to the next.

C. Processing
We used MATLAB software to process the data. The preliminary examination was processed for the predictive aspect, and the final examination was processed as well, since its data contain a more important thrombus, larger areas, and stronger signals. The borders were traced manually to define the aortic surface and the luminal surface, so that Thrombus Surface = Aorta Surface − Luminal Surface (see Figure 4).

In the aortic wall surface calculation, a thrombus is considered present if the thrombus surface area is greater than 30% of the aortic surface area. The diameter of the aorta is obtained by manually tracing the aortic surface. Three kinds of diameter are measured: the anterior-posterior diameter, the transversal diameter and the maximum diameter, as shown in Figure 5.

The muscle signal differs slightly between examinations, therefore we normalized the data using the muscle area.

Figure 4. (a) Manual tracing of the aortic surface, (b) manual tracing of the luminal surface (green line)

Figure 6. (a) T1-weighted image and (b) T2-weighted image after manual tracing, (c) normalization area in the muscle

D. Parameters
The maximum aortic diameter was obtained automatically from the manual tracing on the T1 image of every examination. We then calculated the evolution of the aortic diameter as: evolution (mm/year) = Δ maximum diameter (mm) / Δ examination date (days) × 365 days. Several parameters were used to study the thrombus signal: the mean, the median, the standard deviation, the skewness, which describes the degree of asymmetry of the signal histogram and is given by Σ n_i (x_i − m)³ / (N s³), and the kurtosis, which describes how sharp the peak of the signal histogram is and is given by Σ n_i (x_i − m)⁴ / (N s⁴) − 3, where n_i is the number of pixels of the aorta with value x_i, m is the mean value over the aorta, s is the standard deviation (SD), and N is the total number of pixels [5].

Each parameter is calculated over the thrombus area, and the signal in the muscle is used to normalize the mean, the median and the standard deviation of the signal in the thrombus. These parameters are compared to the evolution of the aortic diameter.
                                                                                    compared to the evolution of the aortic diameter. By using



Using the mean/median/SD signal of the aorta and the normalized mean/median/SD signal of the muscle, the thrombus is classified as follows: homogeneous thrombus if T1 = T2 = low signal; heterogeneous thrombus if T1 = T2 = high signal; and indefinite thrombus if T1 ≠ T2 (low and high signal, or high and low signal).
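A compact way to express this decision rule is the following Python sketch (illustrative only; the function name is ours, and the 0.815 and 0.788 cut-offs for "high" T1 and T2 signal are the threshold values quoted with Figures 9-11):

```python
def classify_thrombus(t1_norm, t2_norm, t1_cutoff=0.815, t2_cutoff=0.788):
    """Classify a thrombus from its normalized T1 and T2 signals.

    A signal is 'high' when it exceeds the cut-off for that weighting,
    otherwise it is 'low'. Homogeneous: both low; heterogeneous: both high;
    indefinite: one low and one high.
    """
    t1_high = t1_norm > t1_cutoff
    t2_high = t2_norm > t2_cutoff
    if t1_high and t2_high:
        return "heterogeneous"
    if not t1_high and not t2_high:
        return "homogeneous"
    return "indefinite"

# Example values from Figure 9 (patient 13): T1 = 0.391, T2 = 0.327 -> homogeneous
print(classify_thrombus(0.391, 0.327))
```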
IV. RESULTS AND DISCUSSION

We found that 13 of the 16 patients with SAAA have a thrombus. Figures 7 and 8 show sample T1 images illustrating the absence and the presence of a thrombus in SAAA.

Figure 7. Thrombus surface: 243 mm² (11.6%), without thrombus

Figure 8. Thrombus surface: 1026 mm² (48.4%), with thrombus

Based on the height distribution of the thrombus signal, there were 3 patients without thrombus, 5 patients with homogeneous thrombus, 7 with heterogeneous thrombus and 1 with indefinite thrombus. Figures 9, 10 and 11 show the categories of thrombus presence. Comparing the parameter-based classification with the visual one, there were 3 differences in the resulting thrombus categories, as shown in Table II (patients 8, 11 and 12). This indicates that 81.25% of the thrombus categories determined using the parameters agree with the categories based on visualization.

TABLE II. COMPARISON WITH VISUALIZATION CATEGORIES

Patient      Thrombus category (parameters)   Thrombus category (visualization)
Patient 1    Without thrombus                 Without thrombus
Patient 2    Without thrombus                 Without thrombus
Patient 3    Homogeneous                      Homogeneous
Patient 4    Without thrombus                 Without thrombus
Patient 5    Heterogeneous                    Heterogeneous
Patient 6    Indefinite                       Indefinite
Patient 7    Heterogeneous                    Heterogeneous
Patient 8    Homogeneous                      Heterogeneous
Patient 9    Heterogeneous                    Heterogeneous
Patient 10   Heterogeneous                    Heterogeneous
Patient 11   Homogeneous                      Heterogeneous
Patient 12   Homogeneous                      Heterogeneous
Patient 13   Homogeneous                      Homogeneous
Patient 14   Heterogeneous                    Heterogeneous
Patient 15   Heterogeneous                    Heterogeneous
Patient 16   Homogeneous                      Homogeneous

Figure 9. Patient 13, male, 55, ex-smoking, hypertension, dyslipidemia, Δ max diameter = 2.80 mm/year, 40% surface thrombus, homogeneous T1 = T2 = low, (a) T1 = 0.391 < 0.815, (b) T2 = 0.327 < 0.788




Figure 10. Patient 5, male, 83, Δ max diameter = 2.27 mm/year, 90.85% surface thrombus, heterogeneous T1 = T2 = high, (a) T1 = 2.675 > 0.815, (b) T2 = 0.881 > 0.788

Figure 11. Patient 6, male, 59, ex-smoking, Δ max diameter = 1.33 mm/year, 6.93% surface thrombus, indefinite T1 = low ≠ T2 = high, (a) T1 = 0.691 < 0.815, (b) T2 = 0.853 > 0.788

All parameters used to generate the thrombus categories were then compared with the evolution of the aortic diameter. For most parameters there is no good correlation, which is indicated by the many occurrences of r < 0.3 (r being the correlation coefficient of the linear regression). However, a few parameters do show a linear correlation between the thrombus signal and the maximum diameter (mean r = 0.314, median r = 0.318, skewness r = 0.304) or between the thrombus signal and the evolution of the maximum diameter (mean r = 0.512), as shown in Tables III and IV. From those tables, there are many values of R < 0.3 (no good correlation) and only a few values of R > 0.3 indicating a correlation between the thrombus signal and the evolution of the aortic diameter.

TABLE III. PARAMETERS VS MAXIMUM DIAMETER

Comparison             T1: R²   R      Equation              T2: R²   R      Equation
Mean/Mean muscle       0.099    0.314  y = 0.030x - 0.325    0.010    0.098  y = 0.005x + 0.485
Mean/Median muscle     0.099    0.314  y = 0.030x - 0.329    0.045    0.212  y = 0.012x + 0.253
Mean/SD muscle         0.006    0.078  y = 0.041x + 4.002    0.071    0.266  y = 0.145x + 6.542
Median/Mean muscle     0.101    0.318  y = 0.031x - 0.364    0.009    0.097  y = 0.005x + 0.471
Median/Median muscle   0.101    0.318  y = 0.031x - 0.368    0.045    0.212  y = 0.011x + 0.249
Median/SD muscle       0.008    0.089  y = 0.045x + 3.774    0.063    0.252  y = 0.136x + 6.483
SD/Mean muscle         0.055    0.234  y = 0.005x + 0.038    0.004    0.063  y = -0.002x + 0.340
SD/Median muscle       0.055    0.234  y = 0.005x + 0.0383   0.005    0.069  y = 0.005x + 0.176
SD/SD muscle           0.000    0.001  y = -0.000x + 1.550   0.000    0.005  y = 0.001x + 4.873
Skewness               0.093    0.304  y = -0.011x + 0.549   0.015    0.121  y = -0.007x + 0.669
Kurtosis               0.000    0.010  y = -0.000x + 2.321   0.040    0.200  y = -0.025x + 3.649

TABLE IV. PARAMETERS VS Δ MAXIMUM DIAMETER

Comparison             T1: R²   R      Equation              T2: R²   R      Equation
Mean/Mean muscle       0.262    0.512  y = 0.024x + 0.931    0.029    0.171  y = -0.004x + 0.717
Mean/Median muscle     0.001    0.028  y = -0.010x + 1.125   0.019    0.134  y = -0.024x + 0.843
Mean/SD muscle         0.031    0.176  y = -0.101x + 5.441   0.014    0.112  y = -0.160x + 12.953
Median/Mean muscle     0.000    0.020  y = -0.007x + 1.109   0.019    0.137  y = -0.018x + 0.705
Median/Median muscle   0.001    0.022  y = -0.008x + 1.123   0.019    0.137  y = -0.022x + 0.798
Median/SD muscle       0.024    0.154  y = -0.091x + 5.391   0.013    0.114  y = -0.156x + 12.437
SD/Mean muscle         0.002    0.044  y = 0.003x + 0.250    0.022    0.148  y = -0.011x + 0.327
SD/Median muscle       0.002    0.040  y = 0.002x + 0.252    0.027    0.163  y = -0.013x + 0.361
SD/SD muscle           0.005    0.070  y = 0.010x + 1.266    0.010    0.102  y = -0.060x + 5.277
Skewness               0.046    0.215  y = -0.021x + 0.178   0.000    0.017  y = 0.003x + 0.479
Kurtosis               0.021    0.146  y = -0.016x + 2.367   0.000    0.005  y = 0.002x + 2.855
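The regression lines and correlation values summarized in Tables III and IV can be reproduced for any pair of series with a short least-squares fit; the sketch below is illustrative only (NumPy-based, with assumed variable names, and is not the authors' MATLAB code):

```python
import numpy as np

def linear_fit(parameter_values, diameter_values):
    """Least-squares line y = a*x + b, Pearson correlation r and R²
    between a thrombus-signal parameter and an aortic-diameter measure."""
    x = np.asarray(parameter_values, dtype=float)
    y = np.asarray(diameter_values, dtype=float)
    a, b = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return a, b, r, r ** 2
```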



V. CONCLUSION

The agreement between the parameter-based thrombus categories and the categories obtained by visualization is 81.25%. For the evolution of the aortic diameter, we found no good correlation with the thrombus signal in small AAA (many values of r < 0.3), whichever parameter was used; the thrombus signal alone therefore cannot be used as a parameter to predict the evolution of the aortic diameter. The relationship between the flow data, the thrombus signal and the biological findings is still being studied. For future research we will compare other parameters with the evolution of the aortic diameter, in particular the blood flow velocity obtained from 3D/4D modeling (laminar and turbulent flow, maximum velocity, radial velocity, and shear stress); this comparison with the evolution of the maximum diameter is currently being performed.

ACKNOWLEDGEMENTS

This research was made possible by the assistance of the MRI and Nuclear Medicine Department at the Centre Hospitalier Universitaire (CHU) de Bocage in Dijon, France. More specifically, the authors want to thank Nicolas Abello, who was very helpful with the procurement of the data. A.B.M. also gratefully acknowledges the financial support of the Gunadarma Education Foundation during this research.

REFERENCES

[1]  Health Library, "Aneurysm Overview," New York-Presbyterian Hospital, 30 November 2008.
[2]  S. Ito, K. Akutsu, Y. Tamori, S. Sakamoto, T. Yoshimuta, H. Hashimoto, and S. Takeshita, "Differences in atherosclerotic profiles between patients with thoracic and abdominal aortic aneurysms," The American Journal of Cardiology, vol. 101, no. 5, pp. 696-699, 1 March 2008.
[3]  T. Honda, M. Hamada, Y. Matsumoto, H. Matsuoka, and K. Hiwada, "Diagnosis of thrombus and blood flow in aortic aneurysms by use of tagging cine MRI," International Journal of Angiology, 6:203-206, 1997.
[4]  M. Xavier, A. Lalande, P. M. Walker, C. Boichot, A. Cochet, O. Bouchot, E. Steinmetz, L. Legrand, and F. Brunotte, "Dynamic 4D blood flow representation in the aorta and analysis from cine-MRI in patients," Computers in Cardiology, 34:375-378, 2007.
[5]  M. Castrucci et al., "Mural thrombi in abdominal aortic aneurysms: MR imaging characterization - useful before endovascular treatment?," RSNA, 197, Italy, October 1995.
[6]  C. M. Kramer, "Magnetic resonance imaging identifies the fibrous cap in atherosclerotic abdominal aortic aneurysm," Circulation, 109:1016-1021, 2004.
[7]  E. M. Isselbacher, "Thoracic and abdominal aortic aneurysms," Circulation, 2005.
[8]  M. Coutard, "Thrombus versus wall biological activities in experimental aortic aneurysms," Journal of Vascular Research, 2009.
[9]  S. Matsuoka, "Quantification of thin-section CT lung attenuation in acute pulmonary embolism: correlations with arterial blood gas levels and CT angiography," American Roentgen Ray Society, 186:1272-1279, May 2006.

AUTHORS PROFILE

A. Suhendra is a lecturer in Informatics Engineering at the Faculty of Industrial Engineering, Gunadarma University.
C. M. Karyati graduated from the Master Program in Information Systems, Gunadarma University, in 1998. She is now a Ph.D. student in the Groupe Imagerie Médicale, Le2i, UMR CNRS 5158, Faculté de Médecine, Université de Bourgogne, Dijon, France.
A. Muslim graduated from the Master Program in Information Systems, Gunadarma University, in 1997. He is a Ph.D. student in the Groupe Database, Système d'Information et Image, Le2i, UMR CNRS 5158, Faculté des Sciences et de l'Ingénieur, Université de Bourgogne, Dijon, France.
A. B. Mutiara is a Professor of Computer Science at the Faculty of Computer Science and Information Technology, Gunadarma University.





     Empirical Evaluation of the Shaped Variable Bit Rate Algorithm for
                            Video Transmission
                             A. Suki M. Arif, Suhaidi Hassan, Osman Ghazali, Mohammed M. Kadhum
                                  InterNetWorks Research Group, UUM College of Arts and Sciences,
                                                      Universiti Utara Malaysia,
                                                06010 UUM Sintok, Kedah, Malaysia
                                          {suki1207, suhaidi, osman, kadhum}@uum.edu.my


Abstract—Due to the surge of media traffic over the existing best-effort Internet, network congestion is projected to worsen. Hence, the video transmission rate needs to be regulated to adapt to the network condition or constraint. Rate control is therefore essential in video transmission to obtain an acceptable visual quality under a given rate constraint. One of the novel rate-control algorithms, called Shaped VBR (SVBR), was created by Hamdi et al. SVBR is a video data rate shaping scheme for real-time video transmission applications. It is a preventive traffic control which admits VBR-coded video traffic directly into the network while regulating unpredictably large bursty traffic by means of a leaky bucket algorithm. The SVBR algorithm uses prediction to estimate the size of the next Group of Pictures (GoP) of video data and to determine the next appropriate quantization parameter value. The algorithm has been utilized by many researchers and implemented in many network scenarios. However, despite its novel design for real-time use, the analytical and empirical evaluation in this paper reveals some obvious weaknesses: the occurrence of a sudden sharp decrease in the data rate, the occurrence of a sudden bucket overflow, the existence of a low data rate with a low bucket fullness level, and the generation of a cyclical negative fluctuation.

Keywords—rate control; shaped VBR; video transmission

I. INTRODUCTION

Recent years have witnessed an explosive growth of the Internet and an increasing demand for multimedia information services. Multimedia-based applications via the Internet have received tremendous attention. In spite of the growing networking capabilities of the modern Internet and the sophisticated techniques used by today's video coding, transmitting video data over the Internet is still a very challenging task, as stated in [1].

Rate control is always regarded as an essential element of typical video coding, as stated in [2], [3], [4], [5], [6]. The general rate-control architecture is illustrated in Figure 1. Its main task is to regulate the coded video bits to meet a suitable target rate. Video coding refers to the process of reducing the quantity of data used to represent a sequence of video pictures or frames. A number of video coding standards have been defined, such as MPEG-2, H.263, and MPEG-4. Although rate control is not always a part of video coding, it plays a very important role in shaping the produced video data traffic [7], [8], [9], [10]. Since it is not a fixed part of video coding, it leaves designers the flexibility to develop a suitable scheme for specific applications [6]. Thus, any newly created algorithm can be applied to many video coding standards.

Most video applications employ video rate controllers in the form of either Variable Bit Rate (VBR) or Constant Bit Rate (CBR). The advantage of VBR is that it produces a consistent visual quality, while CBR generates a constant data rate for the network interface.

However, the bursty nature of VBR causes grave problems to networks in terms of significant variation of the network traffic, jitter and delays. Equally, the problems with CBR are the additional delay due to buffering and a visual quality that tends to vary with the video content.

There is an obvious need for an alternative solution that takes advantage of both CBR and VBR while eliminating the weaknesses of both rate controllers. An ideal rate controller should deliver a high and consistent visual quality, keep the video data rate within the permissible bandwidth level, and introduce little delay. Delay can arise either from the introduction of an additional buffer or from a computationally complex algorithm.

It is therefore useful to encode the video with open-loop VBR as much as possible while controlling traffic admission into the network whenever the permissible level is exceeded. Doing so helps maintain the consistent visual quality of VBR and reaches a reasonable compromise of adaptive quality when the network transmission rate degrades. In addition, it avoids the unpredictable large bursty rate variations of plain VBR, and it does so without the rigidity and systematic coding delay of CBR coders or an intermediate CBR buffer.

SVBR was introduced by Hamdi et al. [11] in 1997. The main idea behind SVBR is to limit the open-loop bursts while still allowing open-loop VBR coding, provided the traffic stays within a permitted constraint. To achieve this objective, SVBR manipulates a leaky bucket algorithm to perform admission control. The leaky bucket used in SVBR can be considered an imaginary buffer, so no extra delay is introduced. Moreover, Hamdi et al. assumed that for a fast-moving scene with a complex image structure the scene quality can be slightly reduced, since human eyes do not have enough time to notice the image details. In addition, Hamdi et al. suggested applying the algorithm at Group of Pictures
(GoP) granularity, which consequently yields a less complex algorithm and lower delay.

Figure 1: General rate control architecture

In this paper, we discuss our extensive analysis of the performance of the SVBR algorithm in order to identify its strengths and weaknesses. The paper is organized as follows. We describe related work on the SVBR algorithm in the next section. In Section III, we describe the SVBR algorithm comprehensively. In Section IV, we present our experiment settings for evaluating the SVBR algorithm. In Section V, we elaborate on and analyze the experimental results extensively. We summarize the paper in Section VI.

II. RELATED WORKS

SVBR can be regarded as an alternative solution that takes advantage of both CBR and VBR while eliminating the weaknesses of both rate controllers. Since its introduction, SVBR has been utilized by several works; however, these works still operate under the limitations and weaknesses of SVBR.

A. Evalvid-RA and RA-SVBR

One of the recent works was performed by Lie and Klaue [12], [13], who created Evalvid-RA, a tool-set for rate-adaptive video performance evaluation in ns-2. Evalvid-RA builds on modifications to the original software of Klaue et al. [14] and Ke et al. [15]. The main modifications to EvalVid were that the re-assembly post-processing program had to take multiple MPEG-4 source files into account, together with modifications to the ns-2 interface and the associated VBR rate controller based on SVBR by Hamdi et al. Thus, the engine of the video rate adaptation in Evalvid-RA comes from the SVBR algorithm.

Lie and Klaue claimed that Evalvid-RA is a truly rate-adaptive video solution. It generates real rate-adaptive MPEG-4 streaming traffic, using the quantizer parameter to adjust the sending rate. A feedback-based VBR rate controller is then used at simulation time, supporting TFRC and a proprietary congestion control system named P-AQM. By simulating TFRC and P-AQM in ns-2, they demonstrated Evalvid-RA's capability of performing close-to-true rate-adaptive codec operation with low complexity.

They claimed that their algorithm is based on the SVBR algorithm with several enhancements. Besides implementing SVBR in the Evalvid framework to obtain Evalvid-RA, they added support for network feedback systems and changed some parameters of SVBR's leaky-bucket algorithm. Instead of using the SVBR leaky-bucket equation (1), they reformulated it as (2):

    X(k + 1) = min{b, max{0, X(k) - r} + R(k)}        (1)

    X(k + 1) = min{b', max{0, X(k) - r'} + R(k)}      (2)

where R(k) is the video data input of GoP k, r is the leak rate, b is the bucket size, and X(k) is the level of bucket fullness after transmitting the data at time k. In (2), r' < r and b' < b, where r' and b' are dynamically adjusted based on network feedback: r' = r_old * i/G + r_new * (G - i)/G and b' = b_old * i/G + b_new * (G - i)/G. When there is no network feedback during a GoP, r' = r_old = r_new and b' = b_old = b_new. Here, r' and b' are Evalvid-RA's adaptive leaky-bucket rate and bucket size used during GoP k, r_old and r_new are the previous and current network updates of the rate, G is the GoP size, and i is the time index of the network feedback event, counted as its position within the active GoP of G frames.
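To make the bucket dynamics of (1) and (2) concrete, the following Python sketch (an illustration under our own naming, not the Evalvid-RA source code; all numeric values are arbitrary) updates the bucket fullness once per GoP and interpolates the adaptive rate and bucket size between the previous and the most recent feedback values:

```python
def bucket_update(x_k, gop_bits, leak_rate, bucket_size):
    """Leaky-bucket fullness after GoP k, per equation (1):
    X(k+1) = min{b, max{0, X(k) - r} + R(k)}."""
    return min(bucket_size, max(0.0, x_k - leak_rate) + gop_bits)

def adaptive_parameters(r_old, r_new, b_old, b_new, i, gop_size):
    """Evalvid-RA style interpolation of the leak rate and bucket size:
    r' = r_old*i/G + r_new*(G - i)/G (and likewise for b'), where i is the
    position of the feedback event inside the active GoP of G frames."""
    w = i / gop_size
    r_prime = r_old * w + r_new * (1.0 - w)
    b_prime = b_old * w + b_new * (1.0 - w)
    return r_prime, b_prime

# Example: trace the bucket level over a few GoPs (illustrative numbers,
# expressed in bits per GoP).
leak_rate, bucket_size = 40_000.0, 100_000.0
gop_sizes = [35_000.0, 80_000.0, 20_000.0, 60_000.0]
x = 0.0
for k, r_k in enumerate(gop_sizes):
    x = bucket_update(x, r_k, leak_rate, bucket_size)
    print(f"GoP {k}: bucket fullness = {x:.0f} bits")
```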
What should be stressed here is that although the changes made in Evalvid-RA may seem large, the core of the SVBR algorithm is retained. Since the changes are limited to the SVBR leaky-bucket parameters, such as r' < r and b' < b, all the weaknesses of SVBR are inherited.

B. Other Recent SVBR Related Works

Talaat et al. [16] investigated the effect of incorporating TFRC on the peak signal-to-noise ratio (PSNR) of video transmitted over the Internet in a simulated environment. They found that TFRC performed slightly better on slow-motion videos than on medium-motion videos, which in turn performed better than on high-motion videos. In this work they actually deployed Evalvid-RA in their investigation, without making any changes to the core of the SVBR algorithm.




    Another work can be found in [17], where a power management mechanism for wireless video transmission using the TFRC protocol is proposed; it takes into account feedback about the received video quality and tries to intelligently adapt the transmitting power accordingly. The purpose of the mechanism is to utilize TFRC feedback and thus achieve a beneficial balance between the power consumption and the received video quality.

    The researchers claimed that they implemented a module consisting of the logic of the proposed mechanism in the Evalvid-RA environment. The module that implements the TFRC protocol was also changed so that, they claimed, it can provide information about packet losses to their mechanism. The mechanism calculates the power needed to improve the PSNR, and this information is then passed to the modified wireless physical-layer module, which is able to increase or decrease the power accordingly. However, as stated previously, the fundamental contributor to the weaknesses in SVBR/Evalvid-RA is the estimation and prediction used in generating the data rate, and the work in [17] did not change anything in this part.

    Another work was done by Bouras et al. in [18], [19], where they performed a performance evaluation of MPEG-4 video transmission with their proposed multicast protocols, namely the Adaptive Smooth Simulcast Protocol (ASSP) and the Adaptive Smooth Multicast Protocol (ASMP). The features of their protocols include adaptive scalability to large sets of receivers, TCP-friendly behavior, high bandwidth utilization, and smooth transmission rates, which are suitable for multimedia applications. They evaluated the performance of their protocols under an integrated simulation environment which extends Evalvid-RA to the multicast domain with the use of the Real-time Transport Protocol (RTP) and the Real-Time Transport Control Protocol (RTCP). Simulations conducted under that environment combine network-centric measurements with video quality metrics. They claimed that the "joint" evaluation process provides a better understanding of the benefits and limitations of any proposed protocol for multimedia data transmission.

    They called their tool-set Multi-Evalvid-RA; it provides all the necessary tools to perform simulation studies and assess the video quality by using both network-related metrics and video quality measurements. This is because Evalvid-RA does not support multicast transmission, which is necessary for experiments and simulations with the RTP/RTCP protocols. Therefore, they further extended the functionality of Evalvid-RA by adding code to exploit the sender and receiver RTCP reports.

    They used Evalvid-RA to implement media rate control based on traffic feedback, taking advantage of RTCP's Sender Report (SR) and Receiver Report (RR). The innovation they claimed is the calculation of smooth transmission rates, which is performed by receivers and is based on RTCP reports. In such a way, the oscillations are reduced. Another important attribute was the long-term TCP-friendliness. Although this work differs from the others in implementing SVBR in a multicast environment instead of unicast, and makes some adjustments to the data rate, it does not make any adjustment to the estimation approach in SVBR.

             III.     SVBR ALGORITHM AND PRINCIPLES

A. SVBR Fundamentals

    SVBR revolves around several principles. First, the algorithm works at GoP granularity; thus, it has low complexity, which leads to less delay [12]. Therefore, it is much simpler than the one employed in the CBR coder, which works on macroblock-by-macroblock variations [11].

  1) Controlling Video Data Input: The second principle of the SVBR algorithm is that it uses a leaky-bucket algorithm to avoid an excessive burst of data into the network. By using the leaky-bucket algorithm, the SVBR mechanism can control the video data admission into the network. Thus, it avoids data loss at the network interface. This principle can be written as follows:

                  R(k) < r + (b - X(k))                                (3)

    The relationships of all the variables are illustrated in Figure 2. The (b - X(k)) term in (3) is the space in the bucket that can still accommodate more video data, whereas r is the data that has leaked from the bucket and been sent to the network interface. Thus, the blank space available in the bucket is ((b - X(k)) + r). Therefore, (3) restricts the video data input so that the bucket will not overflow. Consequently, no drops will occur.

    In order to realize the above-mentioned principle, the video bit allocation R(k) should be controlled so that X(k) does not exceed b. For that purpose, Hamdi et al. [11] proposed (1). In the equation, they defined X(k) as the bucket fullness level at the beginning of the k-th GoP transmission (before transmitting the k-th GoP data). Thus, X(k+1) can be regarded as the bucket fullness level after transmitting the k-th GoP data. Equation (1) restricts the bucket fullness level as in (4).

              Figure 2: Restrict input in the leaky-bucket algorithm
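    As a minimal sketch of this admission principle (with illustrative names, not the authors' implementation), the constraint in (3) can be checked as follows:

```python
# Illustrative check of the admission constraint (3): a GoP allocation R_k is
# admissible when it fits the leak r plus the remaining bucket space (b - X_k).

def max_admissible_input(X_k, r, b):
    """Largest GoP bit allocation that keeps the bucket from overflowing."""
    return r + (b - X_k)

def is_admissible(R_k, X_k, r, b):
    """True when R_k < r + (b - X_k), i.e. constraint (3) holds."""
    return R_k < max_admissible_input(X_k, r, b)
```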
                  0 < X(k + 1) < b                                     (4)

    Here, (4) holds because the expression max{0, (X(k) - r)} in (1) always produces a non-negative result, even if (X(k) - r) is negative. And even though the expression (max{0, (X(k) - r)} + R(k)) might produce a value higher than b, the expression min{b, (max{0, (X(k) - r)} + R(k))} caps the result at b.

  2) Allowing VBR Coding when the Network Permits: This principle can be stated as follows. As noted in [11], a video sequence with reasonable activity and duration can be coded at the normal VBR rate, while an excessively long and/or active sequence is "truncated" and its bit rate is reduced to r. This means that for video sequences that conform to the traffic contract, the shaping algorithm behaves like normal VBR. On the other hand, during overload periods (those where the video sequence does not conform to the traffic contract), the algorithm aims to bring the rate down to r. During these periods, image quality may be reduced to that of CBR coding. However, because network resources are dimensioned based on the leaky-bucket conformance, this shaping avoids data loss. Thus, only harmful sequences are shaped.

    This principle is depicted in Figure 3 and Figure 4, where the SVBR data rates are shown as red dotted lines. Here, SVBR uses the bucket fullness level as a conditional parameter to address this principle. When the bucket is empty, it indicates a video sequence with reasonable activity, whereas a full bucket shows that an active video sequence is being processed.

              Figure 3: SVBR shaping principle when bucket empty

              Figure 4: SVBR shaping principle when bucket full

    This principle can easily be seen in region II of Figure 3. When the bucket is empty, SVBR uses the VBR data rate. Thus, the higher quality of VBR in region II is maintained without bucket overflow. On the contrary, the video data rate is decreased to the CBR rate when an excessively active sequence occurs, as illustrated in region II of Figure 4. Here, SVBR compromises on high visual quality since the bucket is already full; by following the VBR data rate, SVBR would risk overflowing the bucket.

    In the case of a low VBR data rate (lower than the CBR rate), as seen in regions I and III of Figure 4, SVBR applies the normal VBR data rate. This will not make the bucket overflow, since the leak rate is greater than the input rate. However, in a similar case but with an empty bucket, SVBR employs the CBR rate. This increases the video rate, which leads to better visual quality without the risk of bucket overflow or underflow.

    However, in the actual implementation, the bucket is rarely completely empty or full. Thus, by using a linear calculation, which will be described later, the SVBR rate lies between the CBR and VBR data rates.

B. Determining Bit Rate Allocation

    SVBR uses estimation and prediction in calculating the next Group of Pictures (GoP) video bit rate allocation, that is, in determining R(k+1). The size of R(k) is very much related to the QP. SVBR also uses estimation to determine the next QP value. The notation Q(k) is used to represent the quantization parameter value used in the k-th GoP, and Rest(k+1) is used to represent the estimated data rate.

    As described in Subsection III-A2, the SVBR data rate lies between the CBR and VBR data rates due to the use of a linear calculation. Based on the bucket fullness level, the SVBR data rate can be either close to CBR or close to VBR. For that purpose, SVBR uses (5) and (6):

    if Ropen(k + 1) > r,
          Rest(k + 1) = (1 - x) · Ropen(k + 1) + x · r                 (5)

    if Ropen(k + 1) ≤ r,
          Rest(k + 1) = x · Ropen(k + 1) + (1 - x) · r                 (6)

    Here, Ropen(k + 1) is the estimated open-loop (VBR) data rate for GoP (k + 1). Since SVBR is targeted at real-time video applications, the next GoP data is not known in advance. Thus, Ropen(k + 1) is needed in order to calculate R(k + 1). For that purpose, SVBR uses (7) as a prediction or estimation of Ropen(k + 1).
                                            Rk   Qk                            allocation), suitable QP value should be determined. To
                       Ropenk 1                                     
                                                                                     calculate the next GoP QP value, which is Q(k+1), SVBR
                                                  q
                                                                                     requires R(k), q and x. The relationship of all variables with the
                                                                                     SVBR rate control block diagram is depicted in Figure 5.
    It is written like that in order to indicate that SVBR will use
quantization parameter q which is used for the generation of                             The main idea here is as follows, if the current GoP
VBR data rate as its base rate. q is any suitable value that will                    produces high bit rates data (higher than r) and the bucket
be fixed by user in the beginning of the operation.                                  fullness level is high as well (more than half of the bucket), a
                                                                                     higher QP value for the next GoP should be generated. Higher
   Additionally, x is a simple function to calculate the ratio of                    QP value means lower data rate, R(k+1), which is exactly what
bucket fullness level. The calculation is shown in (8);                              SVBR is supposed to produce. When the bucket is nearer to
                                     X (k  1)                                       full level and VBR video sequence is active (high rate), the
                               x                                              next data rate should be lower. Thus, the bucket does not tend
                                        b                                            to full or overflow. The opposite will be happened if the current
     As described previously, the purpose of the bucket fullness                     data rate is low. The QP value will be reduced, which will
function is to determine either it should be closer to CBR (r) or                    produce higher data rate for the next transmission.
SVBR (Ropen(k + 1)). If the bucket fullness is high (x>0.5) and
Ropen(k + 1) (refer to Region II of Figure 4), the SVBR data rate                    D. Implications of the Design
should be closer to r. This is reflected well in (5); higher x                           There are many implications from the original principles of
value, increases r value and comparatively will decrease                             the SVBR design. The implications are inherited from the way
Ropen(k + 1) value by using expression (1-x). On the other hand,                     SVBR determines (predicts or estimates) the suitable bit rate
if the bucket fullness is low (x>0.5) and Ropen(k + 1)) ≤ r (refer                   allocation and the QP value. From this estimation, the QP value
to Region I and III of Figure 4), the SVBR data rate is closer to                    will be fed into the video encoder (refer to Figure 5) to generate
Ropen(k + 1). This condition is also reflected in (5). All the                       the real encoded video. The data rate for the encoded video
combinations of bucket fullness level, Ropen(k + 1) is greater or                    might be very much differ from the earlier data rate estimation,
lower than r, and either SVBR is closer to r or Ropen(k + 1) are                     when estimating the suitable bit rate allocation (Rest(k+1)).
shown in Table I.
                                                                                         Another big implication is on the encoded video data rate.
   From (7), it is clear that the next GoP Ropen is calculated                       Since the estimated QP value is used to encode the video, and
based on previous R(k) and previous QP values.                                       the next original video data rate is much different from the
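    A minimal sketch of this estimation step (with illustrative function and variable names, not the authors' code), combining (5) through (8), might look like:

```python
# Sketch of the SVBR bit-rate estimation: predict the open-loop rate (7),
# compute the bucket fullness ratio (8), then blend towards r or Ropen
# according to (5)/(6).

def estimate_next_rate(R_k, Q_k, X_next, q, r, b):
    """Return (Ropen(k+1), x, Rest(k+1))."""
    R_open = (R_k * Q_k) / q      # (7): open-loop (VBR) rate prediction
    x = X_next / b                # (8): bucket fullness ratio
    if R_open > r:                # (5): active sequence
        R_est = (1 - x) * R_open + x * r
    else:                         # (6): quiet sequence
        R_est = x * R_open + (1 - x) * r
    return R_open, x, R_est
```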
C. Determining QP Value

    In determining the next QP value, which is vital in producing the suitable bit rate allocation for the next GoP data, the following equations are used:

    if Ropen(k + 1) > r,
          Q(k + 1) = (q × Ropen(k + 1)) / ((1 - x) · Ropen(k + 1) + x · r)          (9)

    if Ropen(k + 1) ≤ r,
          Q(k + 1) = (q × Ropen(k + 1)) / (x · Ropen(k + 1) + (1 - x) · r)          (10)

    As mentioned earlier, the QP value is used by the video encoder to encode the video data. In the context of SVBR, the QP value produces R(k). To obtain a certain R(k) value (bit rate allocation), a suitable QP value should be determined. To calculate the next GoP QP value, Q(k+1), SVBR requires R(k), q and x. The relationship of all the variables in the SVBR rate control block diagram is depicted in Figure 5.

              Figure 5: Rate control block diagram with SVBR

    The main idea here is as follows: if the current GoP produces high-bit-rate data (higher than r) and the bucket fullness level is high as well (more than half of the bucket), a higher QP value for the next GoP should be generated. A higher QP value means a lower data rate, R(k+1), which is exactly what SVBR is supposed to produce. When the bucket is near the full level and the VBR video sequence is active (high rate), the next data rate should be lower; thus, the bucket does not tend to fill up or overflow. The opposite happens if the current data rate is low: the QP value will be reduced, which produces a higher data rate for the next transmission.
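    A correspondingly small sketch of the QP update in (9) and (10) (again with assumed names; the rounding to an integer QP mirrors the worked examples in Section V):

```python
# Sketch of the QP update in (9)/(10); the result is rounded to the nearest
# integer QP before being passed to the encoder.

def next_qp(R_open, x, q, r):
    """Quantization parameter Q(k+1) for the next GoP."""
    if R_open > r:
        denom = (1 - x) * R_open + x * r     # (9)
    else:
        denom = x * R_open + (1 - x) * r     # (10)
    return (q * R_open) / denom

# e.g. Q_next = max(1, round(next_qp(R_open, x, q, r)))
```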
D. Implications of the Design

    There are many implications of the original principles of the SVBR design. The implications are inherited from the way SVBR determines (predicts or estimates) the suitable bit rate allocation and the QP value. From this estimation, the QP value is fed into the video encoder (refer to Figure 5) to generate the real encoded video. The data rate of the encoded video might differ very much from the earlier data rate estimation made when estimating the suitable bit rate allocation (Rest(k+1)).

    Another big implication is on the encoded video data rate. Since the estimated QP value is used to encode the video, and the next original video data rate is much different from the previous one, this might create another undesired relationship.

E. SVBR Algorithm Flow

    The important parts of the SVBR algorithm can be illustrated in flow chart form, as shown in Figure 6; a minimal sketch of the overall loop is given after the notation list below. Following is an explanation of the notations, commands and variables used:

              Figure 6: SVBR algorithm flow chart

   •  X(0): the initialization bucket fullness level at GoP-0, which is set to half of the bucket.
   •  For k: to perform looping in order to process all video sequences from the first GoP until the last GoP.
   •  Transmit data: to transmit each GoP of data. The real implementation in the simulation package is much more complex, because the simulation package imitates real network behaviour almost exactly.
   •  X(k+1): after the transmission, the virtual bucket fullness is calculated.
   •  e: represents x, the ratio of the bucket fullness level to the bucket size.
   •  Calculate Ropen(k+1): to calculate the next GoP VBR data. As described previously, SVBR uses prediction and approximation in calculating the next GoP VBR data.
   •  Ropen(k+1) > r: tests whether the VBR data rate is higher than r. From that test, SVBR determines whether the next GoP data rate is closer to Ropen or to r.
   •  Q: represents Q(k+1), the quantization parameter for the next GoP video data.
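    The following is the promised sketch of the overall per-GoP loop in Figure 6 (illustrative Python with assumed names; the real simulation package is far more detailed, and in practice R(k) is produced by the encoder from Q(k) rather than taken from a list):

```python
# Minimal per-GoP SVBR control loop following Figure 6 and equations
# (1), (7), (8), (9), (10). gop_sizes stands in for the encoder output R(k).

def svbr_loop(gop_sizes, q, r, b):
    X = b / 2.0                      # X(0): bucket initialised to half full
    Q = q                            # QP used for the first GoP
    qp_schedule = []
    for R_k in gop_sizes:            # "For k": process every GoP
        X = min(b, max(0.0, X - r) + R_k)    # transmit data, update bucket (1)
        x = X / b                            # e = x, bucket fullness ratio (8)
        R_open = (R_k * Q) / q               # next GoP open-loop estimate (7)
        if R_open > r:                       # test Ropen(k+1) > r
            Q = (q * R_open) / ((1 - x) * R_open + x * r)   # (9)
        else:
            Q = (q * R_open) / (x * R_open + (1 - x) * r)   # (10)
        Q = max(1, round(Q))                 # integer QP for the encoder
        qp_schedule.append(Q)
    return qp_schedule
```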
                      IV.    EXPERIMENTS

    This section presents performance evaluation experiments on SVBR to analyze its strengths and weaknesses.

A. The Evaluation Approach

    A combination of several video clips is used for the purpose of evaluating SVBR. These clips contain both low and high video rates; thus, SVBR performance in several scenarios can be observed.

    The video traffic used in this research is video traffic traces. The idea is to employ real video information (from the selected video sequences) without the use of very large data sets in the experiments. Compared to a traffic model, trace traffic is considered credible as it represents an actual traffic load, as justified in [20]. Video traffic traces perfectly fit this requirement and have been employed by the majority of the video communication research community, as in [21], [22], [23], [24].

B. The Video Sequences Used

    The video sequence used is a combination of several video clips taken from http://trace.eas.asu.edu/yuv/index.html. These include "news", "bridge_far", "bridge_close", "bus", and "highway". They are typical video sequences used in various video rate control studies. All the video clips and their profiles are described in Table II. The combined clip duration is 218.4 seconds, which consists of 7099 frames (6552 frames without header) and 547 GoPs. All clips used are in raw YUV 4:2:0 format, at 25 frames per second (fps), in CIF resolution (352x288). CIF and QCIF are two commonly used formats in video transmission-related studies [25], [26].

              TABLE II. PROFILE OF VIDEO SEQUENCES USED IN THE EXPERIMENTS

Video clips       Size (frames)   Motion categories   Description
News              300             Medium              Two news reporters are talking.
Bridge (far)      2101            Low                 A bridge viewed from far away; a small boat heads in that direction with a bird flying in the sky.
Bridge (close)    2001            Low                 A bridge viewed from nearby, with many people walking across the bridge.
Bus               150             Very High           A bus moving at speed on a road.
Highway           2000            High                A panoramic view from a vehicle moving at speed on a highway.
C. Rate Control Experiment Settings

    In order to execute the experiments, the following parameter settings are employed:

   •  Bucket size (b) = 60000 Bytes/GoP. This size corresponds to a 1 Mbps transmission speed.
   •  Leak rate (r) = 20000 Bytes/GoP. This rate is the average open-loop rate for the video sequences used, as suggested by Hamdi et al. [11].
   •  VBR initial q = 10. This initial value is selected to provide room for the SVBR rate control to increase or decrease the QP value freely. For a setup with more limited network resources, a higher initial q value should be considered.

                      V.     RESULTS AND ANALYSIS

    The strengths and weaknesses of the SVBR algorithm will be highlighted based on the analysis of several scenarios below. The analysis is based on how SVBR reacts when certain scenarios occur. The analyzed scenarios include the occurrence of a sharp decrease in the VBR data rate, a sudden bucket overflow, a low data rate with a low bucket fullness level, and a fluctuating data rate.

A. Sharp Decreases in SVBR Data Rate

    Figure 7 shows the phenomenon of a sharp decrease in the SVBR data rate, which can be seen at GoP-201. Figure 7 charts the data for this particular GoP and the GoPs around it. The sudden sharp decrease in the SVBR data rate is from 17,136 Bytes/GoP to 6,048 Bytes/GoP. The SVBR data rate is examined here because the generation of the next SVBR data rate is very much dependent on it (refer to Figure 6); Rest(k+1) and Ropen(k+1) are the SVBR data rate estimations for GoP-(k+1).

    What can be observed here is that when a sharp decrease in the SVBR data rate occurs, it will automatically generate a sudden burst in the next SVBR data rate. This kind of scenario occurs especially when the next VBR data rate increases sharply as well. The possible problem in this scenario is that the SVBR data rate might increase to a very much higher rate compared to VBR. This case can clearly be seen in Figure 7. The SVBR data rate bursts from 6048 Bytes/GoP to 253000 Bytes/GoP, while the VBR data rate only increases sharply from 4032 Bytes/GoP to 30240 Bytes/GoP.

              Figure 7: Sharp decrease scenario in the GoP-201

    The above scenario can be explained as follows. For GoP-201, k is equal to 201. Then, the bucket fullness ratio for the next GoP, when applying (8), is 0.1008. The QP values for the respective GoPs are shown in Table III.

          x = X(202) / b = 6048 / 60000 = 0.1008

          Ropen(202) = (R(201) × Q(201)) / q = (6048 × 7) / 10 = 4233.6

    Since Ropen(202) is less than r (CBR = 20000), (10) is applied:

          Q(202) = (q × Ropen(202)) / (x · Ropen(202) + (1 - x) · r)

          Q(202) = (10 × 4233.6) / (0.1008 × 4233.6 + (1 - 0.1008) × 20000)

          Q(202) = 42336 / (426.7469 + 17984) = 42336 / 18410.7469 = 2.2995

              TABLE III. RESPECTIVE QP VALUES FOR GOP 198-204

     GoP #     198     199     200     201     202     203     204
      QP        7       7       7       7       2       25      18

    Since the nearest QP value for Q(202) is 2, Q(202) = 2. What should be highlighted here is that, when the data rate at GoP-201 is comparatively very low, the algorithm wants to increase the data rate for the next GoP by lowering the QP value. The algorithm succeeds in its principle to perform that task. The QP value has been decreased from 7 to 2, with the intention of increasing the data rate substantially. However, at GoP 202 the VBR data rate increases to a higher rate; consequently, with QP=2 the SVBR GoP-202 data rate is equal to 252000 Bytes/GoP. For reference purposes, if the VBR data rate had remained at the same rate for GoP-202 as for GoP-201, QP=2 would only moderately increase the SVBR GoP-202 data rate, to 24192 Bytes/GoP.
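    The numbers above can be reproduced with a short sketch (parameter values from Section IV-C; not the authors' code):

```python
# Reproducing the GoP-202 worked example: b, r, q from Section IV-C,
# R(201)=6048, Q(201)=7 and X(202)=6048 as given above.

b, r, q = 60000, 20000, 10
R_201, Q_201, X_202 = 6048, 7, 6048

x = X_202 / b                          # (8)  -> 0.1008
R_open = (R_201 * Q_201) / q           # (7)  -> 4233.6
Q_202 = (q * R_open) / (x * R_open + (1 - x) * r)   # (10) -> about 2.2995
print(round(x, 4), R_open, round(Q_202, 4))
```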
    From the above calculation, it can be concluded that one of the weaknesses of the SVBR algorithm is that when a sharp decrease in the VBR data rate occurs, the SVBR data rate for the next GoP will increase dramatically. This is especially true when the next VBR data rate increases sharply as well. The sudden sharp fluctuation in the SVBR data rate can go beyond the permissible level, which overflows the bucket. This scenario crops up as a result of a very low bucket fullness level combined with a low Ropen value. This leads to a very low QP value and, consequently, a very high data rate.
B. When a Sudden Bucket Overflow Occurs

    Figure 8 shows the case when a sudden bucket overflow occurs. This scenario can be observed starting from GoP 369 until GoP 381. This phenomenon can be described as an advantage of the SVBR algorithm, but at the same time, it shows poor performance as well. The disadvantage of the algorithm can be seen at GoP-369. Here, when the VBR data rate fluctuates sharply to 59472 Bytes/GoP, close to the bucket size, SVBR also follows this behaviour, even to a higher data rate than the VBR.

    In addition to the aforementioned explanation, since SVBR uses the previous GoP data, which is GoP 368, to estimate the next GoP data, it generates a small Ropen. The small Ropen and the higher bucket fullness level will generate an R(369) (refer to (6)) close to the Ropen value (refer to Table I). Consequently, the SVBR algorithm generates a QP value which is close to the default value q. This form of estimation is good for video sequences which increase or decrease in a smoother way. But for the next video sequence, even though the data rate increases sharply, the QP value is still the same as for the previous video sequence. Then, for SVBR, when the QP value is around the default q value, it produces a high data rate. This is because SVBR is using the real data for the transmission, which is the same as VBR (Q(369) = q = 10).

          x = X(370) / b = 60000 / 60000 = 1.0000

          Ropen(370) = (R(369) × Q(369)) / q = (59472 × 10) / 10 = 59472

    Since Ropen(370) is greater than r, (9) is applied:

          Q(370) = (q × Ropen(370)) / ((1 - x) · Ropen(370) + x · r)

          Q(370) = (10 × 59472) / (0 × 59472 + 1 × 20000) = 594720 / 20000 = 29.736

    Since the nearest QP value for Q(370) is 30, QP = 30. The data rate for QP = 30 is 25200 Bytes/GoP; accordingly, the SVBR data rate for GoP-370 is 25200 Bytes/GoP.

    In the above case, although the present VBR rate and the estimation of the next VBR rate (Ropen(370)) are high, since the bucket fullness is also high, the estimation for R(370) moves closer to r, the CBR rate. All of this produces a QP value (Q(370)) higher than q. Accordingly, the higher QP value generates a low SVBR data rate. This is a good strength of the SVBR algorithm: even though the VBR data rate is very high and is maintained high for several consecutive GoPs, the SVBR algorithm quickly adjusts its data rate to an acceptable level and maintains that rate onwards.
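    Again, a short sketch (same assumed parameters as before) reproduces these numbers:

```python
# Reproducing the GoP-370 worked example: R(369)=59472, Q(369)=10,
# X(370)=60000, with b, r, q as in Section IV-C.

b, r, q = 60000, 20000, 10
R_369, Q_369, X_370 = 59472, 10, 60000

x = X_370 / b                                         # (8) -> 1.0
R_open = (R_369 * Q_369) / q                          # (7) -> 59472
Q_370 = (q * R_open) / ((1 - x) * R_open + x * r)     # (9) -> 29.736
print(x, R_open, Q_370)
```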
    It can be concluded here that when a high data rate burst occurs, SVBR will burst as well for the first following GoP, leading to bucket overflow. As each GoP consists of 12 frames, this means that 12 frames will burst as well. However, SVBR has been designed to quickly retract the burst to a permissible level for the following GoPs.

              Figure 8: Sharp VBR increase scenario

C. Low Data Rate and Low Bucket Fullness Level

    As shown in Figure 9, this scenario can be clearly seen from GoP 26 to GoP 200. Within this range, especially from GoP 106 to 200, there is a long, flat, low-rate sequence; part of it is shown in Figure 9. Here, both CBR and VBR are at a low rate, and SVBR is seen to stay around both rates. SVBR should increase its data rate here to improve its visual quality. This clearly shows that SVBR is not designed to increase the visual quality in a low data rate sequence.

              Figure 9: A long-low-flat data rate sequence

    Based on Figure 9, for SVBR to estimate GoP 106 from GoP 105, SVBR relies on the distance of the rate to CBR (r) and VBR. In the case of calculating GoP 106 using GoP 105, whose data rate is very close to r, this drives SVBR to adjust its rate to lower than the r value. It does so by making the R(106) estimation lower than the real value of R(105); refer to (6). By designing it in such a way, it produces a higher QP value estimation, which is 7 (previously QP = 6). Consequently, it generates a lower value for the SVBR GoP-106 data rate.

    In producing the SVBR data rate for GoP 107, SVBR estimates R(107) to be a little higher than the real R(106), since R(106) is located in the middle between r and VBR but at a slightly smaller distance. In consequence, when calculating the value for QP, it obtains Q(107) = 6.7717. Since the nearest round number to 6.7717 is 7, it produces an SVBR data rate for GoP 107 similar to that of the previous GoP. This scenario continues until GoP 200, where the VBR data rate changes to another value.

    Therefore, the clear disadvantage of the SVBR algorithm is that it has been designed to keep its data rate varying between the VBR and CBR data rates. Inevitably, at a low data rate,
especially in a long, flat data rate sequence, SVBR tends to stay around that rate. Thus, it maintains its visual quality. However, since the bucket fullness is at a low level, the data rate could be increased so that the visual quality is improved as well.

D. Fluctuating Data Rate

    There are two contrasting types of fluctuating data rate: positive and negative fluctuation. Video data rates generally fluctuate. A positive fluctuation is a condition in which the QP values are constant; thus, ordinary video data rate fluctuation is considered a positive fluctuation. A negative fluctuation is a condition in which the data rate and the QP value are not constant. This subsection refers to the negative fluctuation, especially the fluctuation in the SVBR data rate. In the following, two cases of negative fluctuation are explained.

    Case I: VBR is Higher than CBR

    Whenever SVBR fluctuates differently from VBR, it can be considered a negative fluctuation. One such scenario can be seen at GoP 208 to 250. Figures 10 and 11 show this scenario.

              Figure 10: SVBR data rate fluctuation

              Figure 11: SVBR QP fluctuation

    This scenario is one of the good examples of the sensitive relationship described in Subsection III-D. In GoP 209, QP = 11 and the SVBR data rate (R(209)) is 19152 Bytes/GoP. This data rate is below both the CBR and VBR data rates at that GoP. One of the principles of SVBR is to let the next data rate lie between the VBR and CBR data rates by adjusting the next QP value. It should be emphasized here that SVBR is only able to estimate the next QP value, with the hope that the actual data rate will lie within the intended region.

    Since the generated Ropen for GoP 210 is a little higher, it will produce a slightly higher QP value, but below 11. Another way to comprehend this scenario is by looking at how SVBR tries to raise its data rate to lie between CBR and VBR. Given that the current data rate is below both CBR and VBR and its QP value is 11, SVBR will decrease its QP value (to increase its data rate), but at most down to 10; this is because the QP value for VBR is 10. The calculation of Q(210) gives 10.4223, and since there is no fractional QP encoding, SVBR obtains Q(210) = 10.

    Q(210) = 10 generates a data rate similar to the VBR GoP-210 data rate. Since the VBR data rate is in an active sequence, SVBR obtains a much higher data rate compared to the previous one. The negative implication of this situation is on the generation of the next QP and its data rate. As a result of the wider gap between Ropen and r, the generation of the QP becomes sensitive. In the case of Q(211), SVBR obtains 14.3137 (rounded to 14). This QP, when translated to the actual data rate, equals 17136 Bytes/GoP. It creates a significant ding-dong effect in the SVBR data rate. This situation continues until GoP 250, when the VBR data rate changes into a less active scene.

    Case II: CBR is Higher than VBR

    At GoP 424 to GoP 448, a similar negative fluctuation can be observed, but this time VBR is lower than CBR. The other dissimilar condition is that the bucket fullness level is also fluctuating between above and below the CBR data rate. Figures 12 and 13 show this scenario.

          Figure 12: SVBR with VBR lower than CBR: data rate fluctuation

          Figure 13: SVBR with VBR lower than CBR: QP fluctuation

    The fluctuation also occurs as a result of sensitivity in the algorithm when the gap between VBR and CBR is quite wide. Although the gap is narrower compared to the previous subsection, the change of one QP value has contributed to this fluctuation. Another contributor to this fluctuation is the oscillation of the bucket fullness level at the CBR data rate.

                      VI.      SUMMARY

    This paper presented an extensive analysis of the SVBR algorithm's performance. In general, this algorithm has demonstrated a novel design for an ideal rate controller, especially for real-time video applications: it produces higher visual quality video, keeps the video data rate within a permissible bandwidth level, and adds little delay, since it introduces no additional buffer or complex computational algorithm.

    However, there is room for performance improvement of the SVBR algorithm, especially under certain specific
conditions. Besides the strengths of the SVBR algorithm, this analysis has pointed out several weak areas that can be considered for improvement. The circumstances that were strongly stressed were the occurrence of a sudden sharp decrease in the VBR data rate, the occurrence of a sudden bucket overflow, the existence of a low data rate with a low bucket fullness level, and the generation of a cyclical negative fluctuation. These weaknesses play an important role in designing a new algorithm for stored video transmission applications.

                                REFERENCES

[1]  P. Seeling, M. Reisslein, and B. Kulapala, "Network performance evaluation using frame size and quality traces of single-layer and two-layer video: A tutorial," IEEE Communications Surveys & Tutorials, vol. 6, no. 3, pp. 58-78, 2004.
[2]  M.-J. Kim and M.-C. Hong, "Adaptive rate control scheme for real-time H.264/AVC video coding," in 2010 Digest of Technical Papers International Conference on Consumer Electronics (ICCE), 2010, pp. 271-272.
[3]  M. Semsarzadeh, M. Langroodi, and M. Hashemi, "An adaptive rate control for faster bitrate shaping in x264 based video conferencing," in 2010 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Mar. 2010, pp. 1-4.
[4]  X. Liu, Q. Dai, and C. Fu, "Improved methods for initializing R-Q model parameters and quantization parameter in H.264 rate control," in 2009 WRI World Congress on Computer Science and Information Engineering, vol. 6, Mar. 2009, pp. 320-323.
[5]  S. G. Zhao Min, Takeshi Ikenaga, "A novel rate control algorithm for H.264/AVC," in The 23rd International Technical Conference on Circuits/Systems, Computers and Communications, The Institute of Electronics, Information and Communication Engineers (IEICE), 2008, pp. 725-728.
[6]  Z. Chen and K. N. Ngan, "Recent advances in rate control for video coding," Image Commun., vol. 22, no. 1, pp. 19-38, 2007.
[7]  H. A. Marios C. Angelides, The Handbook of MPEG Applications: Standards in Practice. John Wiley and Sons, 2011.
[8]  S. Eshaghi and H. Farsi, "Rate control and mode decision jointly optimization in H.264AVC," in Proceedings of the 4th Conference on European Computing, ser. ECC'10. Stevens Point, Wisconsin, USA: World Scientific and Engineering Academy and Society (WSEAS), 2010, pp. 280-283. [Online]. Available: http://portal.acm.org/citation.cfm?id=1844367.1844414
[9]  P. Ranganth and Y. V. Swaroop, "Adaptive rate control for H.264 video encoding in a video transmission system," in XXXII National Systems Conference (NSC 2008), Dec. 2008.
[10] W. Lu, X. Gao, Q. Deng, and T. Wang, "A basic-unit size based adaptive rate control algorithm," in Fourth International Conference on Image and Graphics (ICIG 2007), Aug. 2007, pp. 268-273.
[11] H. Hamdi, J. W. Roberts, and P. Rolin, "Rate control for VBR video coders in broad-band networks," IEEE Journal on Selected Areas in Communications, vol. 15, no. 6, pp. 1040-1051, 1997.
[12] A. Lie and J. Klaue, "Evalvid-RA: Trace driven simulation of rate adaptive MPEG-4 VBR video," Multimedia Systems, vol. 12, no. 1, 2008.
[13] A. Lie, "Enhancing rate adaptive IP streaming media performance with the use of Active Queue Management," Ph.D. dissertation, Norwegian University of Science and Technology, 2008.
[14] J. Klaue, B. Rathke, and A. Wolisz, "EvalVid - a framework for video transmission and quality evaluation," in Proceedings of the 13th International Conference on Modelling Techniques and Tools for Computer Performance Evaluation, 2003, pp. 255-272.
[15] C.-H. Ke, C.-K. Shieh, W.-S. Hwang, and A. Ziviani, "An evaluation framework for more realistic simulations of MPEG video transmission," Journal of Information Science and Engineering, vol. 24, pp. 425-440, 2008.
[16] M. A. Talaat, M. A. Koutb, and H. S. Sorour, "PSNR evaluation of media traffic over TFRC," International Journal of Computer Networks & Communications (IJCNC), vol. 1, no. 3, pp. 71-76, 2009.
[17] C. Bouras, V. Papapanagiotou, K. Stamos, and G. Zaoudis, "Efficient power management adaptation for video transmission over TFRC," in 2010 Sixth Advanced International Conference on Telecommunications (AICT), May 2010, pp. 509-514.
[18] C. Bouras, A. Gkamas, and G. Kioumourtzis, "Adaptive Smooth Simulcast Protocol (ASSP) for video applications: Description and performance evaluation," Journal of Network and Systems Management, pp. 1-35, 2010, doi: 10.1007/s10922-010-9159-8. [Online]. Available: http://dx.doi.org/10.1007/s10922-010-9159-8
[19] C. Bouras, A. Gkamas, and G. Kioumourtzis, "Evaluation of single rate multicast congestion control schemes for MPEG-4 video transmission," in Proceedings of the 5th Euro-NGI Conference on Next Generation Internet Networks, ser. NGI'09. Piscataway, NJ, USA: IEEE Press, 2009, pp. 32-39. [Online]. Available: http://portal.acm.org/citation.cfm?id=1671421.1671426
[20] A. Al Tamimi, R. Jain, and C. So-In, "Modeling and prediction of high definition video traffic: A real-world case study," in 2010 Second International Conferences on Advances in Multimedia (MMEDIA), Jun. 2010, pp. 168-173.
[21] W.-K. Liao and Y.-H. Lai, "Type-aware error control for robust interactive video services over time-varying wireless channels," IEEE Transactions on Mobile Computing, vol. 10, no. 1, pp. 136-145, Jan. 2011.
[22] R. Zhang, R. Ruby, J. Pan, L. Cai, and X. Shen, "A hybrid reservation/contention-based MAC for video streaming over wireless networks," IEEE Journal on Selected Areas in Communications, vol. 28, no. 3, pp. 389-398, Apr. 2010.
[23] M. S. Koul, "Error concealment and performance evaluation of H.264/AVC video streams in a lossy wireless environment," Dept. of Electrical Engineering, The University of Texas at Arlington, Tech. Rep., 2008.
[24] O. A. Lotfallah, M. Reisslein, and S. Panchanathan, "A framework for advanced video traces: Evaluating visual quality for video transmission over lossy networks," EURASIP J. Appl. Signal Process., pp. 263-263, Jan. 2006. [Online]. Available: http://dx.doi.org/10.1155/ASP/2006/42083
[25] C.-H. Ke, C.-K. Shieh, W.-S. Hwang, and A. Ziviani, "Improving video transmission on the Internet," IEEE Potentials, vol. 26, no. 1, pp. 16-19, Jan. 2007.
[26] P. Seeling, F. H. P. Fitzek, and M. Reisslein, Video Traces for Network Performance Evaluation: A Comprehensive Overview and Guide on Video Traces and Their Utilization in Networking Research. Springer, 2007.

                              AUTHORS PROFILE

Ahmad Suki Che Mohamed Arif is a lecturer in the Graduate Department of Computer Science, Universiti Utara Malaysia (UUM), and is currently attached to the InterNetWorks Research Group at the UUM College of Arts and Sciences as a doctoral researcher.

Associate Professor Dr. Suhaidi Hassan is currently the Assistant Vice Chancellor of the College of Arts and Sciences, UUM. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE), in which he is actively involved in both the IEEE Communications and IEEE Computer societies. He has served as the Vice Chair (2003-2007) of the IEEE Malaysia Computer Society.

Dr. Osman Ghazali is a Senior Lecturer at Universiti Utara Malaysia. As an academician, his research interests include congestion control, quality of service, wired and wireless networks, transport layer protocols and network layer protocols. His works have been published in international conferences and journals and have won awards in research and innovation competitions at national and international levels.

Dr. Mohammed M. Kadhum is a visiting assistant professor in the Graduate Department of Computer Science, UUM. He is currently attached to the InterNetWorks Research Group at the UUM College of Arts and Sciences as a research advisor. He has been awarded several medals for his outstanding research projects.

An Efficient Self-Organized Authentication and Key
Management Scheme for Distributed Multihop Relay-
           Based IEEE 802.16 Networks
      Adnan Shahid Khan, Norsheila Fisal, Sharifah Kamilah, Sharifah Hafizah, Mazlina Esa, Zurkarmawan Abu Bakar
      UTM-MIMOS Center of Excellence in Telecommunication Technology, Faculty of Electrical Engineering,
      Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
      adnan.ucit@gmail.com, {sheila, kamilah, sharifah, mazlina, zurkarmawan}@fke.utm.my

                                        M. Abbas
                           Wireless Communication Cluster
                MIMOS Berhad, Technology Park Malaysia, 57000 Kuala Lumpur, Malaysia
                                 mazlan.abbas@mimos.my

Abstract— As wireless Internet services rapidly expand and improve, it is important to provide users not only with high-speed, high-quality wireless service but also with security. Multihop relay-based support was added to the standard; it not only helps to improve coverage and throughput but also offers features such as lower backhaul deployment cost, easy setup, robustness and re-configurability, which make it one of the indispensable technologies for next-generation wireless networks. A WiMAX network usually operates in a highly dynamic and open environment and is therefore known to be more vulnerable to security holes. Closing these holes is usually traded off against authentication and key management overheads. To operate securely, communication must be scheduled by a distributed, centralized or hybrid security control algorithm with low authentication and key management overheads. In this paper, we propose a new, fully self-organized and efficient authentication and key management scheme (SEAKS) for hop-by-hop distributed and localized security control in multihop non-transparent relay-based IEEE 802.16 networks, which not only supports security countermeasures but also reduces authentication and key maintenance overheads. The proposed scheme provides hybrid security control combining distributed authentication with localized re-authentication and key maintenance. It uses distributed non-transparent decode-and-forward relays for distributed authentication when a non-transparent relay station (NRS) wants to join the network, and localized authentication when NRSs want to re-authenticate and perform key maintenance. We analyze the procedures of the proposed scheme in detail and examine how it significantly reduces overall authentication overheads and counters security vulnerabilities such as denial-of-service, replay and interleaving attacks.

   Keywords- WiMAX Security, Multihop Relay-based IEEE 802.16, Key Management, Self-Organized Authentication

                      I.    INTRODUCTION

    In a Multihop Relay (MR) network, two different security modes are defined. The first is the centralized security mode, which is based on key management between a Multihop Relay Base Station (MR-BS) and a Mobile Station (MS); here the Relay Station (RS) simply amplifies and forwards. The second, referred to as the distributed mode, incorporates authentication and key management between the MR-BS and a non-transparent RS, which we call an NRS, and between the NRS and an MS. During the registration process, an RS can be configured to operate in distributed security mode based on its capability [1]. Since the AUTH-INFO message is optional and informative, we begin the security analysis from the AUTH-REQ message. This message is plain text; for such a message, eavesdropping is not a problem, since the information is almost public and is preferably sent in plain text to facilitate authentication. Capturing and saving the authentication message sent by a legitimate station is not difficult, so an NRS may face a replay attack from an adversary. Although an adversary eavesdropping on the message cannot derive the AK from it, because it does not have the corresponding private key, it can still replay message II multiple times and thereby either exhaust the NRS's capabilities or force the NRS to deny the SS that actually owns the certificate [1] [2]. The reason is that if the NRS sets a timeout value that makes it reject an Auth-REQ from the same MS within a certain period, the legitimate request from the victim MS will be ignored, and a denial-of-service attack on the victim MS occurs. The ultimate solution to these types of attacks is the introduction of digital signatures at the end of the messages, which can be automatically time-stamped and thereby provide authentication and non-repudiation of the message. The design of a digital signature system may, however, be flawed or vulnerable to specific attacks such as collision attacks against X.509 public-key certificates and cryptographically weak pseudo-random bit generators. Adversaries may attempt a total break, universal forgery, selective forgery or existential forgery.
    The strongest security definition requires protection against existential forgery even if an adversary is able to mount an adaptive chosen-message attack. Later, a nonce was added to the
digital signature; the idea of nonce values is that they are used only once with a given key. However, the exchange of a nonce only assures the SS that message III is a reply corresponding to its request. The NRS still faces the replay attack, because it cannot tell whether message II was sent recently or is just an old message [3]. Even if the replay attack itself does not succeed, a denial of service will certainly occur. The authors of [4] also suggested passing the pre-AK to the SS instead of the AK and letting the SS and NRS derive the AK from the pre-AK at both ends. If the generation of the AK exhibits significant bias, adding freshness to the AK may prevent its exposure; however, according to [4], this cannot provide the claimed freshness. If we consider the security issues of relay-based IEEE 802.16 networks under centralized as well as distributed authentication, every node needs to authenticate itself with the MR-BS and ultimately with the AAA server. Secondly, every node needs to maintain two keys simultaneously, the AK and the TEK, to remain authenticated. Failure to maintain these keys results in re-authentication from scratch, which is undoubtedly extra authentication overhead. Suppose there are five NRSs, each of which has to keep track of its AK and TEKs and, consequently, its authentication; the authentication overhead generated by these five NRSs undoubtedly lessens the efficiency of the overall deployed network. To solve this authentication overhead problem, the self-organized and efficient authentication and key management scheme (SEAKS), which utilizes non-transparent, decode-and-forward relays, proves to be the best candidate in the relay-based IEEE 802.16 network. SEAKS provides a hybrid scheme with distributed authentication and localized re-authentication and key maintenance. This technique not only helps to minimize the overall authentication overhead on the MR-BS and the AAA server but also provides an efficient way to countermeasure the vulnerabilities.
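    To make the replay discussion above concrete, the sketch below shows how a receiver that tracks previously seen nonces and enforces a freshness window on time-stamped, signed requests could reject replayed authentication messages. This is only a minimal illustrative sketch under our own assumptions: the message fields, the window size and the signature check placeholder are hypothetical and are not taken from the standard or from the proposed scheme.

import time

# Hypothetical illustration: reject replayed authentication requests by
# combining a per-sender nonce cache with a timestamp freshness window.
FRESHNESS_WINDOW = 30.0  # seconds; assumed value, not from the standard

class ReplayFilter:
    def __init__(self):
        self.seen_nonces = {}  # sender_id -> set of nonces already accepted

    def accept(self, sender_id, nonce, timestamp, signature_valid):
        """Return True if the request looks fresh and has not been replayed."""
        if not signature_valid:
            return False  # signature check (e.g. X.509-based) assumed done elsewhere
        if abs(time.time() - timestamp) > FRESHNESS_WINDOW:
            return False  # stale or future-dated message
        nonces = self.seen_nonces.setdefault(sender_id, set())
        if nonce in nonces:
            return False  # replayed message: nonce already used with this key
        nonces.add(nonce)
        return True

# Example: the second, identical request is rejected as a replay.
f = ReplayFilter()
now = time.time()
print(f.accept("NRS1", nonce=0x1A2B, timestamp=now, signature_valid=True))   # True
print(f.accept("NRS1", nonce=0x1A2B, timestamp=now, signature_valid=True))   # False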
                                                                         data encryption. In IEEE 802.16j [12] standard, Multihop
    The rest of the paper is organized as follows. After the related work, Section III gives an overview of general attacks on the network, Section IV discusses centralized and distributed authentication controls, Section V deals with the security goals of relay-based WiMAX networks, Section VI describes the self-organized scheme (SEAKS), and Section VII gives the analysis of the proposed scheme, which is followed by the conclusion and future work.

                 II.   RESEARCH BACKGROUND

    In 2006, the IEEE 802.16 Working Group (WG) approved a Project Authorization Request (PAR) focused on the Relay Task Group (TG). The main task of this Relay TG was to develop an amendment to IEEE Std 802.16 enabling the operation of Relay Stations (RSs) in the OFDMA wireless networks defined by 802.16 [2]. Enhancing relays to support multihop operation not only increases wireless coverage but also provides features such as lower backhaul deployment cost, easy setup and high throughput. The relay station concept discussed in [1], [2] and [5] introduces four types of RSs from the perspective of the physical and MAC layers. After comparing them, the main focus of this research is on the non-transparent RS operating in distributed scheduling and security mode [2]. The WiMAX relay-based network is still under draft, and the literature is very sparse. In this network, all the relays are connected to the MR-BS wirelessly, transparently or non-transparently, and only the MR-BS is connected to the IP cloud as a backhaul; this infrastructure can therefore be used in many real-time applications [2].
    As a matter of fact, security is essential in wireless technologies to allow rapid adoption and enhance their maturity. Due to the lack of physical boundaries, the whole relay-based infrastructure is exposed to security holes. The IEEE 802.16 standard stipulates some powerful security controls, including PKMv2, EAP-based authentication and over-the-air AES-based encryption, but secure technology does not in itself constitute a secure end-to-end network; consequently, WiMAX presents a range of security vulnerabilities. Since the first amendment on MR specifications was released [1], a few papers have been published to introduce and address the security issues. Some papers review the standard in detail, such as [6] and [7], and some work purely on key management, especially Sen Xu and Manton Matthews, who published a series of works such as [3] and [4] on security issues in the standard as well as in the Privacy Key Management (PKM) protocols. Karen Scarfone and her team produced a special publication, Guide to Security for WiMAX Technologies (Draft), containing the recommendations of the National Institute of Standards and Technology (NIST). Taeshik Shon and Wook Choi [8] analyzed mobile WiMAX security, its vulnerabilities and solutions. Y. Lee and H. K. Lee in their paper [9] focus on a hybrid authentication scheme and key distribution for MMR in IEEE 802.16j.
    The authors of [10] and [11] review the standard and analyze its security in many aspects, such as vulnerabilities in the authentication and key management protocols and failures in data encryption. In the IEEE 802.16j standard [12], Multihop Relay (MR) is an optional deployment in which a BS (802.16e) may be replaced by a Multihop Relay BS (MR-BS) and one or more relay stations (RSs). The MR mechanism provides several advantages, such as providing additional coverage for the serving BS, increasing transmission speed in an access network, providing mobility without SS handover, decreasing power consumption when transmitting and receiving packets, and enhancing the quality of service [3]. A significant amount of work has been done on security issues and protocols, as shown above, but none of it covers security protocols that minimize authentication and key management overheads in non-transparent relay-based WiMAX networks in a distributed environment.

      III.   GENERAL ATTACKS ON RELAY-BASED IEEE 802.16 NETWORKS

    Before we start to elaborate our self-organized algorithm, we would like to highlight some of the typical MAC-layer attacks on authentication and key management protocols. The first and most common attack is the message replay attack [7]. This attack is common not only in key management and authentication protocols but also in multicast and broadcast (M & B) services [11]. In a replay attack, an adversary intercepts,
captures and saves the authentication messages sent by the legitimate RS/SS. The adversary then impersonates the legitimate RS/SS and resends the message after a specific period of time. Denial of service (DoS) is also one of the major attacks in wireless networks, especially in WiMAX networks. Consider an adversary that eavesdrops on the message but cannot derive the AK, as it does not have the corresponding private key. This adversary can still replay the AUTH-REQ message multiple times and thus exhaust the MR-BS's capabilities and force the MR-BS to deny the sender. This may happen if the MR-BS sets a timeout value that makes it reject AUTH-REQ messages from the same RS/SS within an interval of time; the MR-BS then denies the AUTH-REQ of the legitimate RS/SS, which actually owns the certificate. DoS attacks are common in authentication and key management protocols and in M & B services. The man-in-the-middle (MiTM) attack is another critical attack and is generally applicable in communication protocols where mutual authentication is absent, especially in PKMv1. This attack leads to message modification and masquerading problems, especially node spoofing, rogue base and relay stations, and theft of service (ToS). To avoid MiTM attacks on the PKM protocol, mutual authentication was proposed, i.e. PKMv2. PKMv2 is certainly sound against MiTM, but it cannot prevent an adversary from mounting an interleaving attack.
    The interleaving attack is complex to explain but easy to attempt. An adversary mounts this attack with the help of two different protocol instances. In the first instance, the adversary impersonates the SS/RS and sends the intercepted message to the MR-BS. The MR-BS authenticates it and replies with the corresponding keys. The adversary needs to relay these keys to the RS/SS to be successfully authenticated, as it cannot decrypt the message encrypted with the SS/RS's public key in order to obtain the AK needed to encrypt the nonce challenge; thus it cannot complete the authentication at this point. To solve this technicality, the adversary forces the RS/SS to run another protocol instance to answer the challenge. Once the RS/SS sends the request, the adversary replies to the SS with the same nonce challenge that the MR-BS sent to it. The RS/SS thus sends the nonce and AK to the adversary, which later sends them to the MR-BS to finish the authentication successfully. This attack can normally occur only on PKMv2 or wherever mutual authentication is present. In IEEE 802.16 multihop networks the number of wireless devices involved is increased, which widens the scope for interleaving attacks [3] [4].

       IV.   CENTRALIZED VS. DISTRIBUTED AUTHENTICATION

A. Centralized Security Control
    In this mode, the intermediate RS is not involved in establishing the security association (SA) between the MS and the MR-BS in the multihop relay system. The RS simply relays the user data or MAC management messages that it receives from the MS, but does not process them. The RS does not have any key information relevant to the MS, and all the keys related to the MS are maintained at the MS and the MR-BS [13]. When an SA is established between an RS and the MR-BS in the MR system, the key data, such as the AK, is shared and maintained at that particular RS and the MR-BS, and the intermediate RS does not have this key information. The intermediate RS uses particular shared keys to authenticate management messages received from other RSs [12][14].

B. Distributed Security Control
    In this mode, an access RS, which provides a point of access into the network for an MS or RS, can derive the authentication key established between the MS and the MR-BS. An RS can be configured to operate in distributed security mode based on its capability during the registration process, and it relays initial key management messages between the MR-BS and the MS/subordinate RS. Upon master session key establishment, the access RS securely acquires the relevant Authorization Key of the subordinate RS/MS from the MR-BS. Using the PKM protocol, the access RS can derive all necessary keys. Different traffic encryption keys (TEKs) are used for the relay link and the access link in distributed security control mode; they are distributed by the MR-BS and the RS, respectively [4][15]. The SA is created between an MS, an access RS and the MR-BS in distributed security mode. Each MS shall establish an exclusive primary SA with the RS, interacting with the RS as if it were a BS from the MS's point of view. Similarly, each RS shall establish an exclusive primary SA with the MR-BS [12][16].
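    The key hierarchy used in distributed security control, an AK known to the MR-BS and the access RS, a KEK derived from it, and separate TEKs for the relay and access links, can be illustrated with the sketch below. The HMAC-SHA-256 construction and the label strings are our own assumptions for illustration only; IEEE 802.16 specifies its own key derivation functions, which this sketch does not reproduce.

import hmac
import hashlib

# Illustrative only: derive a KEK and separate relay-link / access-link TEKs
# from an authorization key (AK). HMAC-SHA-256 and the labels are assumptions;
# the standard defines its own key derivation procedures.
def kdf(key: bytes, label: str, length: int = 16) -> bytes:
    return hmac.new(key, label.encode(), hashlib.sha256).digest()[:length]

def derive_link_keys(ak: bytes, said: int) -> dict:
    kek = kdf(ak, f"KEK|{said}")                       # key-encryption key derived from AK
    return {
        "KEK": kek,
        "TEK_relay": kdf(kek, f"TEK|relay|{said}"),    # TEK used on the relay link
        "TEK_access": kdf(kek, f"TEK|access|{said}"),  # TEK used on the access link
    }

# Example usage with a dummy 160-bit AK.
keys = derive_link_keys(ak=b"\x01" * 20, said=0x2B)
for name, value in keys.items():
    print(name, value.hex())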
        V.   SECURITY GOALS OF RELAY-BASED WIMAX NETWORKS

    A non-transparent relay-based WiMAX network may require the following security functions, which have not been widely studied until now.
    •	Localized and hop-by-hop authentication is required. In a relay-based WiMAX network, the NRS is introduced for coverage extension and throughput enhancement; for this purpose, hop-by-hop authentication between NRSs, between NRS and MS, and between NRS and MR-BS should be supported for self-organized network operation.
    •	All participating devices must be validated and authenticated by the AAA server through the MR-BS, because the digital certificates of participating devices are registered only in the AAA server database. However, an NRS should authenticate other NRSs/MSs on behalf of the MR-BS; this concept is basically what leads our proposed scheme towards self-organization.
    •	Conventional MSs should be usable in a non-transparent relay-based WiMAX network without any functional modification of the MS.
    •	Overall authentication overhead should be minimized.
    In this paper we propose self-organized distributed and localized authentication and key management, in which participating devices are initially validated and authenticated by the MR-BS and afterwards NRSs are responsible for authenticating other devices and managing the freshness of the AK/TEK. The proposed scheme alleviates the above security problems, and we examine how it satisfies the security requirements of non-transparent relay-based WiMAX networks.
                          VI.   SEAKS
A. Authentication Procedure of NRS1 with the MR-BS

    The self-organized and efficient authentication and key management scheme (SEAKS) is based on a self-organized model using non-transparent, decode-and-forward relays. SEAKS provides a hybrid authentication scheme with distributed authentication and localized re-authentication and key maintenance. This technique not only helps to minimize the overall authentication overhead on the MR-BS and the AAA server but also provides an efficient way to countermeasure the vulnerabilities. Consider a non-transparent relay station, NRS1, that wants to join the WiMAX network. NRS1 sends its AUTH-REQ message to the serving MR-BS; the AUTH-REQ includes its manufacturer-issued X.509 certificate, a description of the cryptographic algorithms it supports and NRS1's basic CID. The CID is assigned during initial ranging; normally the primary SAID is equal to the basic CID. In response to an Authorization Request message, the MR-BS validates the requesting NRS's identity, determines the encryption algorithm and protocol support, activates an AK for NRS1, encrypts it with NRS1's public key and sends it back to NRS1 in an AUTH-REP message. The reply also includes a 4-bit sequence number used to distinguish between successive generations of AKs, a lifetime, and the security identities for which NRS1 is authorized to obtain keying material. Once authenticated and in possession of the authorization key (AK), NRS1 must periodically refresh its AK by reissuing an AUTH-REQ message to the MR-BS. Reauthorization is identical to authorization, except that NRS1 does not send its authentication information messages during the reauthorization cycle. To avoid service interruption during reauthorization, successive generations of NRS1's AKs have overlapping lifetimes, and both the NRS and the MR-BS support up to two simultaneously active AKs during these transition periods. The authentication of NRS1 with the MR-BS is shown in Figure 1.

          Figure 1: Authentication of NRS1 with MR-BS
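    The overlap of successive AK generations described above can be pictured with the small sketch below, in which up to two AKs are considered active at any time and either one is accepted until it expires. The lifetimes, the refresh margin and the data structures are illustrative assumptions of ours, not values or procedures taken from the standard.

from dataclasses import dataclass

# Illustrative sketch of overlapping AK lifetimes: a new AK is requested
# before the old one expires, and both remain usable during the transition.
AK_LIFETIME = 100.0      # assumed lifetime units
REFRESH_MARGIN = 20.0    # request the next AK this long before expiry

@dataclass
class AuthorizationKey:
    sequence: int        # 4-bit sequence number distinguishing AK generations
    issued_at: float
    lifetime: float = AK_LIFETIME

    def expires_at(self) -> float:
        return self.issued_at + self.lifetime

class AKManager:
    def __init__(self):
        self.active = []                 # at most two simultaneously active AKs

    def install(self, ak: AuthorizationKey, now: float):
        self.active = [k for k in self.active if k.expires_at() > now]
        self.active.append(ak)
        self.active = self.active[-2:]   # keep only the two newest generations

    def needs_refresh(self, now: float) -> bool:
        return not self.active or max(k.expires_at() for k in self.active) - now <= REFRESH_MARGIN

    def is_valid(self, sequence: int, now: float) -> bool:
        return any(k.sequence == sequence and k.expires_at() > now for k in self.active)

# Example: the old AK (seq 0) is still accepted after the new AK (seq 1) arrives.
mgr = AKManager()
mgr.install(AuthorizationKey(sequence=0, issued_at=0.0), now=0.0)
print(mgr.needs_refresh(now=85.0))                                # True: inside the refresh margin
mgr.install(AuthorizationKey(sequence=1, issued_at=85.0), now=85.0)
print(mgr.is_valid(0, now=95.0), mgr.is_valid(1, now=95.0))       # True True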
    Once NRS1 achieves authorization, it starts a separate traffic encryption key (TEK) state machine for each SAID defined in the AUTH-REP message. Each TEK state machine operating within NRS1 is responsible for managing the keying material associated with its respective SAID. The TEK state machines periodically send key request messages to the MR-BS to refresh the keying material for their respective SAIDs; the TEK is encrypted with an appropriate KEK derived from the AK. The operation of the TEK state machines' key request scheduling algorithm, combined with the MR-BS's regimen for updating and using SAID keying material, ensures that NRS1 is able to continuously exchange encrypted traffic with the MR-BS.
    A TEK state machine remains active as long as NRS1 is authorized to operate in the MR-BS's security domain, i.e. holds a valid AK, and is authorized to participate in that particular security association [1] [2]. The parent authorization state machine stops all of its child TEK state machines when the NRS receives an authorization reject from the MR-BS during the reauthorization cycle. This can be regarded as localized authentication between NRS1 and the MR-BS, and these procedures are the same as those described in [3][4]. While all the key state machines are refreshing the keys, NRS1 is eligible to transmit the UL-MAP message, and any node listening to this message can send an AUTH-REQ.
    Now suppose another non-transparent relay station, NRS2, wants to join the network. Due to its non-transparent nature, it is not within the coverage of the MR-BS, and only NRS1 can hear it. According to SEAKS, NRS2 listens to the UL-MAP from NRS1 and sends its AUTH-REQ message to NRS1. However, any non-transparent node that wants to join the network must authenticate itself with the MR-BS, as the MR-BS is directly attached to the AAA server, and NRS1 cannot authenticate NRS2 on behalf of the MR-BS.

B. Authentication Procedure of NRS2 with the MR-BS

    According to SEAKS, NRS1 receives the AUTH-REQ of NRS2 and sends it to the MR-BS during its AK refresh, because these authentications are delay-tolerant and secured. NRS1 receives the MAC PDU of NRS2 and encapsulates it into its own PKM-REQ message of type 9 and code 4 [1] [2]. The MR-BS receives the MAC PDU of NRS1, which is basically sent for refreshing the AK, and checks NRS1's MAC header: if RAR (Relay Auth Request) is equal to 1, there is one relay request inside the MAC PDU. RAR is basically a reserved bit utilized for this indication.
    Once the MR-BS obtains the AUTH-REQ of NRS2, it validates its authenticity, activates AK2 and the other parameters, encrypts them with NRS1's public key and responds to NRS1 in its AUTH-RSP message. NRS1 receives NRS2's security information, saves one copy of all of it into its knowledge shared table (KST), generates AK21, encrypts it with NRS2's public key, and sends its AUTH-RSP message to NRS2.

          Figure 2: Authentication of NRS2 with NRS/MR-BS

    Once NRS2 is authenticated, it starts its separate authorization and traffic encryption key state machines with NRS1. As mentioned in the previous section, all the relays involved are distributed, non-transparent and decode-and-forward; they can therefore generate an AUTH-RSP on behalf of the MR-BS, as shown in Figure 2. However, an NRS cannot verify the real validity of a certificate by itself, because it does not have the vendors' digital certificate database. If NRS1 fails to re-authenticate before the expiration of its current AK, the MR-BS will hold no active AKs for NRS1 and will consider not only NRS1 but also all other NRSs unauthorized, and it will remove from its keying tables all TEKs associated with NRS1 [4] [12]. All NRSs maintain a KST of recently exchanged AKs with their neighbours. If NRS2 fails to re-authenticate before the expiration of its current AK, NRS1 waits until NRS2 sends an AUTH-REQ message and then checks its KST; if the entry is found, NRS1 validates NRS2's authenticity locally, computes the keys and sends them to NRS2, rather than sending the request again to the MR-BS and waiting for the response. The advantage is a lower communication cost in the shape of authentication overhead, and thus less complexity.
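    The relayed exchange of this subsection, in which NRS1 piggybacks NRS2's request on its own AK refresh and caches the returned key material in its KST, can be sketched as below. The message layout (the RAR flag and the nested payload) and the class interface are our own simplifications, not the exact PKM frame formats.

# Illustrative sketch of how a serving NRS might piggyback a subordinate's
# AUTH-REQ on its own AK-refresh message and cache the result in its KST.
class ServingNRS:
    def __init__(self, name):
        self.name = name
        self.pending_requests = []   # subordinate AUTH-REQs awaiting relay
        self.kst = {}                # knowledge shared table: node -> key info

    def on_subordinate_auth_req(self, auth_req):
        # Delay-tolerant: queue the request until the next AK refresh.
        self.pending_requests.append(auth_req)

    def build_refresh_pdu(self):
        # RAR=1 signals that a relayed AUTH-REQ is carried inside this PDU.
        pdu = {"sender": self.name, "type": "AK-REFRESH", "RAR": 0, "payload": None}
        if self.pending_requests:
            pdu["RAR"] = 1
            pdu["payload"] = self.pending_requests.pop(0)
        return pdu

    def on_auth_rsp(self, rsp):
        # Cache the subordinate's key material before forwarding it on.
        node = rsp["node"]
        self.kst[node] = rsp["key_info"]
        return {"to": node, "key_info": rsp["key_info"]}  # would be re-encrypted for the node

# Example flow for NRS1 relaying NRS2's request.
nrs1 = ServingNRS("NRS1")
nrs1.on_subordinate_auth_req({"node": "NRS2", "certificate": "X.509-NRS2"})
print(nrs1.build_refresh_pdu())                       # RAR=1, NRS2's request inside
print(nrs1.on_auth_rsp({"node": "NRS2", "key_info": "AK2/SAID"}))
print(nrs1.kst)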
C. Authentication Procedure of NRSn with NRS1/MR-BS

    Now suppose NRS3 wants to join the network. It must send an AUTH-REQ message to the MR-BS, but as it is working in non-transparent mode it has to send the request through a non-transparent, already-authenticated relay within its coverage, which is NRS2. While sending the message, NRS3 sets RAR==1 inside the MAC header, so that NRS2 can recognize that there is an AUTH-REQ message inside the MAC payload, and sets the TYPE value to 8 and the code to 4, which identifies its PKM AUTH-REQ message. Once NRS2 receives this message it checks the RAR value; if the value is one, it looks inside the MAC payload, saves the message to its KST, and then forwards it to NRS1, again setting RAR==1 before sending. Hence there are two MAC messages present inside the MAC payload: one is the AUTH-REQ (code 4) and the other is a KEY-REQ (code 5). NRS1 receives this message and checks the RAR value; if it is one, it copies the AUTH-REQ message to its KST, otherwise it ignores it, and forwards it to the MR-BS. The MR-BS receives the message, validates it and sends back an AUTH-RSP message with type 9. Again there are two MAC messages inside the MAC payload: one is the key reply (code 8) and the other is the auth reply (code 5) for NRS1. NRS1 checks the code values: if the code is 5 it sends the message on to NRS2, and if it is 8 it uses it for refreshing its own keys. NRS2 again receives two MAC messages inside the payload, one with code 5 and the other with code 8; it retains the code-8 message for itself and sends the code-5 message to NRS3. Thus NRS3 is authenticated with the MR-BS in a distributed manner, and it later maintains its keys locally as described in the previous sections. The authentication procedure of NRSn with the MR-BS is illustrated in Figure 3.

          Figure 3: Authentication of NRSn with NRS1/MR-BS
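    The hop-by-hop forwarding decision just described, in which an intermediate NRS inspects the RAR flag and the PKM code of each message and either caches, forwards or consumes it, is sketched below. The codes 4, 5 and 8 follow the text above, but the dictionary-based message format and the dispatch logic are our own interpretation for illustration, not the standard's processing rules.

# Illustrative sketch of the intermediate-relay dispatch based on RAR and code.
AUTH_REQ, KEY_REQ, KEY_REPLY = 4, 5, 8   # PKM codes as used in the text

class IntermediateNRS:
    def __init__(self, name, uplink, downlink):
        self.name = name
        self.uplink = uplink        # next hop towards the MR-BS
        self.downlink = downlink    # next hop towards the requesting node
        self.kst = {}               # cached AUTH-REQs / key material

    def handle_uplink(self, msg):
        """Message travelling towards the MR-BS (e.g. a relayed AUTH-REQ)."""
        if msg.get("RAR") == 1 and msg["code"] == AUTH_REQ:
            self.kst[msg["origin"]] = msg          # remember who asked
        msg["RAR"] = 1                              # keep signalling the relayed request
        return {"forward_to": self.uplink, "msg": msg}

    def handle_downlink(self, msg):
        """Response travelling back from the MR-BS."""
        if msg["code"] == KEY_REPLY:
            return {"consume": self.name, "msg": msg}          # refresh own keys
        if msg["code"] == KEY_REQ:                             # code 5: pass along
            return {"forward_to": self.downlink, "msg": msg}
        return {"drop": True}

# Example: NRS2 relays NRS3's AUTH-REQ up and splits the reply coming down.
nrs2 = IntermediateNRS("NRS2", uplink="NRS1", downlink="NRS3")
print(nrs2.handle_uplink({"RAR": 1, "code": AUTH_REQ, "origin": "NRS3"}))
print(nrs2.handle_downlink({"code": KEY_REPLY}))
print(nrs2.handle_downlink({"code": KEY_REQ}))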
D. Localized and Distributed Key Management in the Relay-Based IEEE 802.16 Network

    We assume that all the NRSs are authenticated and maintain their KSTs. Inside the KST there are two portions: an updated stack and a non-updated stack. All the active and valid AKs, TEKs and SAID lists reside in the updated stack, while all the expired and revoked keys reside in the non-updated stack. If a new NRS wants to join the network, the serving NRS first looks in the updated stack of its KST. If it cannot find the required information there, it moves to the non-updated stack. If it still cannot find it in the non-updated stack, the serving NRS sends the AUTH-REQ to the MR-BS through the other NRSs, and all other procedures are the same. The localized re-authentication and key maintenance procedure is shown in Figure 4. If it finds the
information in either of the stacks, it validates the node's authenticity, sends the SAID list and AK in an AUTH-REP message, and sends one copy to the MR-BS for its own KST.

       Figure 4: Localized Re-Authentication and Key Maintenance

The MR-BS then validates the node's authenticity; if it is valid, the MR-BS saves it in its KST, otherwise it sends an AUTH-REJECT message in the AUTH-REP. At this point the entire network is performing distributed authentication, as shown in Figure 5, which gives the overall flow of our self-organized re-authentication and key management scheme in a non-transparent relay-based WiMAX network.

       Figure 5: Localized distribution of keys using SEAKS

Instead of re-authenticating and refreshing keys with the MR-BS, which gives rise to authentication and key maintenance overhead, the NRSs create a self-organized community to re-authenticate and refresh keys and thereby avoid delays and overheads. A strong, trustworthy and self-organized environment is created after the successful authentication of all NRSs.
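    The two-part KST lookup described in this subsection, an updated stack for active keys, a non-updated stack for expired or revoked ones, and a fallback to the MR-BS on a miss, can be sketched as below. The class and method names are our own; the paper does not define an API.

# Illustrative sketch of the knowledge shared table (KST) with two stacks.
class KnowledgeSharedTable:
    def __init__(self):
        self.updated = {}       # node_id -> active AK/TEK/SAID-list entry
        self.non_updated = {}   # node_id -> expired or revoked entry

    def lookup(self, node_id):
        if node_id in self.updated:
            return "updated", self.updated[node_id]
        if node_id in self.non_updated:
            return "non-updated", self.non_updated[node_id]
        return "miss", None     # serving NRS must forward the AUTH-REQ to the MR-BS

    def revoke(self, node_id):
        entry = self.updated.pop(node_id, None)
        if entry is not None:
            self.non_updated[node_id] = entry

# Example: local hit, revoked hit, then a miss that would be sent to the MR-BS.
kst = KnowledgeSharedTable()
kst.updated["NRS2"] = {"AK": "AK2", "SAID": [0x2B]}
print(kst.lookup("NRS2"))       # found in the updated stack -> validate locally
kst.revoke("NRS2")
print(kst.lookup("NRS2"))       # found in the non-updated stack
print(kst.lookup("NRS9"))       # miss -> forward AUTH-REQ towards the MR-BS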
         VII. ANALYSIS OF OUR PROPOSED APPROACH

    In our proposed scheme, we use the NRS's manufacturer certificate, its capabilities, a nonce and the list of SAIDs as the sending parameters, and the AK, the lifetime of the AK, its capabilities, a nonce and digital signatures as the receiving parameters. The authentication protocol is illustrated in [3] [4]. We evaluate our scheme in terms of communication cost and of some key vulnerabilities and their countermeasures.

A. Communication Cost
    The communication cost of our proposed scheme mainly comprises re-authentication and key maintenance overheads. The total communication cost of SEAKS can be evaluated in two phases, the AUTH-REQ phase and the AUTH-REP phase. In the AUTH-REQ phase, the source NRS sends its own AUTH-REQ as well as the AUTH-REQs of other NRSs directly, via one hop, to the MR-BS. This type of authentication occurs once for a specific NRS; after authentication, the source NRS is responsible for authenticating the other NRSs that have already obtained their AK/SAID. Within this first phase there is also the issue of refreshing the AK/TEK: all the NRSs/MSs have to send their refresh requests periodically and constantly. According to the standard, the AK/TEK is refreshed by sending the request to the MR-BS over multiple hops using the multihop relays, but in our scheme this is done locally, since the system has become distributed. The communication cost of sending the AUTH-REQ messages together with the AK/TEK refresh requests therefore depends on the average number of hops H between the source and the destination, the number n of NRSs participating in the entire network, the size of an AUTH-REQ message, and the certificate size, which is an important parameter to count because an NRS also combines the other AUTH-REQs with their digital certificates.
    In the AUTH-REP phase, the MR-BS sends its AUTH-REP message to its neighbouring NRS with the AK/SAID; this message is unicast together with the separate AK/SAIDs for the other requesting NRSs. Once the NRS receives an AK/SAID from the MR-BS, it is encrypted with the public key of the requesting NRS; the NRS saves a copy in its local repository and sends it back to the requesting NRS. The requesting NRS then maintains its AK/TEK over a single hop with the serving NRS, which minimizes the authentication and key maintenance overhead; the cost of this phase depends on the same hop count, the number of requesting NRSs and the AUTH-REP message size. The total communication cost is the sum of the costs of the AUTH-REQ and AUTH-REP phases; an illustrative formulation is sketched below.
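    Because only the parameters of the cost expressions are recoverable here, the small sketch below is our own illustrative formulation built from those parameters (average hop count H, number of participating NRSs n, and the AUTH-REQ, certificate and AUTH-REP sizes); the functional form is an assumption and may differ from the authors' exact equations.

# Hypothetical cost model (assumed functional form, not the authors' equations).
def seaks_communication_cost(H, n, s_auth_req, s_cert, s_auth_rep):
    """Return (AUTH-REQ phase cost, AUTH-REP phase cost, total) in bytes."""
    c_req = H * (n + 1) * (s_auth_req + s_cert)   # requests plus piggybacked certificates
    c_rep = H * (n + 1) * s_auth_rep              # unicast replies carrying AK/SAID
    return c_req, c_rep, c_req + c_rep

# Example with arbitrary message sizes (bytes) for 5 NRSs over 2 hops on average.
print(seaks_communication_cost(H=2, n=5, s_auth_req=256, s_cert=1024, s_auth_rep=512))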




B. Evaluation against Denial-of-Service and Replay Attacks
    The denial-of-service attack considered here exists only in the pre-authentication procedures. DoS and replay attacks were explained briefly in the previous sections. The proposed scheme works well in a multihop non-transparent relay-based WiMAX network: since a number of NRSs participate in the environment, it becomes fully self-organized after a stabilization period. Suppose an adversary impersonates an NRS and sends an AUTH-REQ message to the MR-BS. The MR-BS validates its authenticity, generates an AK, copies the certificate into its KST and sends the AUTH-RSP to the adversary. Since the adversary does not have the private key of the NRS, it cannot decrypt the AK; it can only replay this message several times. Whenever the NRS sends an AUTH-REQ message to the MR-BS, it sets a timeout value, and if the timeout expires it sends the request again. In this case the timeout has expired with no response from the MR-BS, so the NRS searches again for a UL-MAP; we assume it finds another path, say through NRSi, which is inside the coverage of the MR-BS. The NRS sends the AUTH-REQ a second time to NRSi, NRSi forwards it to the MR-BS, the MR-BS again validates the AUTH-REQ, generates the AK and sends the AUTH-RSP to NRSi and consequently to the NRS; the NRS then sends message III to the MR-BS and is thus authenticated. Later, the NRS performs its AK and TEK refreshes with NRSi. Meanwhile, the adversary is still replaying the message multiple times to exhaust the MR-BS. When the MR-BS again receives the AUTH-REQ message from the adversary, it knows that the NRS is already part of the authenticated network and is not expecting any AUTH-REQ from this certificate, so if it receives any AUTH-REQ message carrying the same certificate it simply ignores it. After a certain stabilization time, the certificate of the NRS is shared with all the participating nodes, which gives maximum protection against DoS and replay attacks. For the adversary, transmitting a one-way message several times without any response requires extra power, so after some time the adversary stops sending the message and the denial-of-service attempt becomes unsuccessful. As mentioned previously, the replay attack comes first and denial of service is its ultimate result, since after several replays the MR-BS denies that particular certificate and thereby denies the legitimate node. Our scheme therefore handles both denial-of-service and replay attacks in a very efficient manner.
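    The countermeasure described above, in which the MR-BS ignores further AUTH-REQs once a certificate is recorded as authenticated in its KST, can be sketched as follows. The data structure and return values are our own illustration, not the standard's or the paper's exact state machine.

# Illustrative sketch: replayed AUTH-REQs for an already-authenticated
# certificate are silently ignored, so they cannot exhaust the MR-BS or
# lock out the legitimate node.
class MRBS:
    def __init__(self):
        self.kst = set()            # certificates of already-authenticated nodes

    def on_auth_req(self, certificate, certificate_is_valid):
        if certificate in self.kst:
            return "ignore"         # duplicate/replayed request: no state change
        if not certificate_is_valid:
            return "reject"
        self.kst.add(certificate)
        return "auth-rsp"           # issue AK/SAID to the requester

# Example: the first request succeeds, replays of the same certificate are ignored.
mrbs = MRBS()
print(mrbs.on_auth_req("cert-NRS1", certificate_is_valid=True))   # auth-rsp
print(mrbs.on_auth_req("cert-NRS1", certificate_is_valid=True))   # ignore
print(mrbs.on_auth_req("cert-NRS1", certificate_is_valid=True))   # ignore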
                                                                          and AAA server but also provides efficient way to
C. Evaluation against the Interleaving Attack
    To avoid the man-in-the-middle attack, mutual authentication was provided, adding an additional message to give NRS acknowledgement and achieve X.509 three-way authentication; however, this enhanced version is also vulnerable to an interleaving attack, which was explained in the previous section. The proposed scheme works well in a multihop non-transparent relay-based WiMAX network: since a number of NRSs participate in the environment, it becomes fully self-organized after a stabilization period. For interleaving attacks to be attempted, we assume that PKMv2 is used to authenticate the participating NRSs and MSs. Suppose an adversary impersonates some NRSj and sends an AUTH-REQ message to the MR-BS; the MR-BS validates it and generates an AK for the adversary. The adversary cannot decrypt the AK, because it does not have the private key, so it needs to force NRSj to send the AUTH-REQ once again. Previously, when NRSj sent its AUTH-REQ, it set a timeout value, but within that time it did not receive any authentication message from the MR-BS, so it assumes the link is broken or that some other technical error occurred. NRSj then scans for another UL-MAP, finds, say, NRSi, and sends its AUTH-REQ towards the MR-BS through it. The MR-BS rejects the legitimate request of NRSj because its certificate is already present in the MR-BS's KST. NRSj receives the AUTH-REJECT message from NRSi, sets its timeout value again and resends the AUTH-REQ via NRSi. There are two main reasons for it to adopt the same path to authenticate itself: firstly, NRSj at least gets a response over this link, and secondly, it assumes the rejection was due to some technical error. Meanwhile, according to SEAKS, once the specific timeout expires and the MR-BS has not received a response from the adversary, it deletes the certificate of NRSj. After its own timeout, NRSj sends the AUTH-REQ again, is authenticated, and the MR-BS saves its certificate in its KST. By applying SEAKS, and because the AK/SAID is stored in every NRS's repository, because each NRS itself encrypts all the AK/SAID and TEK material for the other NRSs, and because authentication is distributed while re-authentication and key maintenance are localized, a very strong self-organized and trustworthy environment is created; it is therefore practically impossible for interleaving attacks to succeed once SEAKS has reached stability.

            VIII. CONCLUSION AND FUTURE WORK

    In this paper, we presented a self-organized, efficient authentication and key management scheme (SEAKS), a hop-by-hop authentication and key management scheme for non-transparent relay-based WiMAX networks. The scheme is suitable for both fixed and mobile non-transparent relays. We presented our security goals and gave a security analysis of the proposed scheme to evaluate it against those goals. SEAKS provides a hybrid authentication scheme with distributed authentication and localized re-authentication and key maintenance. This technique not only helps to minimize the overall authentication overhead on the MR-BS and the AAA server but also provides an efficient way to countermeasure the vulnerabilities. In this scheme, an NRS first needs to authenticate itself with the MR-BS before accepting AUTH-REQs from other NRSs/MSs. Once authenticated and in possession of the required AK/SAID, it runs its AK/TEK authorization state machines to keep these keys refreshed. After authentication it can start broadcasting the UL-MAP to accept AUTH-REQs; upon receiving an AUTH-REQ it sends it to the MR-BS for validation, the MR-BS authenticates it and sends back the AK/SAID for that particular request, and the NRS encrypts it with the public key of the requesting NRS and sends it back. The requesting NRS then starts its authorization state machines to refresh these keys with the serving NRS, and at all times
all the NRSs and the MR-BS maintain their local repositories. If an NRS cannot refresh its keys within the given time due to unforeseen circumstances, then according to the standard it has to re-authenticate with the MR-BS; in our scheme, however, it sends the request to its serving NRS, which looks into its local repository and, if the entry is found, sends back the AK/SAID itself; otherwise it sends the AUTH-REQ to the MR-BS for authentication and validation and treats the requester as a new NRS/MS.
    In our future work, we will continue to implement a prototype of SEAKS, extend the scale of the experiments, and allow the emergence of other key management techniques, in order to arrive at a highly efficient and secure key management scheme in terms of throughput, complexity and authentication overhead.

                        ACKNOWLEDGEMENT
The authors would like to thank all of the WiMAX research group, and express their sincerest gratitude to the Ministry of Higher Education Malaysia, under the Malaysian Technical Cooperation Programme (MTCP), for its full support, and to the Research Management Center (RMC), Universiti Teknologi Malaysia (UTM) and MIMOS BERHAD for their partial contributions.

                            REFERENCES
[1]   IEEE Std 802.16-2009: Air Interface for Broadband Wireless Access Systems, 2009.
[2]   IEEE Std 802.16j-2009: Amendment to IEEE Std 802.16-2009.
[3]   S. Xu and C.-T. Huang, "Attacks on PKM protocols of IEEE 802.16 and its later versions," in International Symposium on Wireless Communication Systems (ISWCS), 2006.
[4]   S. Xu, M. Matthews and C.-T. Huang, "Security issues in privacy and key management protocols of IEEE 802.16," in ACM SE'06, Florida, USA, March 2006.
[5]   S. W. Peters and R. W. Heath, Jr., "The future of WiMAX: multihop relaying with IEEE 802.16j," IEEE Communications Magazine, January 2009.
[6]   M. Okuda, C. Zhu and D. Viorel, "Multihop relay extension for WiMAX networks - overview and benefits of the IEEE 802.16j standard," FUJITSU Sci. Tech. J., vol. 44, no. 3, pp. 292-302, July 2008.
[7]   A. S. Khan et al., "Efficient distributed authentication key scheme for multi-hop relay in IEEE 802.16j network," International Journal of Engineering Science and Technology (IJEST), vol. 2, no. 6, pp. 2192-2199, 2010.
[8]   T. Shon and W. Choi, "An analysis of mobile WiMAX security: vulnerabilities and solutions," First International Conference, NBiS 2007, LNCS, vol. 4650, pp. 88-97,
[13]  D. Johnston and J. Walker, "Overview of IEEE 802.16 security," IEEE Security and Privacy Magazine, vol. 2, no. 3, pp. 40-48, May-June 2004.
[14]  A. S. Khan, N. Fisal, N. N. M. I. Ma'arof, F. E. I. Khalifa and M. Abbas, "Security issues and modified version of PKM protocol in non-transparent multihop relay in IEEE 802.16j networks," International Review on Computers and Software, vol. 6, no. 1, pp. 104-109, January 2011.
[15]  X. Dai and X. Xie, "Analysis and research of security mechanism in IEEE 802.16j," Guizhou Normal University, Guiyang, China, 2010.
[16]  V. K. Gondi, "Security and mobility architecture for isolated wireless networks using WiMAX as an infrastructure," Network and Multimedia Systems Group, France, 2009.

                          AUTHORS PROFILE
ADNAN SHAHID KHAN received his B.Sc. (Hons) in Computer Science from the University of the Punjab, Lahore, Pakistan, in 2005, and his Master of Engineering degree in Electrical (Electronics & Telecommunication) Engineering from Universiti Teknologi Malaysia, Skudai, Malaysia, in 2008. He is currently pursuing his PhD in Electrical Engineering at the Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Skudai, 81310, Johor Bahru, under the supervision of Prof. Dr. Norsheila Fisal. His current research interests are in the area of security issues in the IEEE 802.16 protocol and cognitive radio networks. He has been a student member of the IEEE since 2007.

NORSHEILA FISAL received her B.Sc. in Electronic Communication from the University of Salford, Manchester, U.K., in 1984, and her M.Sc. degree in Telecommunication Technology and PhD degree in Data Communication from the University of Aston, Birmingham, U.K., in 1986 and 1993, respectively. Currently, she is a Professor at the Faculty of Electrical Engineering, Universiti Teknologi Malaysia, and Director of the Telematic Research Group (TRG) Laboratory. Her current research interests are in wireless sensor networks, wireless mesh networks and cognitive radio networks.

MAZLAN ABBAS received his B.Eng. in Electrical Engineering from Universiti Teknologi Malaysia in 1984, his M.Sc. in Telematics from Essex University in 1986, and his PhD degree in Telecommunications from Universiti Teknologi Malaysia in 1992. Currently, he is the Chief Research Director of the Wireless Communications Cluster of MIMOS Berhad and also an Adjunct Professor at the Faculty of Electrical Engineering, Universiti Teknologi Malaysia.
         2007.
                                                                                         His current research interests are in WiMAX, LTE, IMS and IPv6.
[9]      Y.Lee, H.K.Lee, G.Y.Lee, H.J.Kim and C.K.Leong, “Design of
         Hybrid Authentication Scheme and Key Distribution for Mobile
         Multi-Hop Relay in IEEE 802.16j”, EATIS’09, June 3-5,
                                                                                                            MAZLINA ESA received her BEE (Hons.), MSc in
         Prague,CZ, 2009.
                                                                                                           RF Engg., and PhD in Electrical and Electronics Engg.
[10]     Huang C, Chang J. Responding to security issues in Wimax                                          from Universiti Teknologi Malaysia, Univ. of Bradford
         networks. IT Professional 2008; 10(5):15-21.                                                      (UK), and Univ. of Birmingham (UK), in 1984, 1987,
[11]     Adnan Shahid Khan, Norsheila Fisal, Sharifah Kamilah, Rozeha A                                    and 1996, respectively. She is currently a Professor
         Rashid and M Abbas. Article: Secure and Efficient Multicast                                       with the Faculty of Electrical Engg., UTM. Her
         Rekeying Approach For Non-Transparent Relay-Based IEEE                                            research interests include RF/microwave and antenna
         802.16     Networks. International     Journal   of     Computer                                  engineering, THz/PHz technology, wireless power
         Applications16(4):1–7, February 2011. Published by Foundation of                                  transmission, cognitive radio, and qualitative research.
         Computer Science                                                                She was the IEEE Malaysia AP/MTT/EMC Chapter Chair from 2007 to
                                                                                         Jan 2011, and currently the Counselor of IEEE UTM Student Branch.
[12]     "Draft Standard for Local and Metropolitan Area Networks,                       She is an active Senior Member of IEEE.
         Part16: Air Interface for Broadband Wireless Access Systems",
         IEEE P802.16 Rev2/D9, January 2009




SHARIFAH KAMILAH BNT SYED YUSOF received her B.Sc. (cum laude) in Electrical Engineering from George Washington University, USA, in 1988, and obtained her MEE and Ph.D. in 1994 and 2006, respectively, from Universiti Teknologi Malaysia. She is currently an Associate Professor with the Department of Radio Communication, Faculty of Electrical Engineering, Universiti Teknologi Malaysia. Her research interests include OFDMA-based systems, software defined radio and cognitive radio.



SHARIFAH HAFIZAH SYED ARIFFIN received her B.Eng. (Hons) from the University of North London in 1987, and obtained her M.E.E. and Ph.D. in 2001 and 2006 from Universiti Teknologi Malaysia and Queen Mary University of London, respectively. She is currently a Senior Lecturer with the Faculty of Electrical Engineering, Universiti Teknologi Malaysia. Her current research interests are in wireless sensor networks, IPv6, handoff management in WiMAX, 6LoWPAN, and network and mobile computing systems.








      A digital image encryption algorithm based on
      chaotic logistic maps using a fuzzy controller
Mouad Hamri (1), Jilali Mikram (2), Fouad Zinoun (3)
(1),(2) Mathematics and Computer Science Department, Science University of Rabat-Agdal, 4 Avenue Ibn Battouta, Rabat, Morocco
(3) Economical Sciences and Management Department, University of Meknes, Morocco
(1) hamri.mouad@gmail.com   (2) mikram@fsr.ac.ma   (3) fouad.zinoun@gmail.com




Abstract—In this paper we present a digital image encryption algorithm based on chaotic logistic maps and fuzzy logic (FL-CM-EA). Many papers have been published in recent years on encryption algorithms using chaotic dynamical systems, thanks to the set of very interesting properties these systems guarantee: high sensitivity to initial conditions, ergodicity, simplicity of implementation, etc., which can be used to conceive efficient cryptosystems.
The main idea of this paper is the use of a set of fuzzy logic rules to control the next iteration of our proposed iterative mechanism, which is built on a set of logistic maps.
An introduction to chaotic dynamical systems and the logistic map is given, followed by an introduction to fuzzy logic. A complete specification of the proposed algorithm is then presented, together with a set of security analysis tests that show the efficiency and the high security level of the algorithm.

   Keywords: cryptography, logistic map, fuzzy logic, image encryption, security analysis, dynamical systems, chaos theory.

                        I. INTRODUCTION
   Today, community network applications on the internet are used by billions of people around the world, and this usage rate is growing continuously. This implies that more and more information is being transmitted over the internet. The transmitted data covers all kinds of formats: text, audio, video, image and many other special formats.
Images are widely used in almost all of our daily communications, including military communications, bank transactions and many other exchanges where security is truly mandatory. This leads to the conclusion that image security is a very important topic in our internet communication world.
Many algorithms have been proposed in recent years to solve these security issues, using classical encryption algorithms such as RSA or El-Gamal or using elliptic curves. The problem with these algorithms is that their security relies on the fact that it is not feasible with today's machines to factorize a large number or to solve the discrete logarithm problem, but this may no longer be true in the near future, especially with the recent advances in machine performance and the quantum machines that may become a reality soon.
Chaotic dynamical systems are a very important tool for building efficient and secure cryptosystems, thanks to their high sensitivity to initial conditions, their ergodicity property, their simplicity of implementation and also their very attractive execution time, which makes real-time applications possible.
In this paper we propose an encryption algorithm using not just one logistic map but a set of logistic maps, where the iterations are defined using a set of fuzzy logic rules.
The rest of this paper is organized as follows: section 2 introduces chaotic dynamical systems and the logistic map, section 3 introduces fuzzy logic, section 4 presents the proposed algorithm with some results, section 5 presents the security analysis tests and finally section 6 concludes the paper.

         II. CHAOTIC DYNAMICAL SYSTEMS AND LOGISTIC MAP
   Roughly speaking, a dynamical system ([1-4],[11-12]) consists of two ingredients: a rule, described by a set of equations, which specifies how the system evolves, and an initial condition from which the system starts. It can also be defined as a system of equations describing the evolution of a mathematical model, where the model is fully determined by a set of variables.
The logistic map (which will be used in our algorithm) is a very famous discrete dynamical system used in much of the research dealing with dynamical systems and chaos. It is defined on the set [0, 1] and can be written as:

                        x_{n+1} = r x_n (1 - x_n)

where x_0 represents the initial condition, n ∈ N and r is a positive real number.
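As a side illustration (not part of the original paper), the following short Python sketch iterates the logistic map and shows how quickly two nearby initial conditions diverge; the parameter r = 3.95 is one of the values used later in the algorithm, and the perturbation size is an arbitrary choice.

    # Minimal sketch: iterate x_{n+1} = r*x_n*(1 - x_n) and watch two orbits
    # that start 1e-9 apart separate (sensitivity to initial conditions).
    def logistic_orbit(x0, r, n):
        """Return the first n iterates of the logistic map starting from x0."""
        orbit = [x0]
        for _ in range(n - 1):
            x0 = r * x0 * (1.0 - x0)
            orbit.append(x0)
        return orbit

    if __name__ == "__main__":
        r = 3.95
        a = logistic_orbit(0.300000000, r, 60)
        b = logistic_orbit(0.300000001, r, 60)   # perturbed initial condition
        for n in (0, 10, 20, 40, 59):
            print(f"n={n:2d}  |x_n - y_n| = {abs(a[n] - b[n]):.6f}")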






In reality, there is no universal definition of a chaotic dynamical system. The following definition characterizes a chaotic dynamical system using three ingredients that almost everyone would agree on.

Chaotic dynamical system: let f : X → Y be a function (X, Y ⊆ R). The dynamical system ẋ = f(x) is said to be chaotic if the following properties are satisfied:
1) Sensitive dependence on initial conditions: ∀β > 0, ∃ε > 0, there exist a point y_0 ∈ X and k > 0 such that |x_0 − y_0| < β ⇒ |x_k − y_k| > ε.
2) Density of periodic orbits: the set of periodic orbits {x_0 ∈ X, ∃k > 0, x_k = x_0} is dense in X.
3) Determinism: the system has no random or noisy inputs or parameters.

The definition above applies to both discrete and continuous dynamical systems.
The logistic map is a chaotic dynamical system and exhibits a very high sensitivity to initial conditions for r between approximately 3.57 and 4.
Fig. 1 shows the bifurcation diagram of the logistic map.

            Fig. 1.   Bifurcation diagram of the logistic map

                          III. FUZZY LOGIC
   In the 1960s, Lotfi Zadeh invented fuzzy logic [16,17], which combines the concepts of crisp logic and Łukasiewicz sets by defining graded membership. One of Zadeh's main insights was that mathematics can be used to link language and human intelligence. Many concepts are better defined by words than by mathematics, and fuzzy logic and its expression in fuzzy sets provide a discipline that can construct better models of reality.
Fuzzy logic is a form of many-valued logic, in contrast to crisp logic, which is a two-valued (binary) logic. Fuzzy logic involves linguistic variables with a truth value in the interval [0, 1]; it also involves fuzzy sets and fuzzy inference.
Every fuzzy model uses fuzzy rules, which are linguistic if-then statements. These rules link the input variables to the output variables; they simply define the control logic. Two major types of fuzzy rules exist: Mamdani fuzzy rules and Takagi-Sugeno (TS) fuzzy rules. An example of a Mamdani fuzzy rule for a fuzzy system with two inputs and two outputs can be written as follows:

   IF x1 is in S1 and x2 is in S1 THEN y1 is in S3 and y2 is in S4

Unlike Mamdani fuzzy rules, TS fuzzy rules define the output variables as functions of the input variables. Taking the same example as before, a TS fuzzy rule can be written as follows:

   IF x1 is in S1 and x2 is in S1 THEN y1 = f(x1, x2) and y2 = g(x1, x2)

where f and g are two real functions of any type.
In general, the steps followed to construct a fuzzy controller are:
   1) Identifying and naming the fuzzy inputs and outputs.
   2) Creating the fuzzy membership functions.
   3) Constructing the fuzzy rules (Mamdani or TS rules).
   4) Defining the defuzzification process (converting fuzzy outputs to crisp outputs).
Fig. 2 shows an example of a possible fuzzy controller.

                   Fig. 2.   Diagram of a fuzzy controller

   In the next section, we present our encryption algorithm and describe all the parameters of the fuzzy controller that it uses.

                         IV. THE ALGORITHM
   The proposed algorithm (FL-CM-EA) takes as inputs a plain-image P and a 128-bit key K, and generates as output the cipher-image C.
The main idea of the algorithm is to use not a single logistic map to generate the encryption (decryption) key, but what we call the "fuzzy-logistic-map", which is also a function from the interval [0, 1] to itself, built from three fuzzy rules and three logistic maps (as many logistic maps and fuzzy rules as desired could be used, but in this paper we use three).






If we call the three logistic maps LM1, LM2 and LM3, the fuzzy rules are as follows:
   1) IF x IS M1 THEN FLM(x) = LM1(x) = r1 x(1 − x)
   2) IF x IS M2 THEN FLM(x) = LM2(x) = r2 x(1 − x)
   3) IF x IS M3 THEN FLM(x) = LM3(x) = r3 x(1 − x)
For the rest of this paper, we use the following values: r1 = 3.95, r2 = 3.9 and r3 = 3.8.
The membership functions f1, f2 and f3 of the fuzzy sets M1, M2 and M3 are defined as follows:

   f1(x) = −2x + 1  if 0 ≤ x ≤ 1/2,    f1(x) = 0        if 1/2 ≤ x ≤ 1
   f2(x) = 2x       if 0 ≤ x ≤ 1/2,    f2(x) = −2x + 2  if 1/2 ≤ x ≤ 1
   f3(x) = 0        if 0 ≤ x ≤ 1/2,    f3(x) = 2x − 1   if 1/2 ≤ x ≤ 1

For the defuzzification process, we use a center average defuzzifier, and the crisp value of FLM(x) is:

   FLM(x) = ( Σ_{i=1}^{3} μ_i LM_i(x) ) / ( Σ_{i=1}^{3} μ_i )

where μ_i = f_i(x) represents the degree of membership of x in M_i.
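The fuzzy-logistic-map defined above is easy to transcribe. The following Python sketch (ours, for illustration; the variable names are not from the paper) evaluates the three membership functions and the centre-average defuzzifier with the r_i values chosen above.

    # Sketch of FLM(x): three logistic maps LM_i weighted by the membership
    # degrees f1, f2, f3 and combined with a centre-average defuzzifier.
    R = (3.95, 3.9, 3.8)   # r1, r2, r3 as chosen in the paper

    def f1(x): return -2 * x + 1 if x <= 0.5 else 0.0
    def f2(x): return 2 * x if x <= 0.5 else -2 * x + 2
    def f3(x): return 0.0 if x <= 0.5 else 2 * x - 1

    def flm(x):
        """Centre-average defuzzified value of the three fuzzy rules."""
        mu = (f1(x), f2(x), f3(x))                 # membership degrees
        lm = tuple(r * x * (1 - x) for r in R)     # LM_i(x)
        den = sum(mu)                              # equals 1 for x in [0, 1]
        return sum(m * v for m, v in zip(mu, lm)) / den if den else 0.0

    print(flm(0.4217))   # one iteration of the fuzzy-logistic-map generator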
Before presenting the algorithm, the following notation is introduced:

   P                plain-image
   K                128-bit key
   C                cipher-image
   P_i              i-th pixel of P
   P_i(R, G or B)   red, green or blue value of pixel i
   FLM_i            fuzzy-logistic-map value after i iterations
   L_i(x_0, N)      value of the logistic map i starting from x_0 after N iterations
   F                a map from the set of 32-byte numbers to the interval [0, 1]

The encryption algorithm can be summarized as follows (C' denotes the intermediate image produced before the final pass):

  • Begin
  • Step 1: generate an initial condition x_0 ∈ [0, 1]:
      x_0 = F(K).
  • Step 2: generate a key vector KF of size n, where n is the number of pixels of P, using the function getKey:
      KF = getKey(x_0).
    The function getKey is defined as follows:
      Run the FLM generator and stop after ITER iterations (the initial value is x_0 and ITER is an iteration parameter).
      For i in [1, n]:
        – KF_i = ( Σ_{l=1}^{ITER} FLM_{i+l}^2 ) × 256 mod 256.
        – Run the FLM generator and stop after ITER.
  • Step 3: using the generated key, generate the intermediate image C' as follows:
      – C'_0(R) = (P_0(R) + KF_0) mod 256.
      – C'_0(G) = (P_0(G) + KF_0) mod 256.
      – C'_0(B) = (P_0(B) + KF_0) mod 256.
    and, for i in [2, n]:
      – C'_i(R) = (P_i(R) + KF_i + C'_{i−1}(R)) mod 256.
      – C'_i(G) = (P_i(G) + KF_i + C'_{i−1}(G)) mod 256.
      – C'_i(B) = (P_i(B) + KF_i + C'_{i−1}(B)) mod 256.
  • Step 4: reverse the data of the image C':
    For i in [1, n]:
      – C'_i = C'_{n−i+1}
  • Step 5: finally, construct the cipher-image C by repeating Step 3 on the image C':
      – C_0(R) = (C'_0(R) + KF_0) mod 256.
      – C_0(G) = (C'_0(G) + KF_0) mod 256.
      – C_0(B) = (C'_0(B) + KF_0) mod 256.
    and, for i in [2, n]:
      – C_i(R) = (C'_i(R) + KF_i + C_{i−1}(R)) mod 256.
      – C_i(G) = (C'_i(G) + KF_i + C_{i−1}(G)) mod 256.
      – C_i(B) = (C'_i(B) + KF_i + C_{i−1}(B)) mod 256.
  • End

The decryption algorithm is identical to the encryption algorithm: it receives as inputs the cipher-image C and the 128-bit key K (the same one used for encryption) and returns as output the plain-image P. The only difference between the two algorithms is in Steps 3 and 5, which for decryption are defined as follows.
  • Step 3:
      – C'_0(R) = (C_0(R) − KF_0) mod 256.
      – C'_0(G) = (C_0(G) − KF_0) mod 256.
      – C'_0(B) = (C_0(B) − KF_0) mod 256.
    and, for i in [2, n]:
      – C'_i(R) = (C_i(R) − KF_i − C_{i−1}(R)) mod 256.
      – C'_i(G) = (C_i(G) − KF_i − C_{i−1}(G)) mod 256.
      – C'_i(B) = (C_i(B) − KF_i − C_{i−1}(B)) mod 256.
  • Step 5:
      – P_0(R) = (C'_0(R) − KF_0) mod 256.
      – P_0(G) = (C'_0(G) − KF_0) mod 256.
      – P_0(B) = (C'_0(B) − KF_0) mod 256.
    and, for i in [2, n]:
      – P_i(R) = (C'_i(R) − KF_i − C'_{i−1}(R)) mod 256.
      – P_i(G) = (C'_i(G) − KF_i − C'_{i−1}(G)) mod 256.
      – P_i(B) = (C'_i(B) − KF_i − C'_{i−1}(B)) mod 256.
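To make the chaining in Steps 3-5 and their inverses concrete, here is a compact Python sketch of the per-channel operations on a flattened pixel array. It assumes the key stream KF has already been produced by the generator of Step 2; the function names and the 0-based indexing are ours, for illustration only.

    # Chained pixel transform of Steps 3-5 (encryption) and its inverse
    # (decryption), applied to one colour channel given as a flat list of
    # integers in [0, 255]. 'kf' is the key stream from Step 2.
    def forward_pass(p, kf):
        """c[0] = p[0] + kf[0]; c[i] = p[i] + kf[i] + c[i-1]  (mod 256)."""
        c = [0] * len(p)
        c[0] = (p[0] + kf[0]) % 256
        for i in range(1, len(p)):
            c[i] = (p[i] + kf[i] + c[i - 1]) % 256
        return c

    def backward_pass(c, kf):
        """Inverse of forward_pass (decryption Steps 3 and 5)."""
        p = [0] * len(c)
        p[0] = (c[0] - kf[0]) % 256
        for i in range(1, len(c)):
            p[i] = (c[i] - kf[i] - c[i - 1]) % 256
        return p

    def encrypt_channel(p, kf):
        c = forward_pass(p, kf)      # Step 3
        c.reverse()                  # Step 4: reverse the intermediate image
        return forward_pass(c, kf)   # Step 5

    def decrypt_channel(c, kf):
        c = backward_pass(c, kf)     # undo Step 5
        c.reverse()                  # undo Step 4
        return backward_pass(c, kf)  # undo Step 3

    # Round-trip check on toy data
    p  = [12, 200, 7, 99, 255, 0, 31]
    kf = [37, 250, 14, 3, 128, 77, 201]
    assert decrypt_channel(encrypt_channel(p, kf), kf) == p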






In the next section, we present the security analysis tests performed on our algorithm.

                       V. SECURITY ANALYSIS
   In this section we discuss the security analysis of our algorithm: key space analysis, sensitivity analysis (with respect to both the key and the plain-image) and finally statistical analysis, since any robust encryption algorithm should resist these attacks.
The computations were done on a PC with the following characteristics: 1.8 GHz Core(TM) 2 Duo, 1.00 GB RAM and a 120 GB hard disk.

A. Key space analysis
   The key used by our algorithm is a 128-bit key, which means that there are 2^128 possible secret keys. With such a large key space, the encryption algorithm can be considered secure against exhaustive key search. In addition, the chaotic system that we use to generate the cipher-image is highly sensitive to its initial condition, which, together with this large key space, ensures that both key and plain-image attacks will not affect the security of the algorithm, as we will see in the next sections.

B. Sensitivity analysis
   An efficient image encryption algorithm should be highly sensitive to the secret key and to the plain-image: a single bit change in the encryption key must lead to a cipher-image very different from the initial cipher-image and, similarly, a single pixel change in the plain-image must lead to a cipher-image very different from the initial cipher-image.
In this section we present the results obtained by changing one bit in the encryption key and one pixel in the plain-image, and we examine the effects on the cipher-image.
Before starting the analysis, we introduce some well-known statistical measures that are used in the following sections.
The first measure is the statistical correlation between two vertically adjacent pixels, two horizontally adjacent pixels and two diagonally adjacent pixels. To compute this measure, we first take a random set of adjacent pixel pairs (vertical, horizontal or diagonal) from the image (say 1000 pairs) and then calculate their correlation using the formulas:

   r_xy = cov(x, y) / ( sqrt(D(x)) sqrt(D(y)) )

where:

   E[x] = (1/N) Σ_{i=1}^{N} x_i
   D[x] = (1/N) Σ_{i=1}^{N} (x_i − E[x])^2
   cov(x, y) = E[(x − E[x])(y − E[y])]

We also use the correlation defined above to compare two images C1 and C2, but the pairs of pixels used are, in this case, (C1(i, j), C2(i, j)).
Other measures are used to compare two images C1 and C2, such as the Number of Pixels Change Rate (NPCR), defined as:

   NPCR = ( Σ_{i,j} D(i, j) / n ) × 100%

where n is the image size (number of pixels), D(i, j) = 0 if C1(i, j) = C2(i, j) and D(i, j) = 1 otherwise.
The Unified Average Changing Intensity (UACI) is used as well; it is defined as:

   UACI = (1/n) Σ_{i,j} ( |C1(i, j) − C2(i, j)| / 255 ) × 100%

Here C1(i, j) and C2(i, j) are the grey-scale values of the pixels of the two images.
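Before moving to the sensitivity results, here is a small NumPy sketch (ours, illustrative only) of how the adjacent-pixel correlation introduced at the beginning of this subsection can be estimated from 1000 randomly sampled pairs.

    # Correlation of adjacent pixel pairs, estimated from randomly chosen
    # pairs as described above. 'img' is an 8-bit grey-scale NumPy array.
    import numpy as np

    def adjacent_correlation(img, pairs=1000, direction="h"):
        rng = np.random.default_rng(0)
        h, w = img.shape
        if direction == "h":          # horizontally adjacent
            r = rng.integers(0, h, pairs); c = rng.integers(0, w - 1, pairs)
            x, y = img[r, c], img[r, c + 1]
        elif direction == "v":        # vertically adjacent
            r = rng.integers(0, h - 1, pairs); c = rng.integers(0, w, pairs)
            x, y = img[r, c], img[r + 1, c]
        else:                         # diagonally adjacent
            r = rng.integers(0, h - 1, pairs); c = rng.integers(0, w - 1, pairs)
            x, y = img[r, c], img[r + 1, c + 1]
        x = x.astype(np.float64); y = y.astype(np.float64)
        return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

    img = np.random.default_rng(1).integers(0, 256, (128, 128), dtype=np.uint8)
    print(adjacent_correlation(img, direction="h"))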






   1) Key sensitivity analysis: Key sensitivity is a required property to ensure the security of any image encryption algorithm against brute-force attacks.
To test the key sensitivity of the proposed algorithm, we randomly generated an encryption key, "0CDA03C2D734F06C48A33ECBE3178632", and encrypted an original image P with this key to obtain the image C1.
We then slightly modified the key by changing its most significant bit, obtaining "8CDA03C2D734F06C48A33ECBE3178632", and encrypted the same original image P with this key to obtain the image C2.
Finally, we did the same one last time, changing the least significant bit to obtain the key "0CDA03C2D734F06C48A33ECBE3178633"; with this last key we encrypted the original image P to obtain the image C3 (see Fig. 3).

   Fig. 3.   From left to right: original image P, C1, C2 and C3

We calculated the correlation, the NPCR and the UACI for each pair of the three cipher-images C1, C2 and C3 (Tables I, II and III).

   Image 1    Image 2    Correlation
     C1         C2        -0.000291
     C1         C3         0.000004
     C2         C3        -0.001109
   TABLE I. CORRELATION BETWEEN THE IMAGES C1, C2 AND C3 OBTAINED BY SLIGHTLY CHANGING THE ENCRYPTION KEY (ONE BIT CHANGE)

   Image 1    Image 2      NPCR
     C1         C2        99.6037%
     C1         C3        99.6077%
     C2         C3        99.6039%
   TABLE II. NPCR OF THE IMAGES C1, C2 AND C3 OBTAINED BY SLIGHTLY CHANGING THE ENCRYPTION KEY (ONE BIT CHANGE)

   Image 1    Image 2      UACI
     C1         C2        49.8139%
     C1         C3        49.7397%
     C2         C3        49.8706%
   TABLE III. UACI OF THE IMAGES C1, C2 AND C3 OBTAINED BY SLIGHTLY CHANGING THE ENCRYPTION KEY (ONE BIT CHANGE)

From the obtained results we can clearly see that a negligible correlation exists among the three images, even though they were produced from the same original image with only slightly different keys. We can also see that the rate of change (NPCR) and the intensity of change (UACI) are very high; we can therefore conclude that our algorithm is very sensitive to a change of the encryption key.

   2) Plain-image sensitivity analysis: After studying the key sensitivity of the proposed image encryption algorithm, we now study its plain-image sensitivity.
The algorithm should also be sensitive to any small change in the plain-image, which means that changing only one pixel of the plain-image should lead to a very different cipher-image. This property guarantees the security of the algorithm against plain-image brute-force attacks.
To test the sensitivity to the plain-image, we take an original image P1 and encrypt it (we call the cipher-image C1); we then randomly change one pixel of the original image and encrypt the resulting image P2 to obtain a new cipher-image C2. We repeat this one last time to obtain a new image P3 and a third cipher-image C3 (the encryption key "0CDA03C2D734F06C48A33ECBE3178632" was used throughout); see Fig. 4, Fig. 5 and Fig. 6.

   Fig. 4.   The image P1 (left) and the image C1 (right)
   Fig. 5.   The image P2 (left) and the image C2 (right)
   Fig. 6.   The image P3 (left) and the image C3 (right)

As in the previous section, we calculate the correlation, the NPCR and the UACI between each pair of the three cipher-images (Tables IV, V and VI).

   Image 1    Image 2    Correlation
     C1         C2        -0.0840
     C1         C3        -0.0192
     C2         C3        -0.0377
   TABLE IV. CORRELATION BETWEEN THE IMAGES C1, C2 AND C3 OBTAINED BY CHANGING ONLY ONE PIXEL OF THE ORIGINAL IMAGE

   Image 1    Image 2      NPCR
     C1         C2        99.6825%
     C1         C3        99.8608%
     C2         C3        99.8608%
   TABLE V. NPCR OF THE IMAGES C1, C2 AND C3 OBTAINED BY CHANGING ONLY ONE PIXEL OF THE ORIGINAL IMAGE

   Image 1    Image 2      UACI
     C1         C2        51.3027%
     C1         C3        53.2280%
     C2         C3        46.2083%
   TABLE VI. UACI OF THE IMAGES C1, C2 AND C3 OBTAINED BY CHANGING ONLY ONE PIXEL OF THE ORIGINAL IMAGE

Again, the obtained results show that a negligible correlation exists between the three cipher-images, while the rate of change (NPCR) and the intensity of change (UACI) are very high. From these results, we can conclude that our algorithm is also very sensitive to a plain-image change.
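For completeness, the measures reported in Tables I-VI can be computed directly from the definitions of Section V-B. The NumPy transcription below is ours (an illustrative sketch, not the authors' code); the two images are assumed to be 8-bit grey-scale arrays of equal size.

    # Correlation coefficient, NPCR and UACI between two images of the same
    # size, following the formulas given in Section V-B.
    import numpy as np

    def correlation(c1, c2):
        x = c1.astype(np.float64).ravel()
        y = c2.astype(np.float64).ravel()
        return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

    def npcr(c1, c2):
        """Percentage of pixel positions whose values differ."""
        return 100.0 * np.mean(c1 != c2)

    def uaci(c1, c2):
        """Unified Average Changing Intensity, in percent."""
        diff = np.abs(c1.astype(np.int16) - c2.astype(np.int16))
        return 100.0 * np.mean(diff / 255.0)

    # Example with two random "cipher-images"
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    b = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    print(correlation(a, b), npcr(a, b), uaci(a, b))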






C. Statistical analysis
   After studying the security of the proposed algorithm against some brute-force attacks (key sensitivity and plain-image sensitivity), we study in this section its security against statistical attacks.
To perform this study, we consider an original image P which is encrypted to obtain a cipher-image C (the encryption key "0CDA03C2D734F06C48A33ECBE3178632" was again used). We then compare their histograms and compute, for each image, the correlation of two vertically adjacent pixels, two horizontally adjacent pixels and two diagonally adjacent pixels.

   1) Histogram comparisons: Fig. 7 and Fig. 8 present the histograms of the images P and C.

              Fig. 7.   The image P and its histogram
              Fig. 8.   The image C and its histogram

We can clearly see that the histogram of the image C is almost uniform and very different from the histogram of the image P. This result confirms that statistical attacks based on histogram analysis cannot give any clue for breaking the algorithm, as all the statistical information of the image P is lost after the encryption.

   2) Adjacent pixels correlation comparisons: The last statistical analysis performed is the adjacent pixels correlation. We calculate for each image (P and C) the three adjacent-pixel correlations: vertical, horizontal and diagonal. Table VII shows the obtained results.

   Image     H Adj Corr     V Adj Corr    D Adj Corr
     P         0.7773         0.8895        0.7507
     C        -0.0055         0.0093       -0.0007
   TABLE VII. HORIZONTALLY, VERTICALLY AND DIAGONALLY ADJACENT PIXELS CORRELATION OF THE IMAGES P AND C

From the obtained results, we can clearly see that the pixels of the plain-image P are strongly correlated, while a negligible correlation exists between those of the cipher-image C. This result shows again that the proposed algorithm can be considered secure against statistical attacks.

                          VI. CONCLUSION
   In this paper we presented a digital image encryption algorithm based on chaotic logistic maps and using a fuzzy controller (FL-CM-EA).
The introduction of the fuzzy controller made it possible to use a set of logistic maps instead of a single logistic map, and therefore increased the randomness of the generated key stream.
We also tested the robustness and efficiency of the proposed algorithm by performing a set of security analyses: key space analysis, key sensitivity and plain-image sensitivity analysis, and statistical analyses such as the histogram and adjacent pixels correlation analysis. All the results demonstrated the high level of security of the proposed algorithm.

                            REFERENCES
 [1] V.I. Arnold, Ordinary Differential Equations (Springer, Berlin, 1977, 1992).
 [2] Huaguang Zhang, Derong Liu, Zhiliang Wang, Controlling Chaos: Suppression, Synchronization and Chaotification (Springer, London, 2009).
 [3] James D. Meiss, Differential Dynamical Systems (SIAM, 2007).
 [4] V.I. Arnold, Geometrical Methods in the Theory of Differential Equations (Springer, Berlin, 1988).
 [5] V.I. Arnold and D.V. Anosov, Dynamical Systems I (Springer, Berlin, 1988).
 [6] Yu. A. Mitropolsky and A.K. Lopatin, Nonlinear Mechanics, Groups and Symmetry (Kluwer, Dordrecht, 1995).
 [7] Yu. N. Bibikov, Local Theory of Analytic Ordinary Differential Equations, Lecture Notes in Mathematics (Springer, Berlin, 1979).
 [8] A.D. Bruno, Local Methods in Nonlinear Differential Equations (Springer, Berlin, 1989).
 [9] V.I. Arnold, Mathematical Methods of Classical Mechanics (2nd edition, Springer, 1989).
[10] R. Bellman, Stability Theory of Differential Equations (MGH, 1953).
[11] R.C. Schmitt and K. Thompson, Nonlinear Analysis and Differential Equations, An Introduction (Aug 2000).
[12] P. Hartman, Ordinary Differential Equations (John Wiley and Sons, 1964).
[13] George D. Birkhoff, Dynamical Systems (American Mathematical Society, 1991).
[14] Floriane Anstett, Les systemes dynamiques chaotiques pour le chiffrement : synthese et cryptanalyse (These), Universite Henri Poincare - Nancy 1.
[15] A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography (CRC Press, 1996).
[16] F. Martin McNeill, Ellen Thro, Fuzzy Logic: A Practical Approach (AP Professional, 1994).
[17] L. Zadeh, Fuzzy Sets, Information and Control (1965).
[18] Kamyar Mehran, Takagi-Sugeno Fuzzy Modeling for Process Control (Newcastle University, 2008).
[19] Hossam El-din H. Ahmed, Hamdy M. Kalash, and Ossam S. Farag Allah, An Efficient Chaos-Based Feedback Stream Cipher (ECBFSC) for Image Cryptosystems (SITIS 2006).
[20] Mouad Hamri, Jilali Mikram and Fouad Zinoun, Chaotic Hash Function Based on MD5 and SHA-1 Hash Algorithms (IJCSIS Vol. 8 No. 9, Dec 2010).





   Performance Analysis of Connection Admission
  Control Scheme in IEEE 802.16 OFDMA Networks
Abdelali EL BOUCHTI, Said EL KAFHALI and Abdelkrim HAQIQ

Computer, Networks, Mobility and Modeling Laboratory
e-NGN Research Group, Africa and Middle East
FST, Hassan 1st University, Settat, Morocco
Emails: {a.elbouchti, kafhalisaid, ahaqiq}@gmail.com


Abstract—IEEE 802.16 OFDMA (Orthogonal Frequency Division Multiple Access) technology has emerged as a promising technology for broadband access in a Wireless Metropolitan Area Network (WMAN) environment. In this paper, we address the problem of queueing-theoretic performance modeling and analysis of OFDMA-based broadband wireless networks. We consider a single-cell IEEE 802.16 environment in which the base station allocates subchannels to the subscriber stations in its coverage area. The subchannels allocated to a subscriber station are shared by multiple connections at that subscriber station. To ensure the Quality of Service (QoS) performances, a Connection Admission Control (CAC) scheme is considered at a subscriber station. A queueing analytical framework for these admission control schemes is presented, considering OFDMA-based transmission at the physical layer. Then, based on the queueing model, both the connection-level and the packet-level performances are studied and compared with their analogues in the case without CAC. The connection arrival is modeled by a Poisson process and the packet arrival for a connection by a two-state Markov Modulated Poisson Process (MMPP). We determine analytically and numerically different performance parameters, such as connection blocking probability, average number of ongoing connections, average queue length, packet dropping probability, queue throughput and average packet delay.

   Keywords: WiMAX, OFDMA, MMPP, Queueing Theory, Performance Parameters.

                        I. INTRODUCTION
   The evolution of the IEEE 802.16 standard [14] has spurred tremendous interest from network operators seeking to deploy high-performance, cost-effective broadband wireless networks. With the aid of the Worldwide Interoperability for Microwave Access (WiMAX) organization [1], several commercial implementations of WiMAX cellular networks have been launched, based on OFDMA for non-line-of-sight applications. IEEE 802.16/WiMAX [2] can offer a high data rate, low latency, advanced security, quality of service (QoS), and low-cost deployment.
   OFDMA is a promising wireless access technology for next-generation broadband packet networks. With OFDMA, which is based on orthogonal frequency division multiplexing (OFDM), the wireless access performance can be substantially improved by transmitting data via multiple parallel channels, and it is also robust to inter-symbol interference and frequency-selective fading. OFDMA has been adopted as the physical layer transmission technology for IEEE 802.16/WiMAX-based broadband wireless networks. Although the IEEE 802.16/WiMAX standard [12] defines the physical layer specifications and the Medium Access Control (MAC) signaling mechanisms, the radio resource management methods, such as those for Connection Admission Control (CAC) and dynamic bandwidth adaptation, are left open. However, to guarantee QoS performances (e.g., call blocking rate, packet loss, and delay), efficient admission control is necessary in a WiMAX network at both the subscriber and the base stations.
   The admission control problem was studied extensively for wired networks (e.g., for ATM networks) and also for traditional cellular wireless systems. The classical approach for CAC in a mobile wireless network is to use the guard channel scheme [5], in which a portion of the wireless resources (e.g., channel bandwidth) is reserved for handoff traffic. A more general CAC scheme, namely the fractional guard scheme, was proposed in [13], in which a handoff call/connection is accepted with a certain probability. To analyze various connection admission control algorithms, analytical models based on continuous-time Markov chains were proposed [4]. However, most of these models dealt only with call/connection-level performances (e.g., new call blocking and handoff call dropping probabilities) for traditional voice-oriented cellular networks. In addition to the connection-level performances, packet-level (i.e., in-connection) performances also need to be considered for data-oriented packet-switched wireless networks such as WiMAX networks.
   An earlier relevant work was reported by the authors in [10]. They considered a similar model in OFDMA-based IEEE 802.16, but they modeled both the connection level and the packet level by two different Poisson processes, and they compared various QoS measures of the CAC schemes. In [15], the authors proposed a Discrete-Time Markov Chain (DTMC) framework based on a Markov Modulated Poisson Process (MMPP) traffic model to analyze VoIP performance. MMPP processes are very suitable for formulating multi-user VoIP traffic and capturing the inter-frame dependency between consecutive frames.




    In this paper, we present a connection admission control scheme for a multi-channel, multi-user OFDMA network, in which the concept of guard channel is used to limit the number of admitted connections to a certain threshold. A queueing analytical model is developed based on a three-dimensional DTMC which captures the system dynamics in terms of the number of connections and the queue status. We assume that the connection arrivals and the packet arrivals for a connection follow a Poisson process and a two-state MMPP process, respectively. Based on this model, various performance parameters such as connection blocking probability, average number of ongoing connections, average queue length, probability of packet dropping due to lack of buffer space, queue throughput, and average queueing delay are obtained. The numerical results reveal the comparative performance characteristics of the CAC algorithm and the scheme without CAC in an OFDMA-based WiMAX network.
    The remainder of this paper is organized as follows. Section II describes the system model, including the objective of the CAC policy. The formulation of the analytical model for connection admission control is presented in Section III. In Section IV we determine analytically different performance parameters. Numerical results are stated in Section V. Finally, Section VI concludes the paper.

                    II.  MODEL DESCRIPTION

  A.  System model
    We consider a single cell in a WiMAX network with a base station and multiple subscriber stations (Figure 1). Each subscriber station serves multiple connections. Admission control is used at each subscriber station to limit the number of ongoing connections through that subscriber station. At each subscriber station, traffic from all uplink connections is aggregated into a single queue [11]. The size of this queue is finite (i.e., L packets), so some packets will be dropped if the queue is full upon their arrival. The OFDMA transmitter at the subscriber station retrieves the head-of-line packet(s) and transmits them to the base station. The base station may allocate different numbers of subchannels to different subscriber stations. For example, a subscriber station with higher priority could be allocated more subchannels.

                    Figure 1. System model

  B.  CAC Policy
    The main objective of a CAC mechanism is to limit the number of ongoing connections/flows so that the QoS performances can be guaranteed for all the ongoing connections. The admission control decision is then made to accept or reject an incoming connection. To ensure the QoS performances of the ongoing connections, the following CAC scheme for subscriber stations is proposed. A threshold C is used to limit the number of ongoing connections. When a new connection arrives, the CAC module checks whether the total number of connections, including the incoming one, is less than or equal to the threshold C. If it is, the new connection is accepted; otherwise it is rejected.

            III.  FORMULATION OF THE ANALYTICAL MODEL

  A.  Formulation of the Queueing Model
    An analytical model based on a DTMC is presented to analyze the system performances at both the connection level and the packet level for the connection admission scheme described before. We assume that the packet arrival for a connection follows a two-state MMPP process [3] which is identical for all connections in the same queue. The connection inter-arrival time and the duration of a connection are assumed to be exponentially distributed with averages 1/λ and 1/µ, respectively.
    An MMPP is a stochastic process in which the intensity of a Poisson process is defined by the states of a Markov chain; that is, the Poisson process is modulated by a Markov chain. As mentioned before, an MMPP process can be used to model time-varying arrival rates and can capture the inter-frame dependency between consecutive frames ([6], [7], [8]). The transition rate matrix and the Poisson arrival rate matrix of the two-state MMPP process can be expressed as follows:

        Q_{MMPP} = \begin{pmatrix} -q_{01} & q_{01} \\ q_{10} & -q_{10} \end{pmatrix}, \quad \Lambda = \begin{pmatrix} \lambda_0 & 0 \\ 0 & \lambda_1 \end{pmatrix}        (1)

The steady-state probabilities of the underlying Markov chain are given by:

        (\pi_{MMPP,0}, \pi_{MMPP,1}) = \left( \frac{q_{10}}{q_{01}+q_{10}}, \frac{q_{01}}{q_{01}+q_{10}} \right)        (2)

The mean steady-state arrival rate generated by the MMPP is:

        \lambda_{MMPP} = \pi_{MMPP}\,\lambda^T = \frac{q_{10}\lambda_0 + q_{01}\lambda_1}{q_{01}+q_{10}}        (3)

where λ^T is the transpose of the row vector λ = (λ_0, λ_1).
    The state of the system is described by the process X_t = (X_t^0, X_t^1, X_t^2), where X_t^0 is the state of an irreducible continuous-time Markov chain (the MMPP phase) and X_t^1 (respectively X_t^2) is the number of packets in the aggregated queue (respectively the number of ongoing connections) at the end of time slot t. Thus, the state space of the system is given by:

        E = { (i, j, k) : i ∈ {0, 1}, 0 ≤ j ≤ L, k ≥ 0 }.

    For the CAC algorithm, the number of packet arrivals depends on the number of connections. The state transition diagram is shown in Figure 2. Here, (λ_0, λ_1) and µ denote rates, not probabilities.
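To make these ingredients concrete, the sketch below evaluates the threshold test of Section II-B and the MMPP quantities of (1)-(3). It is an illustration only, not the authors' code; the numeric values for q01, q10, λ0, λ1 and C are assumptions.

```python
# Illustrative sketch, not the authors' code: the admission test of Section II-B
# and the MMPP quantities of (1)-(3). All numeric values are assumptions.
import numpy as np

def admit(n_ongoing, C):
    """Accept a new connection only if the total stays within the threshold C."""
    return n_ongoing + 1 <= C

q01, q10 = 0.2, 0.3            # MMPP state-transition rates (assumed)
lam = np.array([1.0, 2.0])     # Poisson rates lambda_0, lambda_1 (assumed, packets/frame)

Q_mmpp = np.array([[-q01,  q01],
                   [ q10, -q10]])          # transition rate matrix of (1)
Lam = np.diag(lam)                         # arrival rate matrix of (1)

pi_mmpp = np.array([q10, q01]) / (q01 + q10)   # steady-state probabilities, eq. (2)
lam_mmpp = pi_mmpp @ lam                       # mean arrival rate, eq. (3)

print(admit(4, C=5), pi_mmpp, lam_mmpp)        # True, [0.6 0.4], 1.4
```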




    Note that the probability that n Poisson events with average rate λ occur during an interval T can be obtained as follows:

        f_n(\lambda) = \frac{e^{-\lambda T} (\lambda T)^n}{n!}        (4)

    This function is required to determine the probability of both connection and packet arrivals.

    Figure 2. State transition diagram of the discrete-time Markov chain.

  B.  CAC Algorithm
    In this case, the transition matrix Q for the number of connections in the system can be expressed as follows:

        Q = \begin{pmatrix}
            q_{0,0} & q_{0,1} & & & \\
            q_{1,0} & q_{1,1} & q_{1,2} & & \\
            & \ddots & \ddots & \ddots & \\
            & & q_{C-1,C-2} & q_{C-1,C-1} & q_{C-1,C} \\
            & & & q_{C,C-1} & q_{C,C}
            \end{pmatrix}        (5)

where each row indicates the number of ongoing connections. As the length of a frame T is very small compared with the connection arrival and departure rates, we assume that the maximum number of arriving and departing connections in a frame is one. Therefore, the elements of this matrix can be obtained as follows:

        q_{k,k+1} = f_1(\lambda)\,(1 - f_1(k\mu)),   k = 0, 1, ..., C-1
        q_{k,k-1} = (1 - f_1(\lambda))\,f_1(k\mu),   k = 1, 2, ..., C        (6)
        q_{k,k}   = f_1(\lambda)\,f_1(k\mu) + (1 - f_1(\lambda))\,(1 - f_1(k\mu)),   k = 0, 1, ..., C

where q_{k,k+1}, q_{k,k-1} and q_{k,k} represent the cases in which the number of ongoing connections increases by one, decreases by one, and does not change, respectively.

  C.  Transition Matrix for the Queue
    The transition matrix P of the entire system can be expressed as follows, where the rows of P represent the number of packets (j) in the queue:

        P = \begin{pmatrix}
            p_{0,0} & \cdots & p_{0,A} & & \\
            \vdots & & & & \\
            p_{R,0} & \cdots & p_{R,R} & \cdots & p_{R,R+A} \\
            & \ddots & & \ddots & \\
            & p_{j,j-R} & \cdots & p_{j,j} & \cdots & p_{j,j+A} \\
            & & \ddots & & \ddots
            \end{pmatrix}        (7)

    The matrices p_{j,j'} represent the changes in the number of packets in the queue (i.e., the number of packets in the queue changing from j in the current frame to j' in the next frame). We first establish the matrices v_{(i,j),(i,j')}, whose diagonal elements are given as follows for r ∈ {0, 1, 2, ..., D}, n ∈ {0, 1, 2, ..., kA}, l = 1, 2, ..., D, and m = 1, 2, ..., kA; the non-diagonal elements of v_{(i,j),(i,j')} are all zero.

        [v_{(i,j),(i,j+l)}]_{k+1,k+1} = \sum_{n-r=l} f_n(k\lambda_i)\,[R]_r
        [v_{(i,j),(i,j-m)}]_{k+1,k+1} = \sum_{r-n=m} f_n(k\lambda_i)\,[R]_r        (8)
        [v_{(i,j),(i,j)}]_{k+1,k+1}   = \sum_{r=n} f_n(k\lambda_i)\,[R]_r

    Here A is the maximum number of packets that can arrive from one connection in one frame, R indicates the maximum number of packets that can be transmitted in one frame, and D is the maximum number of packets that can be transmitted in one frame by all of the subchannels allocated to that particular queue; it is obtained from D = min(R, j). This is due to the fact that the maximum number of transmitted packets depends on the number of packets in the queue and on the maximum possible number of transmissions in one frame. Note that [v_{(i,j),(i,j+l)}]_{k+1,k+1}, [v_{(i,j),(i,j-m)}]_{k+1,k+1} and [v_{(i,j),(i,j)}]_{k+1,k+1} represent the probabilities that the number of packets in the queue increases by l, decreases by m, and does not change, respectively, when there are k ongoing connections. Here, [v]_{i,j} denotes the element at row i and column j of matrix v, and these elements are obtained under the assumption that the packet arrivals for the ongoing connections are independent of each other.
    Finally, we obtain the matrices p_{j,j'} by combining both the connection-level and the queue-level transitions as follows:

        p_{j,j'} = Q\,v_{(i,j),(i,j')}        (9)
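The following sketch shows one way the building blocks of (4)-(8) could be coded. The connection-level matrix follows (5)-(6) directly, with the diagonal absorbing the remaining probability mass so that the blocked-arrival case at k = C keeps each row stochastic. The queue-level part is a deliberate simplification introduced here only for illustration: it assumes exactly min(R, j) packets are served per frame instead of using the per-rate transmission probabilities [R]_r of (8). Function names and numeric values are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the transition-matrix
# construction behind (4)-(8).  The queue-level part uses a simplified
# deterministic service of min(R, j) packets per frame.
import numpy as np
from math import exp, factorial

def f(n, rate, T=1.0):
    """Probability of n Poisson events with the given rate in a frame of length T, eq. (4)."""
    return exp(-rate * T) * (rate * T) ** n / factorial(n)

def connection_level_Q(lam, mu, C, T=1.0):
    """Tridiagonal matrix for the number of ongoing connections, eqs. (5)-(6)."""
    Q = np.zeros((C + 1, C + 1))
    for k in range(C + 1):
        if k < C:
            Q[k, k + 1] = f(1, lam, T) * (1 - f(1, k * mu, T))   # one admission
        if k > 0:
            Q[k, k - 1] = (1 - f(1, lam, T)) * f(1, k * mu, T)   # one departure
        Q[k, k] = 1.0 - Q[k].sum()        # no change (and blocked arrivals at k = C)
    return Q

def queue_transition(L, R, k, lam_i, A, T=1.0):
    """(L+1)x(L+1) queue-length transition matrix for MMPP phase i and k connections."""
    n_max = k * A                                       # at most A arrivals per connection
    arr = np.array([f(n, k * lam_i, T) for n in range(n_max + 1)])
    arr[-1] += 1.0 - arr.sum()                          # lump the Poisson tail
    P = np.zeros((L + 1, L + 1))
    for j in range(L + 1):
        served = min(R, j)                              # simplified service, D = min(R, j)
        for n, pn in enumerate(arr):
            P[j, min(j - served + n, L)] += pn          # arrivals beyond L are dropped
    return P

Qc = connection_level_Q(lam=0.05, mu=0.01, C=5)
Pq = queue_transition(L=150, R=5, k=3, lam_i=1.0, A=30)
assert np.allclose(Qc.sum(axis=1), 1.0) and np.allclose(Pq.sum(axis=1), 1.0)
```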




                IV.  PERFORMANCE PARAMETERS

    In this section, we determine the connection-level and the packet-level performance parameters (i.e., connection blocking probability, average number of ongoing connections in the system, and average queue length) for the CAC scheme.
    These performance parameters can be derived from the steady-state probability vector of the system states π, which is obtained by solving πP = π and π1 = 1, where 1 is a column vector of ones. Also, the size of the matrix P needs to be truncated at L (i.e., the maximum number of packets in the queue) for the scheme.
    The steady-state probability, denoted by π(i, j, k), for the state in which there are k connections and j ∈ {0, 1, ..., L} packets in the queue can be extracted from the vector π as follows:

        \pi(i, j, k) = [\pi]_{\,i + j(C+1) + k},   i = 0, 1;  k = 0, 1, ..., C        (10)

  A.  Connection Blocking Probability
    This performance parameter indicates the probability that an arriving connection will be blocked due to the admission control decision. It indicates the accessibility of the wireless service and can be obtained as follows:

        p_{block} = \sum_{i=0}^{1} \sum_{j=0}^{L} \pi(i, j, C)        (11)

    The above probability is the probability that the system is already serving the maximum allowable number of ongoing connections.

  B.  Average Number of Ongoing Connections
    It can be obtained as

        N_k = \sum_{i=0}^{1} \sum_{j=0}^{L} \sum_{k=0}^{C} k\,\pi(i, j, k)        (12)

  C.  Average Queue Length
    It is given by

        N_j = \sum_{i=0}^{1} \sum_{k=0}^{C} \sum_{j=0}^{L} j\,\pi(i, j, k)        (13)

  D.  Packet Dropping Probability
    This refers to the probability that an incoming packet will be dropped due to the unavailability of buffer space. It can be derived from the average number of dropped packets per frame. Given that there are j packets in the queue and the number of packets in the queue increases by m, the number of dropped packets is m - (L - j) if m > L - j, and zero otherwise. The average number of dropped packets per frame is obtained as follows:

        N_{drop} = \sum_{i=0}^{1} \sum_{k=1}^{C} \sum_{j=0}^{L} \sum_{m=L-j+1}^{A} \left( \sum_{l=1}^{C} [p_{j,j+m}]_{k,l} \right) (m - (L - j))\,\pi(i, j, k)        (14)

where the term \sum_{l=1}^{C} [p_{j,j+m}]_{k,l} indicates the total probability that the number of packets in the queue increases by m at every arrival phase. Note that we consider the probability p_{j,j+m} rather than the probability of packet arrival, as we also have to account for the packet transmissions in the same frame.
    After calculating the average number of dropped packets per frame, we can obtain the probability that an incoming packet is dropped as follows:

        p_{drop} = \frac{N_{drop}}{\bar{\lambda}}        (15)

where \bar{\lambda} is the average number of packet arrivals per frame, obtained from

        \bar{\lambda} = \lambda_{MMPP}\,N_k        (16)

  E.  Queue Throughput
    It measures the number of packets transmitted in one frame and can be obtained from

        \eta = \bar{\lambda}\,(1 - p_{drop})        (17)

  F.  Average Packet Delay
    It is defined as the number of frames that a packet waits in the queue, from its arrival until it is transmitted. We use Little's law [9] to obtain the average delay as follows:

        \bar{D} = \frac{N_j}{\eta}        (18)

                    V.  NUMERICAL RESULTS

    In this section we present the numerical results for the CAC scheme. We use Matlab to solve the model numerically and to evaluate the various performance parameters.

  A.  Parameter Setting
    As in [10], we consider one queue (which corresponds to a particular subscriber station) for which five subchannels are allocated, and we assume that the average SNR is the same on all of these subchannels. Each subchannel has a bandwidth of 160 kHz. The length of a subframe for downlink transmission is one millisecond, and therefore the transmission rate in one subchannel with rate ID = 0 (i.e., BPSK modulation with coding rate 1/2) is 80 kbps. We assume that the maximum number of packets arriving in one frame for a connection is limited to 30 (i.e., A = 30).
    For our scheme, the value of the threshold C is varied according to the evaluation scenarios.
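As a sketch of how this evaluation could be carried out (the paper uses Matlab; the snippet below is an illustrative Python equivalent with assumed helper names), the stationary vector is obtained from πP = π, π1 = 1, and the metrics of (11)-(13) and (18) follow by summing over the enumerated states.

```python
# Illustrative sketch (assumed names, not the authors' Matlab code):
# stationary distribution of a finite DTMC and the metrics of (11)-(13), (18).
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi together with the normalization pi 1 = 1 (least squares)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def metrics(pi, states, C, throughput):
    """`states` lists the (i, j, k) tuples in the same order as the rows of P."""
    p_block = sum(p for p, (i, j, k) in zip(pi, states) if k == C)   # eq. (11)
    N_k = sum(k * p for p, (i, j, k) in zip(pi, states))             # eq. (12)
    N_j = sum(j * p for p, (i, j, k) in zip(pi, states))             # eq. (13)
    delay = N_j / throughput                                         # Little's law, eq. (18)
    return p_block, N_k, N_j, delay
```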



    For performance comparison, we also evaluate the queueing performance in the absence of a CAC mechanism. For the case without CAC, we truncate the maximum number of ongoing connections at 25 (i.e., C_tr = 25) so that π(i, j, C_tr) < 2·10^-4 for all i, j. The average duration of a connection is set to ten minutes (i.e., µ = 10) for all the evaluation scenarios. The queue size is 150 packets (i.e., L = 150). The parameters are set as follows: the connection arrival rate is 0.4 connections per minute; the packet arrival rate per connection is one packet per frame in state 0 of the MMPP process and two packets per frame in state 1; the average SNR on each subchannel is 5 dB. Note that we vary some of these parameters depending on the evaluation scenario, whereas the others remain fixed.

  B.  Performance of the CAC policy
    We first examine the impact of the connection arrival rate on the connection-level performances. Variations in the average number of ongoing connections and in the connection blocking probability with the connection arrival rate are shown in Figures 3 and 4, respectively. As expected, when the connection arrival rate increases, the number of ongoing connections and the connection blocking probability increase.

    Figure 3: Average number of ongoing connections under different connection arrival rates.

    Figure 4: Connection blocking probability under different connection arrival rates.

    The packet-level performances under different connection arrival rates are shown in Figures 5 through 8 for the average number of packets in the queue, the packet dropping probability, the queue throughput, and the average queueing delay, respectively. These performance parameters are significantly impacted by the connection arrival rate. Because the CAC scheme limits the number of ongoing connections, the packet-level performances can be maintained at the target level. In this case, the CAC scheme results in better packet-level performances compared with those of the scheme without CAC.

    Figure 5: Average number of packets in the queue under different connection arrival rates.

    Figure 6: Packet dropping probability under different connection arrival rates.

    Figure 7: Queue throughput under different connection arrival rates.








    Figure 8: Average packet delay under different connection arrival rates.

    Variations in the packet dropping probability and the average packet delay with channel quality are shown in Figures 9 and 10, respectively. As expected, the packet-level performances become better when the channel quality becomes better. Also, we observe that the connection-level performances for the CAC scheme and for the scheme without CAC are not impacted by the channel quality (the connection blocking probability remains constant when the channel quality varies), as shown in Figure 11.

    Figure 9: Packet dropping probability under different channel qualities.

    Figure 10: Average packet delay under different channel qualities.

    Figure 11: Connection blocking probability under different channel qualities.

                    VI.  CONCLUSION

    In this paper, we have addressed the problem of queueing-theoretic performance modeling and analysis of OFDMA transmission under admission control. We have considered a WiMAX system model in which a base station serves multiple subscriber stations and each subscriber station is allocated a certain number of subchannels by the base station. There are multiple ongoing connections at each subscriber station.
    We have presented a connection admission control scheme for a multi-channel, multi-user OFDMA network, in which the concept of guard channel is used to limit the number of admitted connections to a certain threshold.
    The connection-level and packet-level performances of the CAC scheme have been studied based on the queueing model. The connection arrivals are modeled by a Poisson process and the packet arrivals for a connection by a two-state MMPP process. We have determined analytically and numerically different performance parameters, such as the connection blocking probability, the average number of ongoing connections, the average queue length, the packet dropping probability, the queue throughput, and the average packet delay.
    Numerical results show that the connection-level and packet-level performance parameters are significantly impacted by the connection arrival rate, and that the CAC scheme results in better packet-level performances compared with those of the scheme without CAC. The packet-level performances become better when the channel quality becomes better. On the other hand, the connection-level performances for the CAC scheme and for the scheme without CAC are not impacted by the channel quality.
    All the results shown in this paper remain consistent with those presented in [10], even though we replace the Poisson packet arrival process with an MMPP process, which is more realistic.

                        REFERENCES
[1]  B. Baynat, S. Doirieux, G. Nogueira, M. Maqbool, and M. Coupechoux, "An efficient analytical model for WiMAX networks with multiple traffic profiles," in Proc. of ACM/IET/ICST IWPAWN, September 2008.




[2]  B. Baynat, G. Nogueira, M. Maqbool, and M. Coupechoux, "An efficient analytical model for the dimensioning of WiMAX networks," in Proc. of 8th IFIP-TC6 Networking Conference, May 2009.
[3]  L. B. Le, E. Hossain, and A. S. Alfa, "Queuing analysis for radio link level scheduling in a multi-rate TDMA wireless network," in Proc. IEEE GLOBECOM'04, vol. 6, pp. 4061–4065, November–December 2004.
[4]  Y. Fang and Y. Zhang, "Call admission control schemes and performance analysis in wireless mobile networks," IEEE Transactions on Vehicular Technology, vol. 51, no. 2, pp. 371–382, March 2002.
[5]  D. Hong and S. S. Rappaport, "Traffic model and performance analysis for cellular mobile radio telephone systems with prioritized and nonprioritized handoff procedures," IEEE Transactions on Vehicular Technology, pp. 77–92, August 1986.
[6]  H. Lee and D.-H. Cho, "VoIP capacity analysis in cognitive radio system," IEEE Commun. Lett., vol. 13, no. 6, pp. 393–395, Jun. 2009.
[7]  H. Lee, H.-D. Kim, and D.-H. Cho, "Smart Resource Allocation Algorithm Considering Voice Activity for VoIP Services in Mobile-WiMAX System," IEEE Transactions on Wireless Communications, vol. 8, no. 9, pp. 4688–4697, Sep. 2009.
[8]  H. Lee and D.-H. Cho, "Capacity Improvement and Analysis for VoIP Service in Cognitive Radio System," IEEE Transactions on Vehicular Technology, vol. 59, no. 4, pp. 1646–1651, May 2010.
[9]  R. Nelson, Probability, Stochastic Processes, and Queueing Theory, Springer-Verlag, third printing, 2000.
[10] D. Niyato and E. Hossain, "Connection admission control in OFDMA-based WiMAX networks: Performance modeling and analysis," invited chapter in WiMax/MobileFi: Advanced Research and Technology (Ed. Y. Xiao), Auerbach Publications, CRC Press, December 2007.
[11] D. Niyato and E. Hossain, "Connection admission control algorithms for OFDMA wireless networks," in Proc. IEEE GLOBECOM'05, St. Louis, MO, USA, 28 November–2 December 2005.
[12] D. Pareek, WiMax: Taking Wireless to the MAX, Auerbach Publishers Inc., June 2006.
[13] R. Ramjee, R. Nagarajan, and D. Towsley, "On optimal call admission control in cellular networks," in Proc. IEEE INFOCOM'96, vol. 1, San Francisco, CA, March 1996, pp. 43–50.
[14] IEEE Std 802.16e-2005 and IEEE Std 802.16-2004/Cor 1-2005, IEEE Standard for Local and metropolitan area networks-Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems, Dec. 7, 2005.
[15] J.-W. So, "Performance analysis of VoIP services in the IEEE 802.16e OFDMA system with inband signaling," IEEE Trans. Veh. Technol., vol. 57, no. 3, pp. 1876–1886, May 2008.








          Enhancement and Minutiae Extraction of
          Touchless Fingerprint Image Using Gabor
                   and Pyramidal Method


                 A. John Christopher
 Associate Professor, Department of Computer Science,
             S.T. Hindu College, Nagercoil

                  Dr. T. Jebarajan
                      Principal,
       V.V. College of Engineering, Tisayanvilai
Abstract - Touch-based sensing techniques generate many errors in fingerprint minutiae extraction. The solution to this problem is touchless fingerprint technology, in which there is no contact between the sensor and the finger. Although touchless sensors reduce the problems of touch-based fingerprints, other difficulties arise, such as the view-difference problem and a limited usable area due to perspective distortion. To address this, a method for touchless fingerprint image enhancement and minutiae extraction is proposed. Image enhancement is an essential preprocessing step for fingerprint-based biometric systems. Typically, the touchless device has a single camera and two planar mirrors which reflect the side views of a finger. From this we obtain three images: the frontal, left, and right views of the finger. Experimental results show that the enhanced images increase the biometric accuracy.

Index Terms - pyramidal method, Gabor, touchless fingerprint, thinning, normalization, finger enhancement, adaptive histogram.

                     I – INTRODUCTION

          A fingerprint is composed of ridges and valleys. Ridges have various kinds of discontinuity, such as ridge bifurcations, ridge endings, short ridges, islands, and ridge crossovers. Among these discontinuities, ridge bifurcations and ridge endings are commonly used in fingerprint identification/verification systems and are called minutiae [1]. For the processing of fingerprint images, two stages are of pivotal importance for the success of biometric recognition: image enhancement and minutiae extraction. The traditional fingerprint processing technologies are applied immediately after sensing, but it is preferable to apply an optional image enhancement to the fingerprint images first. In realistic scenarios the quality of a fingerprint image may suffer from various impairments caused by scores, cuts, moist or dry skin, sensor noise, blur, wrong handling of the sensor, a weak ridge and valley pattern of the given fingerprint, etc. The task of fingerprint enhancement is to counteract these quality impairments and to reconstruct the actual fingerprint pattern as close to its original as possible [2]. Fingerprints are traditionally captured based on contact of the finger with paper or a platen. This often results in partial or degraded images due to improper finger placement, skin deformation, slippage, smearing, or sensor noise. Some distorted images acquired from touch-based sensors are shown in Fig. 1. A new generation of touchless live-scan devices that generate three different representations of a fingerprint is appearing in the market. This new sensing technology addresses many of the problems stated above [3]. To overcome these kinds of problems, including the wear and tear of the sensor surface coating, a touchless fingerprint sensing technology has been proposed that does not require any contact between a sensor and a finger. Thus, the finger and ridge information cannot be changed or distorted, as it is free of skin deformation. Also, it can capture fingerprint images consistently because it is not affected by different skin conditions or latent fingerprints.

    Fig. 1: Distorted images acquired from a touch-based sensor.

    Recently, several companies and research groups have developed touchless fingerprint sensors and recognition systems [4]–[6]. TST Group developed a touchless imaging sensor (BiRD III) which uses a complementary metal-oxide-semiconductor (CMOS) camera, and red and green light sources to acquire fingerprint images [4]. Song et al. [5] proposed a sensing system with a single charge-coupled device (CCD) camera and double ring-type blue illuminators to capture high-contrast images. Also, Mitsubishi Electric Corporation proposed another touchless approach,
transmitting the light through the finger [6], acquiring fingerprint patterns under the surface of the skin using light with a wavelength of 660 nm. However, such sensing systems [4]–[6] have an inherent problem, as they use only a single capturing device, such as a CMOS or CCD camera. When capturing an image using a single camera, the geometrical resolution of the fingerprint image decreases from the fingerprint center towards the side area [7]. Therefore, false features may be obtained in the side area, which reduces the valid and useful region for authentication. Moreover, if there is a view difference between images due to finger rolling, it reduces the common area between fingerprints and degrades system performance. To solve this problem, 3-D touchless sensing systems using more than one view have been explored [8]–[11]. TBS [8] used five cameras placed around a finger to capture nail-to-nail fingerprint images and generated a 3-D fingerprint image using the shape-from-silhouette method. They then unwrapped the 3-D finger image onto a 2-D image by using parametric and nonparametric models to make rolled-equivalent images [9]. Fatehpuria et al. [10] proposed a 3-D touchless device using multiple cameras and structured light illumination (SLI). The structured light patterns are projected onto a finger to obtain its 3-D shape information, and 2-D unfolded images are generated by applying the "Springs algorithm" and some post-processing steps. Also, the Hand Shot ID system was developed to acquire the 3-D shape of a hand with fingers by stitching images from 36 cameras [11]. Although all these methods attempted to solve the problems of touch-based sensors and to acquire expanded fingerprint images with less skin deformation, they did not raise much interest in the market because of their much higher costs compared to conventional touch-based sensors. Considering the above observations, we adopt a new touchless sensing scheme using a single camera and a set of mirrors. The mirrors work as virtual cameras, thus enabling the capture of an expanded view of a fingerprint at one time without using multiple cameras. The device consists of a single camera, two planar mirrors, light-emitting diode (LED)-based illuminators, and a lens. The two planar mirrors are used to reflect the left and right side views of a finger. In this paper, we propose a new method to enhance the touchless fingerprint and to extract the minutiae data.

                     II – SYSTEM DESIGN

          To overcome the view-difference problem and the limitation of a single view, some touchless fingerprinting systems capture several different views of a finger by using multiple cameras. However, using multiple cameras increases the cost and size of a system. Thus, we adopt a new sensing system which captures three different views (frontal, right, and left) at one time by using a single camera and two planar mirrors. Figs. 2(a) and (b) show the prototype and the schematic view of the device. As shown in Fig. 2, two mirrors are placed next to the finger and reflect the right and left side views of the finger. Then, the frontal view and the two mirror-reflected views are captured by a single camera simultaneously. A mirror-reflected image is regarded as the "flipped" image taken by a virtual camera placed at a different direction compared to the real one. Therefore, we can capture three different views of a fingerprint using only one camera and also avoid the synchronization problem existing in multiple-camera-based systems. In addition, to obtain high-quality fingerprint images, we need to consider several optical components in order to design the device.

    Fig. 2: Proposed device. (a) Prototype of the device. (b) Schematic view of the device.

The specifications of the optical components are as follows:
    1)  Camera and lens: We use a 1/3-in progressive scan type CCD with 1024 x 768 active pixels, where the pixel size is 4.65 x 4.65 µm. This camera offers a sufficient frame rate of 29 Hz, thus avoiding image blurring caused by typical finger motion. Also, we use simple equations [see (1) and (2)] to design an adequate lens for our system.

            M = q / p        (1)

            1/f = 1/p + 1/q        (2)

        where f is the lens focal length, p and q are the lens-to-object and lens-to-image distances, respectively, and M is the optical magnification. Normally, the required image resolution for touch-based sensors is 500 dpi. Therefore, to ensure a 500-dpi spatial resolution in the fingerprint area and to cover the three view fingerprints, the optical magnification M, the lens-to-image distance, and the field of view (FOV) are determined as 0.1, 170 mm, and 50 x 38 mm, respectively. By doing this, we can capture the three view images with 500-dpi resolution at one time. Also, the depth of field (DOF) of the lens ranges from -2.6 to +2.6 mm at the given working distance, and it normally covers half the depth of a finger.

    2)  Illumination: Considering the reflectance of human skin to various light sources, we used ring-shaped white LED illuminators and a band-pass filter which transmits green light to enhance the ridge-to-valley contrast. Also, the illuminators are placed perpendicular to the finger to remove the shadowing effect. Diffusers are used to illuminate the finger uniformly.

    3)  Mirror: Two planar mirrors are positioned next to the left and right sides of the finger, and the mirror size is determined to cover the maximum thumb size. To provide enough overlapping area between the frontal- and side-view images, the angles of the mirrors are determined empirically as 15°. Also, the mirrors can be used as pegs to place a user's finger firmly on the device.
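The two lens relations (1) and (2) fix the focal length once a magnification and one conjugate distance are chosen. The sketch below is purely illustrative: it uses the magnification M = 0.1 quoted above but an assumed object distance of 150 mm, not the actual design values of the device.

```python
# Illustrative use of the thin-lens relations (1)-(2); the object distance
# below is an assumption, not the value used in the actual device design.
def lens_design(M, p_mm):
    q_mm = M * p_mm                            # eq. (1): M = q / p
    f_mm = 1.0 / (1.0 / p_mm + 1.0 / q_mm)     # eq. (2): 1/f = 1/p + 1/q
    return q_mm, f_mm

q, f = lens_design(M=0.1, p_mm=150.0)
print(q, round(f, 2))    # 15.0 mm image distance, focal length ~13.64 mm
```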







                                                                            can be used to fill "holes" of a size equal to or
                                                                            smaller than the structuring element. Used with
                                                                            binary images, where each pixel is either 1 or 0,
                                                                            dilation is similar to convolution. Over each pixel of
                                                                            the image, the origin of the structuring element is
                      Foreground separation                                 overlaid. If the image pixel is nonzero, each pixel of
                                                                            the structuring element is added to the result using
                                                                            the "or" operator. Used with greyscale images,
                           Normalization                                    which are always converted to byte type, the
                                                                            DILATE function is accomplished by taking the
                                                                            maximum of a set of sums. It can be used to
                           Gabor filtering                                  conveniently implement the neighbourhood
                                                                            maximum operator with the shape of the
                                                                            neighbourhood given by the structuring element.
                                                                            Used with greyscale images, which are always
                        Pyramidal method 
                                                                            converted to byte type, the ERODE function is
                                                                            accomplished by taking the minimum of a set of
                                                                            differences. It can be used to conveniently
                              Thinning                                      implement the neighbourhood minimum operator
                                                                            with the shape of the neighbourhood given by the
                                                                            structuring element.
                        Minutiae extraction                              B) Normalisation
                                                                            The process of removing the effects of the sensor
                                                                            noise and gray-level background due to finger
                                                                            pressure differences. The objective of this stage is
      Fig. 3: Overall flowchart of the proposed method                      decrease the dynamic range with gray scale between
                                                                            ridges and valleys of the image. Normalization
    3) Mirror: Two planar mirrors are positioned next to                    factor is calculated according to the mean and the
         the left and right side of the finger and the mirror               variance of the image. Each and every pixel in the
         size is determined to cover the maximum thumb                      fingerprint image has to be processed to find the
         size. To provide enough overlapping area between                   median value. The average value of all the pixels is
         frontal- and side-view images, the angles of the                   calculated i.e, the median value. By comparing the
         mirrors are determined 15 empirically. Also, the                   median value with the current pixel the replacement
         mirrors can be used as pegs to place a user’s finger               can be performed.
         firmly on the device.                                              Normalization facilitates have the subsequent
                                                                            processing steps.
                 III – PROPOSED METHOD                                      Let G (i, j) denote the normalized gray-level value at
In this section, we explain the Enhancement method for                      pixel (i, j). The normalized image is defined as
synthesizing an expanded fingerprint image from frontal- and                follows:
side-view images. The overall scheme of the method is
presented in Fig. 3 The method is mainly composed of six
stages (foreground separation, normalisation, Gabor filtering,
pyramidal method, thinning, minutiae extraction). In                                                                               (3)
foreground separation we will do the morphological
operation, in normalisation we pre-process the image etc.
     A) Foreground separation                                                     Where, M 0 and VAR0 denote the desired
         Using morphological operation we use the erosion                   mean and variance value, respectively.
         followed by dilation, this can be done up to required
                                                                            Most fingerprint images on a live-scan input device
         time. Mathematical morphology is a method of
         processing digital images on the basis of shape. A                 are usually of poor quality. The fingerprint image is
         discussion of this topic is beyond the scope of this               smoothed with an average or median filter.
     B) Normalisation
         Normalisation removes the effects of sensor noise and of the gray-level background caused by differences in finger pressure. The objective of this stage is to reduce the dynamic range of the gray scale between the ridges and valleys of the image. The normalization factor is calculated from the mean and the variance of the image: every pixel of the fingerprint image is processed, the mean value of all pixels is computed, and each pixel is adjusted according to a comparison with this mean. Normalization facilitates the subsequent processing steps.
         Let I(i, j) denote the input gray-level value and G(i, j) the normalized gray-level value at pixel (i, j). The normalized image is defined as follows:

         G(i, j) = M0 + sqrt( VAR0 * (I(i, j) - M)^2 / VAR ),   if I(i, j) > M
         G(i, j) = M0 - sqrt( VAR0 * (I(i, j) - M)^2 / VAR ),   otherwise                    (3)

         where M and VAR are the mean and variance of the input image, and M0 and VAR0 denote the desired mean and variance, respectively. Most fingerprint images acquired on a live-scan input device are of poor quality, so the fingerprint image is additionally smoothed with an average or median filter.







     C) Gabor filtering
         A Gabor filter is a linear filter used in image processing for edge detection. The frequency and orientation representations of the Gabor filter are similar to those of the human visual system, and it has been found to be particularly appropriate for texture representation and discrimination. In the spatial domain, a 2D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. The Gabor filters are self-similar: all filters can be generated from one mother wavelet by dilation and rotation. The impulse response is defined by a harmonic function multiplied by a Gaussian function. Because of the multiplication-convolution property (convolution theorem), the Fourier transform of a Gabor filter's impulse response is the convolution of the Fourier transform of the harmonic function and the Fourier transform of the Gaussian function.

         g(x, y; λ, θ, φ, σ, γ) = exp( -(x'^2 + γ^2 y'^2) / (2 σ^2) ) * cos( 2π x'/λ + φ )                    (4)

         where x' = x cos θ + y sin θ and y' = -x sin θ + y cos θ.

         In this equation, λ represents the wavelength of the cosine factor, θ represents the orientation of the normal to the parallel stripes of the Gabor function, φ is the phase offset, σ is the standard deviation of the Gaussian envelope, and γ is the spatial aspect ratio, which specifies the ellipticity of the support of the Gabor function.
                                                                          factors (aberrant formations of epidermal ridges of
D) Pyramidal method                                                       fingerprints, postnatal marks, occupational marks,
        Pyramid decomposition requires resizing                           problems with acquisition devices, etc.), fingerprint
   (scaling, or other geometric transformation). To                       images may not always have well-defined ridge
   create our Gaussian and Laplacian like pyramids, we                    structures. A reliable minutiae extraction algorithm
   define the reduce(I,K) and expand(I,K) operations,                     is critical to the performance of an automatic
   which decrease and increase an image in size by the                    identity authentication system using fingerprints.
   factor K, respectively. During reduce, the image is
   initially low-pass filtered to prevent aliasing using a
   Gaussian kernel.2. The latter’s standard deviation
   depends on the resizing factor, which here follows
   the lower bound approximation of the corresponding
   ideal low-pass filter                   . We initially
   reduce the original fingerprint image FP by a factor
   of              in order to exclude the highest
   frequencies. In a further step, we                                           Fig. 4: Types of Ridge Patterns

                       Table - 1
               Pyramidal building process

              a)   Pyramidal decomposition
       Gaussian-like               Laplacian-like

     G1=reduce(fp,k0)           L1=g1-expand(g2,k)
                                                                                     Fig. 5: Minutiae points
     G2=reduce(g1.k)            L2=g2-expand(g3,k)








         Minutiae are extracted from the thinned image using the Crossing Number (CN) algorithm:

         CN(P) = (1/2) * Σ_{i=1..8} | P_i - P_{i+1} |,   with P_9 = P_1                    (5)

     where P_i is 0 or 1 for the pixels in the 3x3 neighbourhood of P.

                           Characteristic of CN
                     CN                         Character
                      0                       Isolated point
                      2                         End point
                      4                     Bifurcation point
                     IV – EXPERIMENTAL RESULTS
For the experiments we acquired 100 sets of fingerprint images; each set contains frontal, left-view and right-view images. One of the image sets is shown in Fig. 6 and the corresponding enhanced images in Fig. 7. The minutiae extraction results are shown in Fig. 8. The most definite indicator of touchless image quality is the number of additional true minutiae extracted.

                            Fig. 6: Input images

                          Fig. 7: Enhanced images

Human experts confirm that more true minutiae are extracted from the enhanced image. From the results it can also be concluded that touchless fingerprints perform better than conventional touch-based fingerprints. The fingerprint quality checking methods compare the foreground size of the fingers, where foreground means the good-quality regions of the fingerprint. The foreground size measures are tabulated as follows:

                              Fig. 8: Minutiae

                                 Table - 2
    Average increase rate of foreground size in terms of each measurement

     Quality measurement                  Average increase rate of foreground size
     Standard deviation [12]              28.65%
     Coherence [13]                       33.72%
     Gradient-based method [14]           30.81%

However, we can also expect the enhanced images to perform well when images from different views are matched. Table 2 shows the results obtained with our enhanced images.

               V – CONCLUSIONS AND FUTURE WORK
This paper proposes a new method for enhancing touchless fingerprint sensing images. To obtain better minutiae extraction, the three fingerprint images (frontal, left, right) are enhanced using the Gabor and pyramidal methods. In the experiments, the enhanced fingerprints show better-enhanced ridges and valleys, and minutiae extraction is also handled. The results are analysed and described in table and graph format. In this paper we limit the work to minutiae extraction; the research can be continued with mosaicing of the three enhanced images, and future work can build on the same concept. According to the results, it is concluded that the proposed system generates better enhancement of touchless fingerprints than the existing methods.

                              REFERENCES
    [1]   D. Lee, K. Choi, and J. Kim, "A robust fingerprint matching algorithm using local alignment," in Proc. 16th Int. Conf. Pattern Recognition, 2002, vol. 3, pp. 803-806.
    [2]   H. Fronthaler, K. Kollreider, and J. Bigun, "Local features for enhancement and minutiae extraction in fingerprints," IEEE Transactions on Image Processing, vol. 17, no. 3, March 2008.
    [3]   Y. Chen, G. Parziale, E. Diaz-Santana, and A. K. Jain, "3D touchless fingerprints: Compatibility with legacy rolled images," Michigan State University, Department of Computer Science and Engineering, 2006 Biometrics Symposium.
    [4]   TST Group, Aug. 03, 2009 [Online]. Available: http://www.tst-biometrics.com







[5]    Y. Song, C. Lee, and J. Kim, “A new scheme for touchless
       fingerprint recognition system,” in Proc. Int. Symp. Intelligent
       Signal Processing and Communication Systems, 2004, pp. 524–
       527.
[6]    Mitsubishi Touchless Fingerprint Sensor, Aug. 03, 2009 [Online]. Available: http://global.mitsubishielectric.com
[7]    N. K. Ratha and V. Govindaraju, Advances in Biometrics:
       Sensors, Algorithms and Systems. New York: Springer, 2008
[8]    TBS Touchless Fingerprint Imaging Aug. 03, 2009
       [Online].Available: http://www.tbsinc.com/
[9]    Y. Chen, G. Parziale, E. Diaz-Santana, and A. K. Jain, “3D
       touchless fingerprints: Compatibility with legacy rolled images,”
       in Proc. Biometric Consortium Conf., Baltimore, MD, 2006.
[10]   A. Fatehpuria, D. L. Lau, and L. G. Hassebrook, “Acquiring a 2-
       D rolled equivalent fingerprint image from a non-contact 3-D
       finger,” in SPIE Defense and Security Symp. Biometric
       Technology for Human Identification III, Orlando, FL, 2006, vol.
       6202, pp. 62020C-1–62020C-8.
[11]   Aug. 03, 2009 [Online]. Available: http://privacy.cs.cmu.edu/dataprivacy/projects/handshot/index.html
[12]   L. Hong, Y.Wan, and A. K. Jain, “Fingerprint image
       enhancement: Algorithm and performance evaluation,” IEEE
       Trans. Pattern Anal. Mach Intell., vol. 20, no. 8, pp. 777–789,
       Aug. 1998.
[13]   E. Lim, X. Jiang, and W. Yau, "Fingerprint quality and validity analysis," in IEEE Int. Conf. Image Processing (ICIP), Sep. 2002, vol. 1, pp. 469-472.
[14]   S. Lee, H. Choi, and J. Kim, "Fingerprint quality index using gradient components," IEEE Trans. Inf. Forensics Security, vol. 3, no. 4, pp. 792-800, Dec. 2008.








              Automatic Parsing for Arabic Sentences

Zainab Ali Khalaf*                                                            Dr. Tan Tien Ping

School of computer science                                                    School of computer science
Universiti Sains Malaysia (USM)                                               Universiti Sains Malaysia (USM)
Penang, Malaysia                                                              Penang, Malaysia
E-mail: zak10_com026@student.usm.my                                           E-mail: tienping@cs.usm.my
*(Asst. Prof., Computer Science Dept.,
Basra University, Iraq)

Abstract— The designed system is a parser for Arabic sentences that uses the syntactic and semantic relations between deep and surface structures. The system depends on an implementation of Fillmore's Case theory. The parsing algorithm starts by analyzing the input sentence to check its syntax, semantics and spelling using the Arabic transformation rules proposed by Al-Khouly, in order to gain semantic strength. The proposed system depends on the effective element represented by the verb of the sentence; this element is used to control the parsing operation. The proposed system accepts as input different surface structures of Arabic sentences and produces automatic parsing forms for these input sentences.

Keywords— Artificial intelligence; natural language processing; transformation rules; deep structure and surface structure; parsing Arabic sentences.

                       I. INTRODUCTION

    Arabic is a parsing language: parsing expresses the relations among the words in a sentence. The most important component is the verb, which acts as the basic unit controlling the rules for choosing the other elements. Although Arabic sentences have different structures, Arabic is recognized as a (verb, subject, object) language. The subject or the object may precede the verb in an Arabic sentence according to pragmatic necessity [1,3,4].

    Arabic syntax allows the deep structure and the surface structure of a sentence to be strongly connected. This property helps make the Arabic language amenable to automatic processing [4,5].

    The proposed system aims to use these properties to parse Arabic sentences depending on the position of the words in the sentence and on their functional meaning.

                       II. SYSTEM COMPONENTS

    The syntactical properties of any natural language are formally described using what Chomsky calls production systems. A formal system generally depends on three types of data [2,3,6]:

              A. Data of vocabulary lexicon

    The lexicon plays an important role in any NLP system. It is a huge database of variable entries describing the meaning of words in a synonymy (and antonymy) contextual fashion [3,6]. The implemented lexicon consists of entries saved as a rule (Entrance [Word, Features]).

• The Entrance is one of the following indicators: Verb, Noun, Preposition, Determinate, Assistant and Negation.

• The Word is a string index for the lexicon entry.

• The Features is a list of structured integers coded to hold the syntactic and semantic information of the word. Each coded integer, written as [Fp], consists of two parts, F and p. The [p] part is either 1 or 0, depending on whether the feature [F] exists or not. The [F] part is the feature code (a small sketch is given below).








            B. Data of syntactical rules

        These rules are formalized to describe the language in order to relate each deep structure to the many corresponding surface structures with the same meaning. These rules are inductive and sequential; some are obligatory and others are optional. From the optional rules, one can obtain the various surface structures that act as contextual variants. The transformations are mainly operations of addition, deletion, moving forward, moving backward and some other secondary operations. These operations are, in general, not performed at random, but are governed and selected according to a set of conditions and structure-description rules. These operations generate all the surface structures emerging from one deep structure.
            C. Data of syntactic structure

        These data are rules, described in BNF for the Arabic language, that act as constraints and controls to form the sentences of the Arabic language. The most important component, as Fillmore and Schank recognized, is the verb element, which acts as the basic unit that controls the rules for choosing the other elements. The dependent phrase-structure rules used are the following:

<Sentence> ::= <Modality> + <Auxiliary> + <Proposition>
<Sentence> ::= <Auxiliary> + <Proposition>
<Modality> ::= <External Condition> + <External Adverb>
<Proposition> ::= <Verb> + <Theme> + <Indirect Object> + <Place> + <Tool> + <Agent>
<Theme> ::= <Noun Phrase>
<Agent> ::= <Noun Phrase>
<Tool> ::= <Noun Phrase>
<Place> ::= <Noun Phrase>
<Indirect Object> ::= <Noun Phrase>
<Noun Phrase> ::= <Preposition> + <Determinate> + <Noun>
<Noun Phrase> ::= <Preposition> + <Noun>
<External Condition> ::= semi-statements used to combine two sentences, such as (in spite of) or (moreover), etc.
<External Adverb> ::= <Time Adverb> + <Interrogative Words> + <Negation Words>
<Auxiliary> ::= lexical words such as (...) etc.
<Verb> ::= a dictionary verb such as (write), etc.
<Noun> ::= a dictionary noun such as (boy), etc.

The presence of the verb is necessary and obligatory, whereas the presence of the other elements is optional and dependent on the verb rules [1,4].

                   III. DESIGNED SYSTEM STRUCTURE

   The designed system has several stages; Figure (1) shows a flowchart of these stages, which are described below. A minimal sketch of the early stages of this pipeline is given after the stage descriptions.

            A. Input sentence stage

The function of this stage is to input an Arabic sentence from the keyboard; the sentence is terminated by a dot, a semicolon or a space character.

            B. Segmentation stage

The function of this stage is to segment the input sentence into words based on the space character (any number of space characters).

            C. Lexicon search stage

The function of this stage is to search for all the sentence words in the lexicon. If a word is not found in the lexicon, the program gives a spelling error message and stops.

            D. Syntactical analysis stage

The function of this stage is to ensure and govern the syntactical correctness of the input sentence. If errors are found, the program gives a syntactical error message.

            E. Semantic analysis stage

The function of this stage is to ensure and govern the correctness of the input sentence in terms of its harmony, its vocabulary and the correctness of its meaning. If the sentence is not correct in its meaning, the program gives a semantic error message.
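    The early stages can be sketched as follows (Python; the function names, the error handling and the word-keyed lexicon format are illustrative assumptions, and the syntactic and semantic checks are only indicated):

    def parse_sentence(text, lexicon):
        # `lexicon` is assumed here to map each word to its lexicon entry.
        # A. Input stage: the sentence ends with a dot, a semicolon or a space.
        sentence = text.rstrip(" .;")

        # B. Segmentation stage: split on any number of space characters.
        words = sentence.split()

        # C. Lexicon search stage: every word must be found in the lexicon.
        entries = []
        for w in words:
            if w not in lexicon:
                raise ValueError(f"spelling error: '{w}' is not in the lexicon")
            entries.append(lexicon[w])

        # D./E. The syntactical and semantic analysis stages would check `entries`
        # against the phrase-structure rules before the deep structure is built
        # and the parse is produced.
        return entries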







     F. Generative deep structure stage

Transformational operations are carried out, applying addition, deletion, replacement and other operations to obtain the sentence structure that acts as the deep structure.

     G. Parsing stage

The function of this stage is to parse the sentence depending on its effective element and its position in the phrase structure. This stage contains many Arabic-language rules which control the parsing operations. Examples of Arabic sentences that the system can parse are numbered 1-17 (Arabic text omitted).

                             IV. EXAMPLES

     As examples, we show the parsing of the following Arabic sentences; Figure (2) depicts this mechanism. For each example, the system prints the corresponding parsing (the Arabic input sentences and the printed parses are omitted).

    A. Example 1
    B. Example 2
    C. Example 3
    D. Example 4
    E. Example 5








                       Conclusions

    The present research ends up with the following conclusions:

  1. The verb is the main component, which controls all the other components appearing with it. From this point, we consider every deep structure to contain the verb in its structure.

  2. The word meaning depends on the essential effective element (the deep element).

  3. The lexicon is the essential element that provides the system with vocabulary and its features. By these features, we can control the different processing levels of syntax and semantics.

  4. The absence of vowelization might bring some ambiguities to sentence understanding. However, the transformation rules are used to remedy these ambiguities in an explicit and easy way, as in sentences where, in all the surface variants, the man is the subject and the lion is the object.

                      Acknowledgment

I would like to express my sincere appreciation to the TWAS organization and to USM for their encouragement and continuous financial support through the provision of a PhD fellowship. In addition, we would like to thank the School of Computer Science for their encouragement and motivation of international students in the faculty.

                          References

[1] Abo-Arafah, A., "A grammar for the Arabic language suitable for machine parsing and automatic text generation", Ph.D. thesis, Illinois Institute of Technology, Chicago, USA, 1995.

[2] Ali, N., "Arabic Language and Computer", Al-Tareeb Publishing House, Cairo, Egypt, 1988.

[3] Al-Khouly, M., "Transformation Rules for Arabic Language", Al-Riyadh, 1981.

[4] Al-Shalabi, R., Evens, M., "A Computational Morphology System for Arabic", Dept. of Computer Science and Applied Mathematics, Illinois Institute of Technology, Chicago, USA, W.D.

[5] Gheith, M., Mashour, M., "A Computer Based System for Understanding Arabic Language", Computer Science Department, Inst. of Statistical Study & Research, Cairo University, Egypt, W.D.

[6] Khalaf, Z., "Computerized Implementation for Processing Arabic Sentences by Interpretation Synonymy Relationships", M.Sc. thesis, Basra University, Iraq, 2001.








                      Figure (1): Flowchart of the parsing operations.
 (User Interface -> Input Stage -> Segmentation Stage -> Lexicon Stage -> Initial Descriptive Structure -> Transformational Rules -> Deep Structure -> Parsing Stage -> User Interface; the Lexicon and the Lexical Rules feed the Lexicon Stage, the Semantic Stage checks the Deep Structure, and spelling, syntactical and semantic errors are reported at the corresponding stages.)








            Figure (2): The mechanism used to parse an Arabic sentence.
 (Surface structure -> Transformation Rules -> Deep structure: an agent used a tool to perform the verb to get the object -> Sentence structure: Verb, Subject, Object, Tool -> Parsing Stage, which prints the parse.)



   Amelioration of Walsh-Hadamard Texture Patterns
    based Image Retrieval using HSV Color Space
                                    Dr. H.B.Kekre1, Sudeep D. Thepade2, Varun K. Banura3
                          1Senior Professor, 2Ph.D. Research Scholar & Associate Professor, 3B.Tech (CE) Student
                                             Computer Engineering Department, MPSTME,
                                     SVKM's NMIMS (Deemed-to-be University), Mumbai, India
                             1hbkekre@yahoo.com, 2sudeepthepade@gmail.com, 3varunkbanura@gmail.com

Abstract— The theme of the work presented here is the amelioration of Walsh-Hadamard texture-pattern based image retrieval using the HSV color space. Different texture patterns, namely '4-pattern', '16-pattern' and '64-pattern', are generated using the Walsh-Hadamard transform matrix and then compared with the bitmap of an image in HSV color space to generate the feature vector as the number of matching ones and minus ones per texture pattern. The proposed content based image retrieval (CBIR) techniques are tested on a generic image database of 1000 images spread across 11 categories. For each proposed CBIR technique, 55 queries (5 randomly selected per category) are fired on the image database. To compare the performance of the image retrieval techniques, the average precision and recall of all the queries per technique are computed. The results show improved performance (higher precision and recall values at the crossover points) for the proposed methods compared to texture based image retrieval in RGB color space. The performance of the proposed image retrieval methods is further enhanced using the even image part. The proposed CBIR methods do not give better performance with image bitmaps generated using tiling in HSV color space. Among the discussed image retrieval methods, the combination of the original and even image parts for the '16-pattern' texture with image bitmaps in HSV color space gives the highest crossover point of precision and recall, indicating the best performance.

    Keywords- CBIR, Walsh-Hadamard transform, Texture, Pattern, Bitmap, HSV color space
                       I.    INTRODUCTION

    Today, information technology experts are facing technical challenges to store/transmit and index/manage image data effectively, to make access easy to the image collections of tremendous size being generated from a variety of sources (digital camera, digital video, scanner, the internet, etc.). Storage and transmission are taken care of by image compression [4,7,8]. Image indexing is studied in the perspective of image databases [5,9,10,13,14] as one of the promising and important research areas for researchers from disciplines like computer vision, image processing and databases. The hunger for superior and quicker image retrieval techniques is increasing day by day. Significant applications of CBIR technology include art galleries [15,17], museums, archaeology [6], architecture design [11,16], geographic information systems [8], weather forecasting [8,25], medical imaging [8,21], trademark databases [24,26], criminal investigations [27,28] and image search on the Internet [12,22,23]. This paper attempts to provide better and faster image retrieval techniques.

A. Content Based Image Retrieval
    Kato et al. [7] first described experiments on the automatic retrieval of images from a database by colour and shape features, using the terminology content based image retrieval (CBIR). The typical CBIR system performs two major tasks [19,20]: feature extraction (FE), where a set of features called the feature vector is generated to accurately represent the content of each image in the database, and similarity measurement (SM), where a distance between the query image and each database image, computed from their feature vectors, is used to retrieve the top "closest" images [19,20,29].
    For feature extraction in CBIR there are mainly two approaches [8]: feature extraction in the spatial domain and feature extraction in the transform domain. Feature extraction in the spatial domain includes the CBIR techniques based on histograms [8], BTC [4,5,19] and VQ [24,28,29]. The transform domain methods are widely used in image compression, as they give high energy compaction in the transformed image [20,27]. So it is natural to use images in the transform domain for feature extraction in CBIR [26], but taking the transform of an image is time consuming. Reducing the size of the feature vector using pure image pixel data in the spatial domain while improving retrieval performance is shown in [1,2,3], but the problem that the feature vector size depends on the image size persists in [1,2,3]. Here the query execution time is further reduced by decreasing the feature vector size further and making it independent of the image size. Many current CBIR systems use the Euclidean distance [4-6,11-17] on the extracted feature set as a similarity measure. The direct Euclidean distance between image P and query image Q is given by equation 1, where Vpi and Vqi are the feature vectors of image P and query image Q respectively, with size n.

        ED = sqrt( Σ_{i=1..n} (Vpi - Vqi)^2 )                    (1)

          II.     TEXTURE PATTERNS USING WALSH-HADAMARD
                         TRANSFORM MATRIX

    The Walsh transform matrix [21,22,26] is defined as a set of N rows, denoted Wj, for j = 0, 1, ...., N - 1, which have the following properties:
•       Wj takes on the values +1 and -1.
•       Wj[0] = 1 for all j.



•       Wj x WkT = 0 for j not equal to k, and Wj x WjT = N for j = k.
•       Wj has exactly j zero crossings, for j = 0, 1, ...., N-1.
•       Each row Wj is even or odd with respect to its midpoint.

      The Walsh transform matrix is defined using a Hadamard matrix of order N. A Walsh transform matrix row is the row of the Hadamard matrix specified by the Walsh code index, which must be an integer in the range [0, ..., N - 1]. For the Walsh code index equal to an integer j, the respective Hadamard output code has exactly j zero crossings, for j = 0, 1, ..., N - 1.
     Using the Walsh-Hadamard transform, assorted texture patterns, namely 4-pattern, 16-pattern and 64-pattern, are generated. To generate N2 texture patterns, each column of the Walsh-Hadamard matrix of size NxN is multiplied with every element of all possible columns of the same matrix (one column at a time, to get one pattern). The texture patterns obtained are orthogonal in nature.
     Figure 1(a) shows a 2x2 Walsh-Hadamard matrix. The four texture patterns generated using this matrix are shown in figure 1(b). Similarly, figure 2(b) shows the first four texture patterns (out of a total of 16) generated using the 4x4 Walsh-Hadamard matrix shown in figure 2(a).
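     A sketch of this construction in Python (reading "each column multiplied with every element of all possible columns" as an outer product of column pairs is our interpretation):

     import numpy as np
     from scipy.linalg import hadamard

     def texture_patterns(n):
         """Generate the n*n Walsh-Hadamard texture patterns as outer products of
         pairs of columns of the n x n Hadamard matrix (entries are +1 / -1)."""
         h = hadamard(n)
         return [np.outer(h[:, j], h[:, k]) for j in range(n) for k in range(n)]

     patterns_4  = texture_patterns(2)   # '4-pattern'
     patterns_16 = texture_patterns(4)   # '16-pattern'
     patterns_64 = texture_patterns(8)   # '64-pattern'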




             1(a). 2x2 Walsh-Hadamard transform matrix
         1(b). Four Walsh-Hadamard texture patterns (4-pattern)
  Figure 1. Generation of four Walsh-Hadamard texture patterns (4-pattern)

             2(a). 4x4 Walsh-Hadamard transform matrix
  2(b). First four of the sixteen Walsh-Hadamard texture patterns (16-pattern)
  Figure 2. Generation of sixteen Walsh-Hadamard texture patterns (16-pattern)

                  III.     GENERATION OF IMAGE BITMAPS
    Image bitmaps of a colour image are generated using the three independent red (R), green (G) and blue (B) components of the image to calculate three different thresholds. Let X = {R(i,j), G(i,j), B(i,j)}, where i = 1,2,....,m and j = 1,2,....,n, be an m x n color image in RGB space. Let the thresholds be TR, TG and TB, which are computed as per equations 2, 3 and 4 given below.

              TR = (1/(m*n)) * Σ_{i=1..m} Σ_{j=1..n} R(i,j)                    (2)

              TG = (1/(m*n)) * Σ_{i=1..m} Σ_{j=1..n} G(i,j)                    (3)




              TB = (1/(m*n)) * Σ_{i=1..m} Σ_{j=1..n} B(i,j)                    (4)

Here three binary bitmaps are computed as BMr, BMg and BMb. If a pixel in each component (R, G, and B) is greater than or equal to the respective threshold, the corresponding pixel position of the bitmap will have a value of 1, otherwise it will have a value of -1.
        BMr(i, j) =  1   if R(i, j) >= TR
                    -1   if R(i, j) <  TR                    (5)

        BMg(i, j) =  1   if G(i, j) >= TG
                    -1   if G(i, j) <  TG                    (6)

        BMb(i, j) =  1   if B(i, j) >= TB
                    -1   if B(i, j) <  TB                    (7)

To generate tiled bitmaps, the image is divided into four non-overlapping equal quadrants and the average of each quadrant is considered to generate the respective tile of the image bitmap.
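A sketch of the bitmap generation of equations (2)-(7) (Python/NumPy; the same code applies to the H, S and V planes used by the proposed methods once the image has been converted with equations 8-10 below):

    import numpy as np

    def channel_bitmaps(img):
        """Per-channel +1/-1 bitmaps: each channel is thresholded at its own mean
        (equations 2-4); pixels at or above the threshold become +1, others -1."""
        thresholds = img.reshape(-1, img.shape[2]).mean(axis=0)   # TR, TG, TB (or TH, TS, TV)
        return np.where(img >= thresholds, 1, -1)                 # equations (5)-(7)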
                       IV.   COLOR SPACE [33]
   A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. A color space is a set of colors where the color model is associated with a precise description of how the components are to be interpreted.

A. RGB Color Space
   RGB uses additive color mixing, because it describes what kind of light needs to be emitted to produce a given color. RGB stores individual values for red, green and blue.

B. HSV Color Space
    HSV stands for Hue, Saturation and Value, based on the artists' terms (tint, shade and tone). The Value represents the intensity of a colour, which is decoupled from the colour information in the represented image. The Hue and Saturation components are intimately related to the way the human eye perceives colour, resulting in image processing algorithms with a physiological basis. The conversion from RGB to HSV is given by equations 8, 9 and 10.

        H = cos^-1 { [ (1/2) * ((R-G) + (R-B)) ] / sqrt( (R-G)^2 + (R-B)(G-B) ) }            (8)

        S = 1 - [ 3 / (R+G+B) ] * min(R, G, B)                                               (9)

        V = (R + G + B) / 3                                                                  (10)

                   V.     PROPOSED CBIR METHODS
    After generating the bitmap of the image in HSV color space, to generate the feature vectors the bitmap of each image is compared with the generated texture patterns to find the number of matching ones and minus ones. The size of the feature vector of the image is given by equation 11.

Feature vector size = 2 * 3 * (no. of considered texture patterns)                           (11)

    Using the three assorted texture pattern sets along with the original image and the combination of the original and even image parts in HSV color space, a total of six novel feature vector generation methods can be used, resulting in six new image retrieval techniques. Walsh-Hadamard texture pattern [30,31,32] based image retrieval techniques in RGB color space are considered to compare the performance of the proposed CBIR techniques. In the proposed CBIR techniques the combination of the original and even parts of the images gives better results than the original image alone [1,2]. The proposed CBIR techniques do not give good performance with bitmaps generated using tiling [30]. The main advantage of the proposed CBIR methods is the reduced time complexity of query execution due to the reduced feature vector size, resulting in faster image retrieval with better performance. Also, the feature vector size is independent of the image size in the proposed CBIR methods.

    Table 1. Feature vector size of discussed image retrieval techniques

        CBIR Technique        Feature vector size for Binary Image Maps
        4-Pattern             8
        16-Pattern            32
        64-Pattern            128
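    The feature-vector construction of equation (11) can be sketched as follows (Python/NumPy; the pattern list is of the kind sketched in Section II, and tiling the small pattern over the bitmap is our own assumption, since the exact alignment is not spelled out in the extracted text):

    import numpy as np

    def feature_vector(bitmap, patterns):
        """For every texture pattern and every colour plane, count the bitmap
        entries that match the pattern's +1 cells and those that match its -1
        cells, giving 2 * 3 * len(patterns) values in all (equation 11)."""
        h, w, _ = bitmap.shape
        features = []
        for p in patterns:
            ph, pw = p.shape
            tiled = np.tile(p, (h // ph, w // pw))              # repeat the pattern over the image
            crop = bitmap[:tiled.shape[0], :tiled.shape[1], :]
            for c in range(crop.shape[2]):                      # three colour planes (H, S, V)
                agree = crop[:, :, c] == tiled
                features.append(int(np.sum(agree & (tiled == 1))))    # matching ones
                features.append(int(np.sum(agree & (tiled == -1))))   # matching minus ones
        return np.array(features)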
                          VI.    IMPLEMENTATION
   The implementation of the discussed CBIR techniques is done in MATLAB 7.0 on a computer with an Intel Core 2 Duo T8100 processor (2.1 GHz) and 2 GB RAM.
   The CBIR techniques are tested on the Wang image database [18] of 1000 variable-size images spread across 11 categories of human beings, animals, natural scenery, man-made things, etc. The categories and the distribution of the images are shown in Table 2.




    To analyze the effectiveness of the proposed CBIR techniques, the crossover points of the average precision and recall values of the 55 queries (5 randomly selected from each image category) have been used as statistical comparison parameters [4,5]. Precision and recall are defined in equations 12 and 13.

Precision = (Number of relevant images retrieved) / (Total number of images retrieved)                         (12)

Recall = (Number of relevant images retrieved) / (Total number of relevant images in the database)             (13)
                                                                                  Moreover as the number of texture patterns generated is
                                                                                  increased the size of the feature vector also increases thus
Table 2. Image Database: Category-wise Distribution
  Category     No. of Images
  Monuments    99
  Beaches      99
  Buses        99
  Dinosaurs    99
  Sunrise      61
  Tribes       85
  Elephants    99
  Horses       99
  Roses        99
  Airplanes    100
  Mountains    61

VII. RESULTS AND DISCUSSIONS
For testing the performance of each proposed CBIR method, 55 queries (5 randomly selected from each category) are fired at the image database. The feature vector of the query image and each database image are matched using the Euclidean distance. The average precision and recall values are found for all the proposed CBIR methods. The intersection of the precision and recall curves gives the crossover point. The crossover point of precision and recall is computed for all the proposed CBIR methods; the one with the higher crossover point value indicates better performance.
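The matching step itself can be summarized in a few lines; the sketch below is only an illustration with randomly generated vectors (the 8-element size corresponds to the '4-pattern' case in Table 1) and ranks database images by ascending Euclidean distance to the query.

```python
import numpy as np

def rank_by_euclidean(query_vec, db_vecs):
    """Return database indices sorted by ascending Euclidean distance to the query."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)

query = np.random.rand(8)            # e.g. a '4-pattern' feature vector (Table 1)
database = np.random.rand(1000, 8)   # one vector per database image
print(rank_by_euclidean(query, database)[:10])  # indices of the ten best matches
```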




Figure 3. Performance comparison of the proposed CBIR methods in RGB and HSV color space

Figure 3 shows the performance comparison of the proposed CBIR methods in RGB and HSV color space. It is observed that the performance of the Walsh-Hadamard texture pattern based image retrieval [30] is improved in HSV color space as compared to RGB color space. Also, the performance of the texture pattern based image retrieval increases with an increase in the number of generated texture patterns up to a certain level (16-pattern), and beyond this level the results start deteriorating. The '16-pattern' texture based image retrieval with the combination of original and even image in HSV color space has the highest crossover point, indicating better performance. Moreover, as the number of generated texture patterns is increased, the size of the feature vector also increases, thus increasing the time complexity of query execution.

Figure 4. Performance comparison of the proposed CBIR methods with the combination of original and even image part

Figure 4 shows the performance comparison of the proposed CBIR methods with the combination of original and even image parts. It is observed that the proposed CBIR methods give better performance with the combination of original and even image part than with the original alone, both in RGB and HSV color space. However, an exceptional behaviour has been observed in the case of the '4-pattern' texture in HSV color space, where the original image outperforms the combination of original and even image part.

Figure 5. Performance comparison of the '16-pattern' texture based image retrieval using tiled bitmaps with the combination of original and even image part




Figure 5 shows the performance comparison of the '16-pattern' texture based image retrieval using tiled bitmaps with the combination of original and even image part. It is observed that in HSV color space the proposed CBIR methods do not give better performance with tiled bitmaps. The difference between the crossover points of '1Tile' and '4Tile' for the combination of original and even image part in HSV color space is negligible. Moreover, the crossover point of the original image in HSV color space with the '1Tile' bitmap is higher than that with the '4Tile' bitmap.

VIII. CONCLUSION
As compared to the texture pattern based image retrieval using the Walsh-Hadamard transform in RGB color space [30], the performance of image retrieval can be ameliorated using the HSV color space. Moreover, it is observed that the performance of the proposed CBIR methods improves with an increasing number of texture patterns up to a certain level. The combination of the original image with the even image part gives better performance than the original image alone. The proposed CBIR methods in HSV color space do not give better results with tiled bitmaps. Among the various texture patterns used for content based image retrieval, 16 Walsh-Hadamard texture patterns (16-pattern) in HSV color space give the best result with the combination of original image and even image part, as indicated by the highest average precision-recall crossover point value.

IX. REFERENCES
[1] Dr. H. B. Kekre, Sudeep D. Thepade, Varun K. Banura, "Augmentation of Colour Averaging Based Image Retrieval Techniques using Even part of Images and Amalgamation of feature vectors", International Journal of Engineering Science and Technology (IJEST), Volume 2, Issue 10, (ISSN: 0975-5462). Available online at http://www.ijest.info.
[2] Dr. H. B. Kekre, Sudeep D. Thepade, Varun K. Banura, "Amelioration of Colour Averaging Based Image Retrieval Techniques using Even and Odd parts of Images", International Journal of Engineering Science and Technology (IJEST), Volume 2, Issue 9, (ISSN: 0975-5462). Available online at http://www.ijest.info.
[3] Dr. H. B. Kekre, Sudeep D. Thepade, Akshay Maloo, "Query by Image Content Using Colour Averaging Techniques", International Journal of Engineering Science and Technology (IJEST), Volume 2, Issue 6, 2010, pp. 1612-1622, (ISSN: 0975-5462). Available online at http://www.ijest.info.
[4] Dr. H. B. Kekre, Sudeep D. Thepade, "Boosting Block Truncation Coding using Kekre's LUV Color Space for Image Retrieval", WASET International Journal of Electrical, Computer and System Engineering (IJECSE), Volume 2, Number 3, pp. 172-180, Summer 2008. Available online at http://www.waset.org/ijecse/v2/v2-3-23.pdf.
[5] Dr. H. B. Kekre, Sudeep D. Thepade, "Image Retrieval using Augmented Block Truncation Coding Techniques", ACM International Conference on Advances in Computing, Communication and Control (ICAC3-2009), pp. 384-390, 23-24 Jan 2009, Fr. Conceicao Rodrigues College of Engg., Mumbai. Uploaded on the online ACM portal.
[6] Dr. H. B. Kekre, Sudeep D. Thepade, "Scaling Invariant Fusion of Image Pieces in Panorama Making and Novel Image Blending Technique", International Journal on Imaging (IJI), www.ceser.res.in/iji.html, Volume 1, No. A08, pp. 31-46, Autumn 2008.
[7] Hirata K. and Kato T., "Query by visual example - content-based image retrieval", In Proc. of Third International Conference on Extending Database Technology, EDBT'92, 1992, pp. 56-71.
[8] Dr. H. B. Kekre, Sudeep D. Thepade, "Rendering Futuristic Image Retrieval System", National Conference on Enhancements in Computer, Communication and Information Technology, EC2IT-2009, 20-21 Mar 2009, K. J. Somaiya College of Engineering, Vidyavihar, Mumbai-77.
[9] Minh N. Do, Martin Vetterli, "Wavelet-Based Texture Retrieval Using Generalized Gaussian Density and Kullback-Leibler Distance", IEEE Transactions On Image Processing, Volume 11, Number 2, pp. 146-158, February 2002.
[10] B. G. Prasad, K. K. Biswas, and S. K. Gupta, "Region-based image retrieval using integrated color, shape, and location index", International Journal on Computer Vision and Image Understanding, Special Issue: Colour for Image Indexing and Retrieval, Volume 94, Issues 1-3, April-June 2004, pp. 193-233.
[11] Dr. H. B. Kekre, Sudeep D. Thepade, "Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images", WASET International Journal of Electrical, Computer and System Engineering (IJECSE), Volume 2, No. 3, Summer 2008. Available online at www.waset.org/ijecse/v2/v2-3-26.pdf.
[12] Stian Edvardsen, "Classification of Images using color, CBIR Distance Measures and Genetic Programming", Thesis, Master of Science in Informatics, Norwegian University of Science and Technology, Department of Computer and Information Science, June 2006.
[13] Dr. H. B. Kekre, Tanuja Sarode, Sudeep D. Thepade, "DCT Applied to Row Mean and Column Vectors in Fingerprint Identification", In Proceedings of International Conference on Computer Networks and Security (ICCNS), 27-28 Sept 2008, VIT, Pune.
[14] Zhibin Pan, Kotani K., Ohmi T., "Enhanced fast encoding method for vector quantization by finding an optimally-ordered Walsh transform kernel", ICIP 2005, IEEE International Conference, Volume 1, pp. I-573-6, Sept 2005.
[15] Dr. H. B. Kekre, Sudeep D. Thepade, "Improving 'Color to Gray and Back' using Kekre's LUV Color Space", IEEE International Advanced Computing Conference 2009 (IACC'09), Thapar University, Patiala, INDIA, 6-7 March 2009. Uploaded online at IEEE Xplore.
[16] Dr. H. B. Kekre, Sudeep D. Thepade, "Image Blending in Vista Creation using Kekre's LUV Color Space", SPIT-IEEE Colloquium and International Conference, Sardar Patel Institute of Technology, Andheri, Mumbai, 04-05 Feb 2008.
[17] Dr. H. B. Kekre, Sudeep D. Thepade, "Color Traits Transfer to Grayscale Images", In Proc. of IEEE First International Conference on Emerging Trends in Engg. & Technology (ICETET-08), G. H. Raisoni COE, Nagpur, INDIA. Uploaded on online IEEE Xplore.
[18] http://wang.ist.psu.edu/docs/related/Image.orig (last referred on 23 Sept 2008).
[19] Dr. H. B. Kekre, Sudeep D. Thepade, "Using YUV Color Space to Hoist the Performance of Block Truncation Coding for Image Retrieval", IEEE International Advanced Computing Conference 2009 (IACC'09), Thapar University, Patiala, INDIA, 6-7 March 2009.
[20] Dr. H. B. Kekre, Sudeep D. Thepade, Archana Athawale, Anant Shah, Prathmesh Verlekar, Suraj Shirke, "Energy Compaction and Image Splitting for Image Retrieval using Kekre Transform over Row and Column Feature Vectors", International Journal of Computer Science and Network Security (IJCSNS), Volume 10, Number 1, January 2010, (ISSN: 1738-7906). Available at www.IJCSNS.org.
[21] Dr. H. B. Kekre, Sudeep D. Thepade, Archana Athawale, Anant Shah, Prathmesh Verlekar, Suraj Shirke, "Walsh Transform over Row Mean and Column Mean using Image Fragmentation and Energy Compaction for Image Retrieval", International Journal on Computer Science and Engineering (IJCSE), Volume 2S, Issue 1, January 2010, (ISSN: 0975-3397). Available online at www.enggjournals.com/ijcse.
[22] Dr. H. B. Kekre, Sudeep D. Thepade, "Image Retrieval using Color-Texture Features Extracted from Walshlet Pyramid", ICGST International Journal on Graphics, Vision and Image Processing (GVIP), Volume 10, Issue I, Feb 2010, pp. 9-18. Available online at www.icgst.com/gvip/Volume10/Issue1/P1150938876.html.
[23] Dr. H. B. Kekre, Sudeep D. Thepade, "Color Based Image Retrieval using Amendment Block Truncation Coding with YCbCr Color Space", International Journal on Imaging (IJI), Volume 2, Number A09, Autumn 2009, pp. 2-14. Available online at www.ceser.res.in/iji.html.




[24] Dr. H. B. Kekre, Tanuja Sarode, Sudeep D. Thepade, "Color-Texture Feature based Image Retrieval using DCT applied on Kekre's Median Codebook", International Journal on Imaging (IJI), Volume 2, Number A09, Autumn 2009, pp. 55-65. Available online at www.ceser.res.in/iji.html (ISSN: 0974-0627).
[25] Dr. H. B. Kekre, Sudeep D. Thepade, Akshay Maloo, "Performance Comparison for Face Recognition using PCA, DCT & Walsh Transform of Row Mean and Column Mean", ICGST International Journal on Graphics, Vision and Image Processing (GVIP), Volume 10, Issue II, Jun 2010, pp. 9-18. Available online at http://209.61.248.177/gvip/Volume10/Issue2/P1181012028.pdf.
[26] Dr. H. B. Kekre, Sudeep D. Thepade, "Improving the Performance of Image Retrieval using Partial Coefficients of Transformed Image", International Journal of Information Retrieval, Serials Publications, Volume 2, Issue 1, 2009, pp. 72-79 (ISSN: 0974-6285).
[27] Dr. H. B. Kekre, Sudeep D. Thepade, Archana Athawale, Anant Shah, Prathmesh Verlekar, Suraj Shirke, "Performance Evaluation of Image Retrieval using Energy Compaction and Image Tiling over DCT Row Mean and DCT Column Mean", Springer International Conference on Contours of Computing Technology (Thinkquest-2010), Babasaheb Gawde Institute of Technology, Mumbai, 13-14 March 2010. To be uploaded on online Springerlink.
[28] Dr. H. B. Kekre, Tanuja K. Sarode, Sudeep D. Thepade, Vaishali Suryavanshi, "Improved Texture Feature Based Image Retrieval using Kekre's Fast Codebook Generation Algorithm", Springer International Conference on Contours of Computing Technology (Thinkquest-2010), Babasaheb Gawde Institute of Technology, Mumbai, 13-14 March 2010. To be uploaded on online Springerlink.
[29] Dr. H. B. Kekre, Tanuja K. Sarode, Sudeep D. Thepade, "Image Retrieval by Kekre's Transform Applied on Each Row of Walsh Transformed VQ Codebook", (Invited), ACM International Conference and Workshop on Emerging Trends in Technology (ICWET 2010), Thakur College of Engg. and Tech., Mumbai, 26-27 Feb 2010. Also to be uploaded on the online ACM portal.
[30] Dr. H. B. Kekre, Sudeep D. Thepade, Varun K. Banura, "Image Retrieval using Texture Patterns generated from Walsh-Hadamard Transform Matrix and Image Bitmaps", Springer International Conference on Technology Systems and Management (ICTSM 2011), MPSTME and DJSCOE, Mumbai, 25-27 Feb 2011. To be uploaded online on Springerlink.
[31] Dr. H. B. Kekre, Sudeep D. Thepade, Varun K. Banura, "Query by Image Texture Pattern content using Haar Transform Matrix and Image Bitmaps", Invited at ACM International Conference and Workshop on Emerging Trends in Technology (ICWET 2011), TCET, Mumbai, 25-26 Feb 2011. To be uploaded online on the ACM portal.
[32] Dr. H. B. Kekre, Sudeep D. Thepade, Varun K. Banura, "Image Retrieval using Shape Texture Patterns generated from Walsh-Hadamard Transform and Gradient Image Bitmaps", International Journal of Computer Science and Information Security (IJCSIS), Volume 8, Number 9, 2010, pp. 76-82 (ISSN: 1947-5500). Available online at http://sites.google.com/site/ijcsis.
[33] Dr. H. B. Kekre, Sudeep D. Thepade, Shrikant Sanas, "Improving Performance of multileveled BTC based CBIR using Sundry Color Spaces", CSC International Journal of Image Processing (IJIP), Volume 4, Issue 6, Computer Science Journals, CSC Press, www.cscjournals.org.

AUTHORS PROFILE

Dr. H. B. Kekre received his B.E. (Hons.) in Telecomm. Engineering from Jabalpur University in 1958, M.Tech (Industrial Electronics) from IIT Bombay in 1960, M.S. Engg. (Electrical Engg.) from the University of Ottawa in 1965 and Ph.D. (System Identification) from IIT Bombay in 1970. He has worked as Faculty of Electrical Engg. and then HOD of Computer Science and Engg. at IIT Bombay. For 13 years he worked as a professor and head of the Department of Computer Engg. at Thadomal Shahani Engineering College, Mumbai. He is now Senior Professor at MPSTME, SVKM's NMIMS University. He has guided 17 Ph.D.s, more than 100 M.E./M.Tech and several B.E./B.Tech projects. His areas of interest are Digital Signal Processing, Image Processing and Computer Networking. He has more than 320 papers in National/International Conferences and Journals to his credit. He was a Senior Member of IEEE; presently he is a Fellow of IETE and a Life Member of ISTE. Recently ten students working under his guidance have received best paper awards and two have been conferred Ph.D. degrees of SVKM's NMIMS University. Currently 10 research scholars are pursuing the Ph.D. program under his guidance.

Sudeep D. Thepade received his B.E. (Computer) degree from North Maharashtra University with Distinction in 2003 and his M.E. in Computer Engineering from the University of Mumbai in 2008 with Distinction, and is currently pursuing a Ph.D. from SVKM's NMIMS, Mumbai. He has about 07 years of experience in teaching and industry. He was a Lecturer in the Dept. of Information Technology at Thadomal Shahani Engineering College, Bandra (W), Mumbai for nearly 04 years. He is currently working as Associate Professor in Computer Engineering at Mukesh Patel School of Technology Management and Engineering, SVKM's NMIMS University, Vile Parle (W), Mumbai, INDIA. He is a member of the International Association of Engineers (IAENG) and the International Association of Computer Science and Information Technology (IACSIT), Singapore. He has been on the International Advisory Board of many International Conferences and is a Reviewer for many reputed International Journals. His areas of interest are Image Processing and Computer Networks. He has more than 100 research papers in National/International Conferences/Journals to his credit, with a Best Paper Award at the International Conference SSPCCIN-2008, the Second Best Paper Award at the ThinkQuest-2009 National Level paper presentation competition for faculty, a Best Paper Award at the Springer International Conference ICCCT-2010 and the second best project award at Manshodhan 2010.

Varun K. Banura is currently pursuing B.Tech. (CE) from MPSTME, NMIMS University, Mumbai. His areas of interest are Image Processing and Computer Networks. He has 07 research papers in International Conferences/Journals to his credit.







  Analysis and Comparison of Medical Image Fusion
     Techniques: Wavelet based Transform and
             Contourlet based Transform
C. G. Ravichandran, RVS College of Engg. & Tech, Dindigul, e-mail: cg_ravi@yahoo.com
R. Rubesh Selvakumar, Research Scholar, Anna University of Technology, Tiruchirappalli, e-mail: gopikarubesh2009@rediffmail.com
S. Goutham, Surya Engineering College, Erode, e-mail: gouthamsanjay00@gmail.com

Abstract

Medical image fusion provides additional information for diagnosis when multiple patient images are registered and overlaid. Multiple images from the same imaging modality, or from multiple modalities, can be used to create a fused image. The fused image is of immense help and provides more information to the doctor for diagnosing diseases. CT (Computed Tomography) offers less detailed information on soft tissues but good information on bony structures, whereas MRI (Magnetic Resonance Imaging) provides more detailed information on soft tissues but less detailed information on bone structures, and the contrast resolution of soft tissues is far better in MRI than in CT. In this paper, wavelet based transforms such as the DWT (Discrete Wavelet Transform) and the CWT (Complex Wavelet Transform) are analyzed theoretically and compared with contourlet based transforms such as the CCT (Complex Contourlet Transform) and the NSCT (Non-Subsampled Contourlet Transform). The experimental results report the evaluation measures IE (Information Entropy), AG (Average Gradient) and SD (Standard Deviation).

Key Terms: CT (Computed Tomography), MRI (Magnetic Resonance Imaging), DWT (Discrete Wavelet Transform), CWT (Complex Wavelet Transform), CCT (Complex Contourlet Transform), NSCT (Non-Subsampled Contourlet Transform).

I. INTRODUCTION
The process of combining relevant information from two or more images into a single image is known as image fusion. The fused image contains more information than any one of the input images. Aerial and satellite imaging, robot vision, remote sensing and medical imaging are some of the applications of image fusion. Nowadays, medical image fusion occupies a position of considerable importance in the field of clinical diagnosis from medical images. The objective of medical image fusion is to obtain a high resolution image replete with many details for the sake of diagnosis.

Medical image fusion comprehends the information of images from various medical modalities into one image in order to express their combined information. As a result, the doctor is provided with more effective diagnostic information by this image. X-ray, Ultrasound (US), PET (Positron Emission Tomography) scans, SPECT (Single Photon Emission Computed Tomography) scans, CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) are the various modality images that are in use for clinical analysis and treatment. A single modality image often cannot give complete and accurate information to the doctor, since the formation principles of the various systems differ from each other. CT and MRI are the modalities most commonly used for image fusion, because the CT image very clearly portrays the bone tissues of the human body, while the MRI image brings out weaker signals of bone tissues and calcific points but has much better resolution of soft tissues than CT. When CT and MRI images are fused, they can complement each other's merits [1].

Many new algorithms for medical image fusion have been developed along two approaches since the 1980s, namely the spatial domain and the transform domain. The averaging method, the Brovey method, Principal Component Analysis (PCA) and intensity-hue-saturation (IHS) are some of the spatial-domain techniques. All the algorithms in these techniques have the drawback of spatial distortion in the fused image; when we move on to further processing such as classification, the spatial distortion becomes a negative factor [2]. The multi-resolution tool initiated by Burt and Adelson in 1983, known as the Laplacian Pyramid, can overcome these drawbacks [3]. However, in all types of pyramid based methods the treatment of continuous functions is improper and the decomposition process is poor. The Wavelet Transform (WT), established as a multi-resolution analysis by Mallat and Meyer for continuous functions, possesses good frequency division characteristics in the transform region, and this has resulted in its wide use in medical image fusion [4].

Though wavelet based transforms solve the problems of low contrast and blocking effects in the spatial domain and avoid artifacts, they still fail to achieve good performance in some respects. All types of wavelet based transforms provide insufficient information on curve shapes and edge representation, and they also suffer from the problems of shift variance and lack of directional selectivity. The need for a new approach therefore arose, and in 2002 Minh N. Do and Martin Vetterli proposed a new approach, the contourlet transform, known as multi-geometric analysis [5]. It happens to be a more powerful and useful tool than the wavelet transform for analyzing signals consisting of lines, curves and edges, and its application to image fusion can provide sufficient information to the doctor for diagnosis purposes. There are several contourlet transform methods, such as the Discrete Contourlet Transform (DCT) and the Complex Contourlet Transform (CCT). These types of algorithms suffer from some problems such as lack of shift invariance and directional selectivity; to overcome these problems, Cunha and Zhou proposed the Non-Subsampled Contourlet Transform (NSCT).

II. WAVELET TRANSFORM
In the general concept of transform domain fusion, the transform is applied to the registered images in order to identify the vital details in them. A fusion rule is first applied to the transform coefficients to obtain the fusion decision map, and the inverse transform is then applied using this decision map.





Finally, the resulting image will have more details from both source images [6].

Fig. 1. Block diagram of transform domain image fusion (registered images → transform coefficients → fusion decision map → fused transform coefficients → fused image)

The main concept and theory of wavelet based multi-resolution analysis is derived from Mallat. It can detect local features in a signal and is used in the decomposition process. Besides texture analysis, data compression and feature detection, it can also be used for image fusion. Even before the wavelet based transform, the pyramid based transform was introduced by Burt and Adelson in 1983 and was later improved by Toet [7]. However, these types of techniques do not support continuous functions, so the wavelet technique is applied to image fusion.

In common with all transform domain fusion techniques, the transformed images are combined in the transform domain using a fusion rule, denoted by φ. For the registered input images I1(x,y) and I2(x,y) together with the fusion rule φ, the inverse wavelet transform W⁻¹ is applied and the fused image I(x,y) is reconstructed [8]:

I(x,y) = W⁻¹(φ(W(I1(x,y)), W(I2(x,y))))        (1)

Fig. 2. Fusion of the wavelet transform of two images (registered input images → wavelet coefficients → fused wavelet coefficients → fused image)

III. DISCRETE WAVELET TRANSFORM
The main idea of all multi-resolution schemes centers around the sensitivity of the human visual system to local contrast, i.e., edges or corners. For two registered images of the same scene, the coefficients of each transform possess significantly different magnitudes within a region. One of the fusion rules is used to generate the combined coefficient map, and then the inverse DWT is applied to the combined coefficient map to produce the fused image, which gives more information than the input images. Using separate filtering and down-sampling in the horizontal and vertical directions produces four subbands at each scale, denoting the horizontal and vertical frequencies. This produces high-high (HH), high-low (HL), low-high (LH) and low-low (LL) image subbands [9]. By recursively applying the same scheme to the low-low subband, a multi-resolution decomposition can be achieved (Fig. 3).

Fig. 3. Labelled subbands (HL1, LH1, HH1, LH2, HH2, LH3, HH3)

Fusion rules: three fusion rules were developed and implemented using DWT based image fusion (a minimal code sketch of the first rule appears after this list):
1. Maximum selection (MS) scheme: this simple scheme just picks the coefficient in each subband with the largest magnitude.
2. Weighted average (WA) scheme: this scheme, developed by Burt and Kolczynski [10], uses a normalised correlation between the two images' subbands over a small local area. The resultant coefficient for reconstruction is calculated from this measure via a weighted average of the two images' coefficients.
3. Window based verification (WBV) scheme: this scheme, developed by Li et al., creates a binary decision map to choose between each pair of coefficients using a majority filter.
                                                                        w-1           In DWT fits most commonly into the decimated form(Mallats Dyadic
                                                                                      Filter Tree0[11]. It is only used for compression but other occus in other
                                                                                      signal analysis such as lack of shift in-variance. Sice the wavelet fitness
                                                                                      are separable, the small shift in the input signal can cause major variations
  I2                                                                                  in the energy distribution between DWT coefficients at different scales
                                                                                      and poor directional selectivity for diagonal features. So the new approach
                                                                                      was introduced to shift in-variance tha is known as Complex Wavelet.
                                                                                      However, the real valued wavelet transform suffers from shift variance
Registered         wavelet                fused wavelet               Fused           and lack of directional selectivity. Nikolov et al. [12] introduced the use of
image                                                                                 the dual tree complex wavelet transform (DT-CWT) for image fusion. The
Input Images       coefficients           coefficients            Images              DT-CWT is approximately shift invariant and has double the amount of
                                                                                      directional selectivity compared to a real valued wavelet transform. Shift
                                                                                      invariance is an important feature of a fusion transform as the magnitude
   Fig.2. Fusion of the Wavelet Transform of two images                               of the coefficients of a shift variant transform may not properly reflect
                                                                                      their importance. The improved directional selectivity of the DT-CWT is
III DISCRETE WAVELET TRANSFORM                                                        also important in order to properly reflect the content of the images across
                                                                                      boundaries and other important directional features. In DT-CWT gives
The main idea of all multi-resolution schemes centers around the human
                                                                                      much better directional selectivity when multi-dimensional signal is
vital system being local contrast ie., edges or corners. Two registered               filtered.[10].
images of the same scene each of the coefficients of each transform
possess significantly different magnitudes within the region Using any
one of the fusion rule to generate the combined coefficients map then the             IV. DUAL-TREE COMPLEX WAVELET TRANSFORM
inverse DWT is applied to the combined coefficients map to produce the
fused image which is give the more information of than the input images.              The motivation of suing the DT-CWT for image fusion is, reduced the
Using separate Filter and down-sampling in the horizontal and vertical                shift in-variance and directional selectivity when compared with the
                                                                                      DWT. In DT-DWT comprising two trees of real filter a and b, which
directions produces four subbands at each scale. It denotes the horizontal
                                                                                      produce the real and imaginary parts of the complex coefficients and odd
frequency and then the vertical frequency. This produces high-high(HH),               and even length bi-orthogonal linear phase filters.







Figure 4. Dual tree of real filters for the CWT, giving the real and imaginary parts of the complex coefficients

Unfortunately, the odd/even length filter approach suffers from certain problems [13]: (a) the sub-sampling structure is not very symmetrical, (b) the two trees have slightly different responses, and (c) the filter sets must be bi-orthogonal. To overcome these problems the same author, N. G. Kingsbury, modified the existing algorithm into an improved one, the Q-shift DT-CWT, for image fusion [13,14].
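For readers who want to experiment with DT-CWT based fusion, the following sketch assumes the third-party Python dtcwt package (its Transform2d and Pyramid objects as documented for that library); the magnitude-maximum rule for the complex highpass coefficients and the simple averaging of the lowpass band are assumptions of this illustration, not the scheme evaluated by the authors.

```python
import numpy as np
import dtcwt   # third-party package implementing the dual-tree complex wavelet transform

def dtcwt_fuse_max(img_a, img_b, nlevels=4):
    """Fuse two registered grayscale images in the DT-CWT domain.

    Complex highpass coefficients are selected by larger magnitude (MS rule);
    lowpass coefficients are averaged, which is an assumption of this sketch
    rather than a rule taken from the paper.
    """
    t = dtcwt.Transform2d()
    pa = t.forward(img_a, nlevels=nlevels)
    pb = t.forward(img_b, nlevels=nlevels)
    lowpass = 0.5 * (pa.lowpass + pb.lowpass)
    highpasses = tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                       for ha, hb in zip(pa.highpasses, pb.highpasses))
    return t.inverse(dtcwt.Pyramid(lowpass, highpasses))

a, b = np.random.rand(128, 128), np.random.rand(128, 128)
print(dtcwt_fuse_max(a, b).shape)
```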
Fig. 5. Input and output images: (a) MRI image, (b) CT image, (c) fused image using DWT, (d) fused image using DT-CWT

QUANTITATIVE COMPARISONS
Under these circumstances, comparisons of quantitative quality tend to be misleading or meaningless, but a few authors have tried to formulate such measures for applications with clear meaning. A reference image is produced using a simple cut-and-paste technique, physically taking the "in focus" areas from each image and combining them. The quantitative measure used to compare the cut-and-paste image to each fused image is a function of Igt, the cut-and-paste "ground truth" image, Ifd, the fused image, and N, the size of the image. Lower values of the measure indicate greater similarity between the images Igt and Ifd, and therefore more successful fusion in terms of quantitatively measurable similarity. Table 1 shows the results for the various methods when several fusion rules are used.

Table 1. Quantitative results for various fusion methods
  S.No.   Method                       Measure value
  1       DWT - MS fusion rule         8.2964
  2       DWT - WA fusion rule         7.6551
  3       DWT - WBV fusion rule        7.5271
  4       DT-CWT - MS fusion rule      7.2284
  5       DT-CWT - WA fusion rule      7.2043
  6       DT-CWT - WBV fusion rule     6.9540
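The exact expression of the measure reported in Table 1 is not reproduced in this excerpt; purely as a stand-in under that caveat, the sketch below computes a per-pixel root-mean-square difference between the cut-and-paste ground truth Igt and a fused image Ifd, which likewise decreases as the two images become more similar.

```python
import numpy as np

def rms_difference(ground_truth, fused):
    """Root-mean-square per-pixel difference between the cut-and-paste ground
    truth and a fused image; lower values mean the fused image is closer to it.
    (A stand-in for the paper's measure, which is not reproduced here.)"""
    gt = ground_truth.astype(np.float64)
    fd = fused.astype(np.float64)
    return float(np.sqrt(np.mean((gt - fd) ** 2)))

gt = np.random.rand(256, 256)
print(rms_difference(gt, gt + 0.01 * np.random.randn(256, 256)))
```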
Some problems arise in all wavelet based transforms: the accuracy of edge and curve localization in the wavelet transform is low, and this has paved the way for an alternative approach with a higher accuracy of curve localization, the contourlet transform. The first MGA (multi-geometric analysis) tool was proposed by Minh N. Do and Martin Vetterli in 2002 [15]. The main idea of this method is to construct a multiresolution and multidirection (MRMD) decomposed representation of the image; it is also a more powerful tool than the wavelet transform for the analysis of signals consisting of lines, curves and edges. Applied to image fusion, it can provide more information to the doctor for diagnosis purposes [16]. There are several methods in MGA, such as the discrete contourlet transform and the complex contourlet transform. These transforms lack shift invariance; to overcome this problem, Cunha and Zhou proposed the non-subsampled contourlet transform (NSCT), which is a fully shift invariant, multiscale and multidirectional transformation [17]. In this paper, we present a comparative and evaluative study of these three contourlet transforms.

V. DISCRETE CONTOURLET TRANSFORM
The contourlet transform is a new geometrical image transform, which represents image content containing contours and textures. The properties of the contourlet are directionality and anisotropy; the first contourlet transform was introduced by Do and Vetterli. The contourlet transform is a more powerful MSMD (multiscale, multidirection) framework that consists of a Laplacian Pyramid (LP) and a Directional Filter Bank (DFB). The LP decomposes an image into subbands and the DFB provides directional analysis of each bandpass image.

Fig. 6. A flow graph of the contourlet transform

For the contourlet transform, first a standard multiscale decomposition into different bands is computed, where the lowpass channel is subsampled while the highpass channel is not. Then a directional decomposition with a DFB is applied to each highpass channel. The DFB is a critically sampled filter bank that can decompose images into directional subbands, so one can decompose each scale into any arbitrary power-of-two number of directions. Before contourlet decomposition, registration of the input images must be done.

There are three basic steps for the proposed contourlet based image fusion:
1. The input images A and B are decomposed into lowpass coefficients and bandpass coefficients.
2. The transform coefficients are combined in one of two ways: with fusion rules based on pixels, or with fusion rules based on regions.
3. Finally, the inverse contourlet transform is used to construct the fused image.








Fig. 7. Framework of the contourlet transform

Fig. 8. The fusion framework based on the contourlet transform

The contourlet transform satisfies the anisotropy principle and can capture the intrinsic geometric structure information of images. It achieves better expression than the discrete wavelet transform, especially for edges and contours. It can be utilized for extracting the geometric information of images very well, and it is useful for many image processing tasks. Small features challenge the contourlet, which represents long edges well. The contourlet transform lacks shift invariance due to the down-sampling and up-sampling, hence the need for another approach to solve this problem has arisen [18][19].
                                                                                  SIDWT based fusion, DT-CWT based fusion and finally CCT Transform
VI. COMPLEX CONTOURLET TRANSFORM                                                  based fusion scheme. [22].

   Complex Contourlet Transform incorporates the DT-CWT because                   Method              MG                    CC                AV
after investigation, the DT-CWT tasks advantages of approximate shift in-               PCA              724.6968               0.8446            78.6876
variance and directional selectivity for image fusion. But , the DT-CWT               SIDWT                1041                 0.8803            79.0012
can only handle limited direction informations. So, the researcher Dipeng            DT-CWT               1143.9                0.8952            80.1860
Chen and Qi Li proposed one method called as Complex Contourlet                         CCT               1147.9                0.9166            80.2766
Transform(CCT). It provides simultaneous better directional sensitivity                     Taable.2. Quantitative results for varius fusion methods
and shift invariance. CCT consists of two steps, first one is, A CT-CWT             The CCT based fusion methods has higher spectral quality compared
decomposes, this is the contrast to the critically used in sampled DWT            with the other methods, in terms of the higher values of correlation
[20] and Lappacian Pyramid [21]. After applying DT-CWT decomposes                 coefficients and mean gradient.
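The correlation coefficient in Table 2 can be computed with a one-liner; the sketch below shows the standard definition between a fused image and a reference image, where the choice of reference image is an assumption here since the excerpt does not state which image the CC values of Table 2 are measured against.

```python
import numpy as np

def correlation_coefficient(img_a, img_b):
    """Correlation coefficient (CC) between two images, e.g. a fused image and
    a reference image; values closer to 1 indicate higher similarity."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    return float(np.corrcoef(a, b)[0, 1])

src = np.random.rand(128, 128)
print(correlation_coefficient(src, 0.9 * src + 0.1 * np.random.rand(128, 128)))
```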
VII. NON-SUBSAMPLED CONTOURLET TRANSFORM (NSCT)
The previous contourlet transform methods satisfy the anisotropy principle by capturing the geometric structure information of images, and the image is expressed better, especially its edges and contours. However, the down-sampling and up-sampling produce a lack of shift invariance and artifacts.






So, Cunha and Zhou proposed another method called the Non-Subsampled Contourlet Transform (NSCT) [23]. The NSCT is the combination of a Non-Subsampled Pyramid, which provides the multi-scale decomposition, and Non-Subsampled Directional Filter Banks, which provide the directional decomposition [24].
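  The fusion rule used with these decompositions is not spelled out in this section. As a rough, runnable illustration of shift-invariant, coefficient-domain fusion of the kind compared in Table 2 (the SIDWT entry is the closest analogue), the following Python sketch fuses two registered grayscale images in a stationary-wavelet domain using PyWavelets, averaging the approximation coefficients and selecting detail coefficients by maximum absolute value. It is a stand-in for an NSCT-domain implementation, not the authors' method.

```python
import numpy as np
import pywt  # PyWavelets

def swt_fuse(img_a, img_b, wavelet="db2", level=1):
    """Fuse two registered grayscale images of identical, even-sized shape in a
    shift-invariant (stationary) wavelet domain: average the approximation
    subbands, keep the detail coefficient with the larger magnitude."""
    a = img_a.astype(float)
    b = img_b.astype(float)
    coeffs_a = pywt.swt2(a, wavelet, level=level)
    coeffs_b = pywt.swt2(b, wavelet, level=level)

    fused = []
    for (ca_a, (ch_a, cv_a, cd_a)), (ca_b, (ch_b, cv_b, cd_b)) in zip(coeffs_a, coeffs_b):
        ca = (ca_a + ca_b) / 2.0                      # average low-frequency content
        details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                        for da, db in ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)))
        fused.append((ca, details))
    return pywt.iswt2(fused, wavelet)
```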




                 Fig. 11. Sampling filters by a quincunx matrix Q

            Fig. 12. Iterated non-subsampled directional filter

  In this method the decomposition can be iterated repeatedly on the lowpass subband outputs of the non-subsampled pyramid. First, the non-subsampled pyramid splits the input into a lowpass subband and a highpass subband; the DFB then decomposes the highpass subband into several directional subbands; finally, the scheme is iterated on the lowpass subband.

 Fig. 13. (a) Multiscale decomposition / directional decomposition; (b) idealized frequency partitioning

  The NSCT provides not only multiresolution analysis but also geometrical and directional representation. The NSCT is shift invariant, is a more powerful tool for image fusion, and provides better results compared with the CT and CCT.

 Fig. 14. Input and output images: C. Contourlet Transform, D. Complex Contourlet Transform, E. Non-Subsampled Contourlet Transform

                     VIII. QUANTITATIVE ANALYSIS

  Three evaluation measures are used in this paper, as follows:

1) Information Entropy (IE): The IE of an image is an important index for measuring the richness of the image information. The larger the IE, the more information the image carries. The IE of the image is defined as

    IE = -\sum_{i} h(i)\,\log_{2} h(i)                                                     (1)

where h(i) is the ratio of the number of pixels with gray value equal to i to the total number of pixels.

2) Average Gradient (AG): For an M x N image f, the definition is

    AG = \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1}\sum_{j=1}^{N-1} \sqrt{\frac{(f(i+1,j)-f(i,j))^{2}+(f(i,j+1)-f(i,j))^{2}}{2}}          (2)

The average gradient reflects the contrast in detail and texture change; the larger the AG, the clearer the image.

3) Root Mean Square Error (RMSE): Suppose r is the source image (standard reference image) and f is the fused image; the root mean square error is defined as

    RMSE = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(r(i,j)-f(i,j)\right)^{2}}                                            (3)

The RMSE is used to measure the difference between the source image and the fused image; the smaller the RMSE, the smaller the difference and the better the fusion performance.
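  As an illustration only (the paper does not give an implementation), the three measures defined above can be computed as in the following NumPy sketch.

```python
import numpy as np

def information_entropy(img, levels=256):
    """IE = -sum_i h(i) log2 h(i), where h(i) is the fraction of pixels with gray value i."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    h = hist / hist.sum()
    h = h[h > 0]                       # empty bins contribute 0 * log 0 := 0
    return float(-np.sum(h * np.log2(h)))

def average_gradient(img):
    """Average gradient of Eq. (2) using forward differences over an M x N image."""
    f = img.astype(float)
    dx = f[1:, :-1] - f[:-1, :-1]      # vertical forward difference
    dy = f[:-1, 1:] - f[:-1, :-1]      # horizontal forward difference
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def rmse(reference, fused):
    """Root mean square error between the reference image r and fused image f, Eq. (3)."""
    r = reference.astype(float)
    f = fused.astype(float)
    return float(np.sqrt(np.mean((r - f) ** 2)))
```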





      Transform                              IE         AG        RMSE
      Contourlet Transform                   5.616      5.270     15.56
      Complex Contourlet Transform           5.717      5.401     17.23
      Non-Subsampled Contourlet Transform    6.170      5.462     15.43
                 Table 3. Comparison of fusion results

  From the table above we can see that the non-subsampled contourlet transform gives the highest values of information entropy and average gradient and the lowest root mean square error for the fused image, which shows that the NSCT provides the best fusion result.

                      IX. EXPERIMENTAL RESULTS

  We use parietal CT and MR images for the experimental analysis. Fig. 8(a) shows the CT image and Fig. 8(b) shows the MR image. From the figures we can see that the CT image does not show the soft tissues clearly, while in the MR image the soft tissues are clear but the coronal bones are not shown clearly. The experiment compared the contourlet transform, the complex contourlet transform and the non-subsampled contourlet transform; Figs. 8(c), (d) and (e) show the results of the fusion methods using these three contourlet transforms.

                           X. CONCLUSION

  This paper presents a comparison of wavelet-based and contourlet-based transforms in terms of various performance measures, namely Information Entropy (IE), Average Gradient (AG) and Root Mean Square Error (RMSE). The Non-Subsampled Contourlet Transform (NSCT) provides very good results for pixel-level fusion due to its improved directionality, better geometric representation and good directional selectivity. Hence, using the NSCT, one can obtain a fused image with better directionality, high geometric representation and better visual quality, which will help physicians to diagnose diseases more effectively.

                          XI. REFERENCES

[1]. Ge Wen, Gao Li-Qun, Ge-Wen, "A CT/MRI Image Fusion Algorithm Combining Non-separable Wavelet Transform and Regional Priority", 2008 International Symposium on Computer Science and Computational Technology.

[2]. T.M. Tu, W.C. Cheng, C.P. Cheng, P.S. Hirang, "Best Tradeoff for High Resolution Image Fusion to Preserve Spatial Details and Minimize Color Distortion", IEEE Geoscience and Remote Sensing Letters, 4(2), 2007, pp. 302-306.

[3]. G. Piella, "A General Framework for Multi-resolution Image Fusion: from Pixels to Regions", Information Fusion, Vol. 4, pp. 259-280, 2003.

[4]. S.G. Mallat, "Multifrequency Channel Decompositions of Images and Wavelet Models", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, no. 12, pp. 2091-2110, 1989.

[5]. Minh N. Do, Martin Vetterli, "The Contourlet Transform: An Efficient Directional Multi-resolution Image Representation", IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2001-2106, 2005.

[6]. Y. Kirankumar, "Comparison of Fusion Techniques Applied to Practical Images: Discrete Curvelet Transform using Wrapping Technique and Wavelet Transform", Journal of Theoretical and Applied Information Technology.

[7]. Yang Yang, Dang Sin Park, Shuying, Zhijun Eang, "Wavelet based Approach for Fusion of CT and MRI".

[7]. Minh N. Do, Martin Vetterli, "The Contourlet Transform: An Efficient Directional Multi-resolution Image Representation", IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2001-2106, 2005.

[8]. Paul Hill, Nishan Canagarajah and Dave Bull, "Image Fusion using Complex Wavelets", Dept. of Electrical and Electronic Engineering, The University of Bristol, Bristol, BS5 1UB, UK.

[9]. N.G. Kingsbury, "The Dual-tree Complex Wavelet Transform: A New Technique for Shift Invariant and Directional Filters", IEEE Digital Signal Processing Workshop (86), 1998.

[10]. S.G. Mallat, "A Theory for Multi-resolution Signal Decomposition: the Wavelet Representation", IEEE Transactions on PAMI, 11(7), pp. 674-693, 1989.

[11]. S. Nikolov, P.R. Hill, D.R. Bull, C.N. Canagarajah, "Wavelets for Image Fusion", in Wavelets in Signal and Image Analysis, from Theory to Practice, A. Petrosian and F. Meyer, Eds., Kluwer Academic Publishers, 2001.

[12]. N. Kingsbury, "A Dual-Tree Complex Wavelet Transform with Improved Orthogonality and Symmetry Properties", ICIP, vol. 2, 2000, pp. 375-378.

[13]. N. Kingsbury, "Design of Q-Shift Complex Wavelets for Image Processing using Frequency Domain Energy Minimization", International Conference on Image Processing, 2003(1): 1013-1016.

[14]. Minh N. Do, Martin Vetterli, "The Contourlet Transform: An Efficient Directional Multiresolution Image Representation", IEEE Transactions on Image Processing, Vol. 14, No. 12, pp. 2001-2106, 2005.

[15]. Yang Chai, Yante, Chaolong Ying, "CT and MRI Image Fusion based on Contourlet using a Novel Rule", IEEE Transactions on Image Processing.

[16]. C.S. Pattichis, M.S. Pattichis, "Adaptive Neural Network Imaging in Medical Systems", Proceedings of the 35th Asilomar Conference on Signals, Systems and Computers, Vol. 1, pp. 313-317, 2001.

[17]. R. Eslami and H. Radha, "New Image Transforms using Hybrid Wavelets and Directional Filter Banks: Analysis and Design", Proc. IEEE Int. Conf. Image Processing, 2005, pp. 733-736.

[18]. V. Chappelier, C. Guillemot, and S. Marinkovic, "Image Coding with Iterated Contourlet and Wavelet Transforms", Proc. IEEE Int. Conf. Image Processing, 2004, pp. 3157-3160.

[19]. G. Piella, "A General Framework for Multi-resolution Image Fusion: from Pixels to Regions", PNA-RCW11, ISSN 1386-3711, 2002.

[20]. B. Jeon and D.A. Landgrebe, "Decision Fusion Approach for Multi-temporal Classification", IEEE Transactions on Geoscience and Remote Sensing, Vol. 37, No. 7, pp. 1227-1233, 1999.

[21]. Dipeng Chen and Qi Li, "The Use of Complex Contourlet Transform on Fusion Scheme", World Academy of Science, Engineering and Technology, 12, 2005.

[22]. O. Rockinger, "Image Sequence Fusion using a Shift Invariant Wavelet Transform", in Proc. Int. Conf. Image Processing, Washington, 1997, Vol. 3, pp. 288-291.

[23]. Jianping Zhou, Arthur L. Cunha and Minh N. Do, "Non-Subsampled Contourlet Transform: Construction and Application in Enhancement", IEEE Transactions on Image Processing.





   Performance Comparison of Texture Pattern Based
    Image Retrieval Methods using Walsh, Haar and
     Kekre Transforms with Assorted Thresholding
                      Methods
                                   Dr. H.B.Kekre1, Sudeep D. Thepade2, Varun K. Banura3
                     1
                      Senior Professor, 2Ph.D.Research Scholar & Associate Professor, 3B.Tech (CE) Student
                                          Computer Engineering Department, MPSTME,
                                  SVKM‟s NMIMS (Deemed-to-be University), Mumbai, India
                         1
                           hbkekre@yahoo.com, 2sudeepthepade@gmail.com,3varunkbanura@gmail.com

Abstract— Novel texture pattern based image retrieval                   could be listed as art galleries [15,17], museums, archaeology
techniques using image maps and non-sinusoidal orthogonal               [6], architecture design [11,16], geographic information
image transforms is the theme of the work presented in this             systems [8], weather forecast [8,25], medical imaging [8,21],
section. Different texture patterns namely ‘4-pattern’, ‘16-            trademark databases [24,26], criminal investigations [27,28],
pattern’, ‘64-pattern’ are generated using Haar transform               image search on the Internet [12,22,23]. The paper attempts to
matrix, Walsh transform matrix and Kekre transform matrix.              provide better and faster image retrieval techniques.
The generated texture patterns are then compared with the
image maps (binary image maps for Walsh patterns and Ternary            A. Content Based Image Retrieval
image maps for Haar patterns & Kekre patterns) of an image to
generate the feature vector based on structural matching (as the            For the first time Kato et.al. [7] described the experiments
matching number of ones, minus ones per Walsh texture pattern           of automatic retrieval of images from a database by colour and
and matching number of ones, zeros, minus ones per Haar/Kekre           shape feature using the terminology content based image
texture pattern). Further the image maps are created using four         retrieval (CBIR). The typical CBIR system performs two major
thresholding methods as global, local, intermediate with 4 tiles        tasks [19,20] as feature extraction (FE), where a set of features
(intermediate-4) and intermediate with 9 tiles (intermediate-9).        called feature vector is generated to accurately represent the
Here total 36 variations of proposed novel image retrieval              content of each image in the database and similarity
methods using texture patterns are considered with three image          measurement (SM), where a distance between the query image
transforms (Walsh, Haar & Kekre), three variations in number            and each image in the database using their feature vectors is
of texture patterns (4, 16 & 64) and four different ways to             used to retrieve the top “closest” images [19,20,29].
generate image maps (with global, local, intermediate-4,
intermediate-9 thresholding methods). The proposed texture                   For feature extraction in CBIR there are mainly two
content based image retrieval (CBIR) techniques are tested on           approaches [8] feature extraction in spatial domain and feature
the image database with help of 55 queries (randomly selected 5         extraction in transform domain. The feature extraction in
from each of 11 image categories) fired on image database. The          spatial domain includes the CBIR techniques based on
performance comparison of texture pattern based CBIR methods            histograms [8], BTC [4,5,19], VQ [24,28,29]. The transform
is done with help of precision-recall crossover points.                 domain methods are widely used in image compression, as they
                                                                        give high energy compaction in transformed image [20,27]. So
   Keywords- CBIR; Walsh, Haar & Kekre Transforms;                      it is obvious to use images in transformed domain for feature
Texture; Patterns; Image Maps                                           extraction in CBIR [26]. But taking transform of image is time
                                                                        consuming. Reducing the size of feature vector using pure
                         I.   INTRODUCTION                              image pixel data in spatial domain and getting the improvement
    Today the information technology experts are facing                 in performance of image retrieval is shown in [1,2,3]. But the
technical challenges to store/transmit and index/manage image           problem of feature vector size still being dependent on image
data effectively to make easy access to the image collections of        size persists in [1,2,3]. Here the query execution time is further
tremendous size being generated due to large numbers of                 reduced by decreasing the feature vector size further and
images generated from a variety of sources (digital camera,             making it independent of image size. Many current CBIR
digital video, scanner, the internet etc.). The storage and             systems use the Euclidean distance [4-6,11-17] on the extracted
transmission is taken care of by image compression [4,7,8].             feature set as a similarity measure. The Direct Euclidian
The image indexing is studied in the perspective of image               Distance between image P and query image Q can be given as
database [5,9,10,13,14] as one of the promising and important           equation 1, where Vpi and Vqi are the feature vectors of image
research area for researchers from disciplines like computer            P and Query image Q respectively with size „n‟.
vision, image processing and database areas. The hunger of
superior and quicker image retrieval techniques is increasing
day by day. The significant applications for CBIR technology



                                                                   76                               http://sites.google.com/site/ijcsis/
                                                                                                    ISSN 1947-5500
                                                               (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                       Vol. 09, No.03, 2011
                      ED = \sqrt{\sum_{i=1}^{n} (V_{pi} - V_{qi})^{2}}                      (1)
                II.   GENERATION OF IMAGE MAPS

    Image bitmaps are prepared using four different types of thresholding considerations: global, local, intermediate-4 and intermediate-9. For global thresholding, the image maps of a colour image are generated using the three independent red (R), green (G) and blue (B) components of the image to calculate three individual colour thresholds and one overall luminance threshold [35]. Let X={R(i,j),G(i,j),B(i,j)}, where i=1,2,...,m and j=1,2,...,n, be an m x n colour image in RGB space. Let the individual colour thresholds be TR, TG and TB, computed as per equations 2, 3 and 4, and let the luminance threshold T be as given by equation 5.

    TR = \frac{1}{m\,n} \sum_{i=1}^{m} \sum_{j=1}^{n} R(i,j)            (2)

    TG = \frac{1}{m\,n} \sum_{i=1}^{m} \sum_{j=1}^{n} G(i,j)            (3)

    TB = \frac{1}{m\,n} \sum_{i=1}^{m} \sum_{j=1}^{n} B(i,j)            (4)

    T = \frac{TR + TG + TB}{3}                                          (5)

A. Generation of Global Binary Image Maps
    In binary image maps using global thresholding, the image is converted into ones and minus ones only. The binary image bitmaps BMr, BMg and BMb are generated using the individual colour component threshold values (TR, TG, TB). If a pixel in a component (R, G or B) is greater than or equal to the respective threshold, the corresponding pixel position of the bitmap will have the value 'one'; otherwise it will have the value 'minus one', as given by equations 6, 7 and 8. The binary image maps are used for comparison with texture patterns generated using the Walsh transform.

    BMr(i,j) = \begin{cases} 1, & \text{if } R(i,j) \ge TR \\ -1, & \text{if } R(i,j) < TR \end{cases}            (6)

    BMg(i,j) = \begin{cases} 1, & \text{if } G(i,j) \ge TG \\ -1, & \text{if } G(i,j) < TG \end{cases}            (7)

    BMb(i,j) = \begin{cases} 1, & \text{if } B(i,j) \ge TB \\ -1, & \text{if } B(i,j) < TB \end{cases}            (8)
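    As an illustration of equations 2 to 8 (the authors' implementation, per Section V, is in MATLAB; this NumPy sketch only restates the formulas):

```python
import numpy as np

def global_binary_maps(rgb):
    """Global binary image maps BMr, BMg, BMb of Eqs. (6)-(8).
    rgb: (m, n, 3) array; the thresholds TR, TG, TB are the plane means (Eqs. 2-4)."""
    rgb = rgb.astype(float)
    tr, tg, tb = rgb.reshape(-1, 3).mean(axis=0)          # Eqs. (2)-(4)
    # The luminance threshold T = (TR + TG + TB) / 3 of Eq. (5) is needed only
    # for the ternary maps of the next subsection.
    return tuple(np.where(rgb[:, :, c] >= t, 1, -1)
                 for c, t in enumerate((tr, tg, tb)))
```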
B. Generation of Global Ternary Image Maps
    The ternary image maps are used for comparison with the texture patterns generated using the Haar and Kekre transforms, as these patterns contain the three values one, zero and minus one. Here, first, for each component (R, G and B) the individual colour threshold intervals (lower Tshl and higher Tshh) are calculated as shown in equations 9, 10 and 11.

    Tshrl = TR - |TR - T| ,   Tshrh = TR + |TR - T|            (9)

    Tshgl = TG - |TG - T| ,   Tshgh = TG + |TG - T|            (10)

    Tshbl = TB - |TB - T| ,   Tshbh = TB + |TB - T|            (11)

    Then the individual colour plane global ternary image maps (TMr, TMg and TMb) are computed as given in equations 12, 13 and 14. If a pixel value of the respective colour component is greater than the respective higher threshold interval (Tshh), the corresponding pixel position of the image map gets the value 'one'; else if the pixel value is less than the respective lower threshold interval (Tshl), the corresponding pixel position of the image map gets the value 'minus one'; otherwise it gets the value 'zero'.

    TMr(i,j) = \begin{cases} 1, & \text{if } R(i,j) > Tshrh \\ 0, & \text{if } Tshrl \le R(i,j) \le Tshrh \\ -1, & \text{if } R(i,j) < Tshrl \end{cases}            (12)

    TMg(i,j) = \begin{cases} 1, & \text{if } G(i,j) > Tshgh \\ 0, & \text{if } Tshgl \le G(i,j) \le Tshgh \\ -1, & \text{if } G(i,j) < Tshgl \end{cases}            (13)

    TMb(i,j) = \begin{cases} 1, & \text{if } B(i,j) > Tshbh \\ 0, & \text{if } Tshbl \le B(i,j) \le Tshbh \\ -1, & \text{if } B(i,j) < Tshbl \end{cases}            (14)
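    A corresponding sketch of the ternary maps of equations 9 to 14, reading the lower and upper interval bounds of equations 9-11 as Tc - |Tc - T| and Tc + |Tc - T| for each colour plane c (this reading is an assumption, and the code is illustrative NumPy only):

```python
import numpy as np

def global_ternary_maps(rgb):
    """Global ternary image maps TMr, TMg, TMb of Eqs. (9)-(14): +1 above the
    upper interval bound, -1 below the lower bound, 0 in between."""
    rgb = rgb.astype(float)
    tr, tg, tb = rgb.reshape(-1, 3).mean(axis=0)      # plane thresholds (Eqs. 2-4)
    t = (tr + tg + tb) / 3.0                          # luminance threshold (Eq. 5)
    maps = []
    for c, tc in enumerate((tr, tg, tb)):
        lo, hi = tc - abs(tc - t), tc + abs(tc - t)   # interval bounds, assumed reading of Eqs. (9)-(11)
        plane = rgb[:, :, c]
        tm = np.zeros(plane.shape, dtype=int)
        tm[plane > hi] = 1
        tm[plane < lo] = -1
        maps.append(tm)
    return tuple(maps)                                # (TMr, TMg, TMb)
```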
                   1, if .G(i, j )  TG                               C. Generation of Global Ternary Image Maps
                  
     BMg(i, j )                                       (7)                The binary or ternary image maps are generated using four
                   1, if .G (i, j )  TG
                                                                       sundry methods of thresholding as global, intermediate-4,
                                                                        intermediate-9 and local. For global thresholding based image



For global thresholding based image maps, the global threshold values are computed as the average of all pixel intensity values in the respective plane of the considered colour image (as given by equations 2, 3, 4 and 5). In the case of intermediate-4 thresholding, the image is divided into four equal non-overlapping parts (as shown in (b) of figure 1) and the image map of each part is generated using the average of the pixels of only that part as the threshold. In the case of intermediate-9 thresholding based image maps, the image is divided into nine non-overlapping equal parts. For local thresholding, each non-overlapping 2x2 pixel window is considered separately. Figure 1 shows the pixel group consideration for the respective thresholding methods, where the gray shading indicates the group of pixel values considered for threshold calculation in the respective thresholding method and the number of such possible pixel groups is indicated by the black lines.

            (a) Global                   (b) Intermediate-4
            (c) Intermediate-9           (d) Local
 Figure 1. Pixel group consideration for respective thresholding methods considered in image map generation for CBIR with texture patterns

              III.   TEXTURE PATTERN GENERATION

    Using the non-sinusoidal transform matrices, assorted texture patterns namely 4-pattern, 16-pattern and 64-pattern are generated. To generate N² texture patterns (N²-pattern), an NxN transform matrix is considered and the element-wise multiplication of each row of the transform matrix is taken with all possible rows of the same matrix (consideration of one row at a time gives one pattern). The texture patterns obtained are orthogonal in nature. The generation methods of Walsh transform, Haar transform and Kekre transform patterns are elaborated respectively in sections A, B and C below, and a sketch of the construction follows section B.

A. Walsh Texture Pattern Generation
    The 4, 16 and 64 Walsh texture patterns are generated using Walsh transform matrices [21,22,26,36] of size 2x2, 4x4 and 8x8 respectively. The generated four and sixteen Walsh texture patterns [34] are shown in figure 2. The 2x2 Walsh transform matrix is shown as 2(a); each row of this matrix is considered one at a time and is multiplied with all rows of the same matrix to generate Walsh texture patterns, as shown in 2(b). Figure 2(c) gives the visualization of the 4 Walsh texture patterns (4-pattern). The 4x4 Walsh transform matrix is given in 2(d) and the visualization of the 16 Walsh transform patterns generated using it is shown in 2(e), where black colour represents the value '1' in the pattern and the value '-1' is represented by white colour. The obtained Walsh texture patterns are then resized to the size of the image for which the texture features have to be extracted.

   (a) 2x2 Walsh Matrix    (b) 2x2 Walsh Matrix    (c) Generated 4 Walsh Texture Patterns (4-pattern)
   (d) 4x4 Walsh Matrix    (e) Generated 16 Walsh Texture Patterns (16-pattern)
          Figure 2. Walsh Texture Pattern Generation

B. Haar Texture Pattern Generation
    The 2x2, 4x4 and 8x8 Haar transform matrices are used to generate the 4, 16 and 64 Haar texture patterns respectively. The generation of four and sixteen Haar texture patterns [32] is shown in figure 3. The 2x2 Haar transform matrix [9,30,31] is shown as 3(a); each row of this matrix is considered one at a time and is multiplied with all rows of the same matrix to generate Haar texture patterns, as shown in 3(b). Figure 3(c) gives the visualization of the 4 Haar texture patterns (4-pattern). The 4x4 Haar transform matrix is given in 3(d) and the 16 Haar transform patterns generated using it are shown in 3(e), where black colour represents the value '1' in the pattern, grey colour represents the value '0' and the value '-1' is represented by white colour. The obtained Haar texture patterns are then resized to the size of the image for which the texture features have to be extracted. All the generated texture patterns are orthogonal to each other.
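    The row-pairing construction described above can be read as taking, for every ordered pair of rows of the NxN transform matrix, the outer product of the two rows as one NxN pattern (before quantisation such patterns are mutually orthogonal); the sketch below follows that reading, which is an assumption, and quantises the entries to {-1, 0, +1} as in the Haar and Kekre pattern illustrations.

```python
import numpy as np

def texture_patterns(transform_matrix):
    """Generate N*N texture patterns from an N x N transform matrix: each ordered
    pair of rows gives one N x N pattern (outer product of the two rows), quantised
    to {-1, 0, +1}. For use in CBIR the patterns are then resized to the image size."""
    m = np.asarray(transform_matrix, dtype=float)
    n = m.shape[0]
    return [np.sign(np.outer(m[k], m[l])) for k in range(n) for l in range(n)]

# Example: the 4-pattern set from the 2x2 Walsh matrix [[1, 1], [1, -1]].
walsh_2x2 = np.array([[1, 1], [1, -1]])
four_pattern = texture_patterns(walsh_2x2)     # 4x4 matrix would give the 16-pattern set
```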



   (a) 2x2 Haar Matrix    (b) 2x2 Haar Matrix    (c) Generated 4 Haar Texture Patterns (4-pattern)
   (d) 4x4 Haar Matrix    (e) Generated 16 Haar Texture Patterns (16-pattern)
           Figure 3. Haar Texture Pattern Generation

C. Kekre Texture Pattern Generation
    The 4, 16 and 64 Kekre texture patterns are generated using Kekre transform matrices [33,36] of size 2x2, 4x4 and 8x8 respectively. Figure 4 gives the generation of the four and sixteen Kekre texture patterns. The 2x2 Kekre transform matrix is shown as 4(a); each row of this matrix is considered one at a time and is multiplied with all rows of the same matrix to generate Kekre texture patterns, as shown in 4(b), with all the negative values replaced by '-1'. Figure 4(c) gives the visualization of the 4 Kekre texture patterns (4-pattern). The 4x4 Kekre transform matrix is given in 4(d) and the visualization of the 16 Kekre transform patterns generated using it is shown in 4(e), where black colour represents the value '1' in the pattern, grey colour represents the value '0' and the value '-1' is represented by white colour. The obtained Kekre texture patterns are then resized to the size of the image for which the texture features have to be extracted.

   (a) 2x2 Kekre Matrix    (b) 2x2 Kekre Matrix    (c) Generated 4 Kekre Texture Patterns (4-pattern)
   (d) 4x4 Kekre Matrix    (e) Generated 16 Kekre Texture Patterns (16-pattern)
          Figure 4. Kekre Texture Pattern Generation

             IV.   CBIR USING TEXTURE PATTERNS

   In all, a total of thirty six variations of the proposed CBIR method are possible using the four methods of image maps (alias local, global, intermediate-4 and intermediate-9), three image transforms (Haar, Kekre and Walsh) and three different sets of texture patterns (4-pattern, 16-pattern and 64-pattern). For feature extraction in CBIR using texture patterns, first the image map is generated using the appropriate thresholding method (local, global, intermediate-4 or intermediate-9). Then the desired texture pattern set is generated (4-pattern, 16-pattern or 64-pattern) using the corresponding image transform (Haar, Kekre or Walsh).

    To generate feature vectors in the case of CBIR with Walsh texture patterns, the binary image map of the image is compared with each pattern of the generated Walsh texture patterns to find the matching number of ones and minus ones. The feature vector will have two values (number of matching '1' and '-1') per pattern per colour plane. The per-image feature vector size for Walsh texture pattern based CBIR is given by equation 15.

 Feature vector size = 2*3*(no. of considered texture patterns)             (15)

    In the case of Haar or Kekre texture pattern based CBIR, the ternary image map of the image is compared with each pattern of the Haar or Kekre texture patterns to find three values per colour plane per pattern, namely the number of matching ones, zeros and minus ones. The feature vector is formed using all these numbers of matches (for '1', '0' and '-1'). The size of the feature vector of the image for Haar or Kekre texture pattern based CBIR is given by equation 16. Table 1 shows the feature vector size for the 4, 16 and 64 texture patterns of the respective image transforms; a sketch of this feature extraction follows the table.

 Feature vector size = 3*3*(no. of considered texture patterns)             (16)

      Table 1. Feature vector size of image retrieval using texture patterns
    CBIR Technique             4-Pattern      16-Pattern      64-Pattern
    Walsh Texture Patterns         8              32              128
    Haar Texture Patterns         12              48              192
    Kekre Texture Patterns        12              48              192
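    Putting the pieces together, a sketch of the structural-matching feature vector and of the Euclidean-distance ranking of equation 1 (illustrative NumPy only, not the authors' MATLAB code; the image maps and the patterns resized to the image size are assumed to have been prepared as described above):

```python
import numpy as np

def texture_feature_vector(image_maps, patterns, ternary=True):
    """Structural-matching feature vector: for every colour-plane map and every
    (already resized) texture pattern, count positions where map and pattern are
    both +1, both 0 (ternary maps only) and both -1.
    Length = 2*3*P for binary/Walsh (Eq. 15) or 3*3*P for ternary/Haar/Kekre (Eq. 16)."""
    values = (1, 0, -1) if ternary else (1, -1)
    feats = [np.sum((pm == v) & (pat == v))
             for pm in image_maps           # R, G, B plane maps
             for pat in patterns            # texture patterns resized to the image size
             for v in values]
    return np.asarray(feats, dtype=float)

def retrieve(query_features, database_features, top_k=20):
    """Rank database images by the direct Euclidean distance of Eq. (1)."""
    d = np.sqrt(((database_features - query_features) ** 2).sum(axis=1))
    return np.argsort(d)[:top_k]
```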
    Using the three assorted texture pattern sets generated from the Walsh, Haar and Kekre transform matrices, along with the image maps formed by the different thresholds, namely global, intermediate-4, intermediate-9 and local, in total 36 novel feature vector generation methods have been tested, resulting in six new image retrieval techniques. The main advantage of the proposed CBIR methods is the reduced time complexity for query execution, due to the reduced size of the feature vector resulting in



faster image retrieval with better performance. Also the                                pattern based CBIR methods with respective image map
feature vector size is independent of image size in proposed                            thresholding techniques are shown in figure 6. In CBIR using
CBIR methods.                                                                           texture patterns intermediate-4 thresholding has given better
                                                                                        performance than other considered thresholding methods. For
                           V.     IMPLEMENTATION                                        each thresholding method 16 pattern has given better
    The implementation of the discussed CBIR techniques is                              performance than 4 or 64. Except global thresholding, for 16
done in MATLAB 7.0 using a computer with Intel Core 2 Duo                               patterns Haar transform based 16-patterns have given better
Processor T8100 (2.1GHz) and 2 GB RAM. The CBIR                                         performance. Except local thresholding, for 64 pattern Kekre
techniques are tested on the Wang image database [18] of                                transform has shown better performance.
1000 variable size images spread across 11 categories of
human being, animals, natural scenery and manmade things,
etc. The categories and distribution of the images is shown in
table 2.
              Table 2. Image Database: Category-wise Distribution

  Category               Tribes              Buses              Beaches
    No. of
                            85                 99                     99
   Images
  Category               Horses           Mountains            Airplanes
    No. of
                            99                 61                     100
   Images
  Category             Dinosaurs          Elephants              Roses
    No. of                                                                               Figure 5. Crossover points of precision-recall for proposed texture pattern
                            99                 99                     99
   Images                                                                                  based CBIR methods with respect to the considered image transforms
  Category           Monuments              Sunrise
    No. of
                            99                 61
   Images

To assess the retrieval effectiveness, we have used the
precision and recall as statistical comparison parameters [4,5]
for the proposed CBIR techniques. The standard definitions
for these two measures are given by the equations 17 and 18.
   Precision = (Number of relevant images retrieved) / (Total number of images retrieved)                  (17)

   Recall = (Number of relevant images retrieved) / (Total number of relevant images in database)          (18)
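For a single query these two measures can be computed as in the following sketch (illustrative only):

```python
def precision_recall(retrieved_ids, relevant_ids):
    """Precision and recall of Eqs. (17) and (18) for one query.

    retrieved_ids: ids of the images returned by the retrieval system
    relevant_ids : ids of all database images relevant to the query"""
    retrieved, relevant = set(retrieved_ids), set(relevant_ids)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```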

                                                                                         Figure 6. Crossover points of precision-recall for proposed texture pattern
                                                                                         based CBIR methods with respective image map thresholding techniques
        VI.       RESULTS OF CBIR USING TEXTURE PATTERNS
                                                                                        Figure 7 gives crossover points of precision-recall for
    The crossover point of precision-recall plays very                                  proposed texture pattern based CBIR methods with
important role in performance comparison of image retrieval                             corresponding number of patterns considered. Here 16 texture
methods, higher crossover point value indicates better image                            patterns have shown better performance than 4 or 64 texture
retrieval. The crossover points of average precision–recall                             patterns. In case of 4-texture patterns all transforms have
values of firing 55 queries on image database for proposed                              shown same performance (because of the 2x2 transform
texture pattern based image retrieval methods are computed                              matrices for all transforms are alike). In 16 patterns except
and plotted in figures 5, 6 and 7. Figure 5 gives crossover                             global thresholding Haar transform performs better. In case of
points of precision-recall for proposed texture pattern based                           64 patterns better performance is given by Kekre transform
CBIR methods with respect to the considered image                                       except local thresholding. The Haar 16 pattern based CBIR
transforms alias Walsh, Haar and Kekre. In all transform                                with intermediate 4 thresholding has shown best performance
texture pattern based CBIR methods except Walsh local                                   among all CBIR variations considered. In texture pattern
thresholding, 16 patterns have consistently performed well.                             based CBIR, image retrieval using the Haar 16 patterns with
Also for all transforms intermediate-4 thresholding has given                           intermediate 4 thresholding has given best performance with
better performance in all 4, 16 and 64 texture patterns. The                            precision-recall crossover point value 0.461524. The second
crossover points of precision-recall for proposed texture                               best performance with precision-recall crossover point value



0.45 is given by CBIR with Kekre 16 patterns with                                   Table 4. Performance Comparison of Number of Texture Patterns considered
                                                                                                     for image retrieval using Texture Patterns
intermediate 4 thresholding, the next in the performance is
image retrieval based on Kekre 16 patterns with global                                                            Number of             Average
                                                                                            Comparative
thresholding with crossover point value 0.4489 followed by                                                         Texture             Crossover
                                                                                            Performance
Haar 16 patterns with global thresholding and crossover point                                                      Patterns           Point Value
value 0.44834.                                                                                  Best              16 Pattern            0.43466
                                                                                             Second Best          64 Pattern           0.415209
                                                                                               Poorest             4 Pattern           0.412495

                                                                                     Table. 5 Performance Comparison of Thresholding method used to prepare
                                                                                               image maps for image retrieval using Texture Patterns

                                                                                                                                          Average
                                                                                            Comparative          Thresholding
                                                                                                                                         Crossover
                                                                                            Performance            Method
                                                                                                                                        Point Value
                                                                                               Best             Intermediate 4            0.43597
                                                                                            Second Best              Global               0.42180
                                                                                             Third Best          Intermediate 9           0.41363
 Figure 7. Crossover points of precision-recall for proposed texture pattern                  Poorest                 Local               0.41171
  based CBIR methods with corresponding number of patterns considered

 VII. PERFORMANCE COMPARISION OF VARIANTS IN TEXTURE                                   Total four varied thresholding methods are considered for
                  PATTERN BASED CBIR METHODS                                        image maps preparation for image retrieval using texture
                                                                                    patterns whose performance comparison is given in table 5 by
   The novel image retrieval methods using texture patterns
                                                                                    means of average precision-recall crossover point values of
are presented in this section. Here in all 36 variations of
                                                                                    texture based CBIR variants using respective thresholding
proposed image retrieval methods with texture patterns are
                                                                                    method. Intermediate 4 thresholding has been proven better.
proposed using three image transforms (Haar, kekre & Walsh),
                                                                                    The performance ranking for thresholding methods used in
three types of texture patterns (16, 64 & 4) and four ways of
                                                                                    proposed CBIR with texture patterns, starting with the best can
thresholding used to prepare image maps (intermediate-4,
                                                                                    be given as intermediate 4, global, intermediate 9 and local.
global, intermediate-9 & local). The average of precision-
recall crossover point values for respective variation is                                                     VIII. REFERENCES
considered for the performance ranking of these variations.                         [1]   Dr. H.B.Kekre, Sudeep D. Thepade, Varun K. Banura, “Augmentation
Three image transforms namely Walsh, Haar and Kekre are                                   of Colour Averaging Based Image Retrieval Techniques using Even
considered to generate texture patterns. From the experimental results it is found that the Haar transform gives the best performance, followed by the Kekre transform and then the Walsh transform, in the proposed CBIR methods, as indicated by the average precision-recall crossover point values of the texture-based CBIR variants for the respective image transforms given in Table 3. The numbers of texture patterns considered here are 4, 16 and 64. The 16 texture patterns give the best performance in CBIR using texture patterns; the 64 texture patterns give the second-best performance, followed by the 4 texture patterns with the poorest performance, as per the average precision-recall crossover point values of the texture-based CBIR variants for the respective numbers of texture patterns given in Table 4.

   Table 3. Performance comparison of image transforms used in image retrieval using texture patterns

   Comparative Performance    Image Transform    Average Crossover Point Value
   Best                       Haar               0.42320
   Second Best                Kekre              0.41998
   Poorest                    Walsh              0.41916
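The comparisons in Tables 3 and 4 are based on the average precision-recall crossover point of each CBIR variant. As a minimal illustration only (not the authors' implementation), the following Java sketch computes this crossover for a single query from a hypothetical ranked relevance list; the per-query values would then be averaged over all queries to obtain figures such as those in Table 3.

    // Illustrative sketch: precision-recall crossover for one query.
    // The ranked relevance list and the relevance test are hypothetical placeholders.
    public class CrossoverPoint {

        /** relevant[i] is true if the i-th ranked retrieved image belongs to the query class. */
        public static double crossover(boolean[] relevant, int totalRelevantInDatabase) {
            int retrievedRelevant = 0;
            double previousGap = Double.MAX_VALUE;
            double crossoverValue = 0.0;
            for (int k = 1; k <= relevant.length; k++) {
                if (relevant[k - 1]) {
                    retrievedRelevant++;
                }
                double precision = (double) retrievedRelevant / k;
                double recall = (double) retrievedRelevant / totalRelevantInDatabase;
                double gap = Math.abs(precision - recall);
                // The crossover point is where precision and recall become (nearly) equal.
                if (retrievedRelevant > 0 && gap < previousGap) {
                    previousGap = gap;
                    crossoverValue = (precision + recall) / 2.0;
                }
            }
            return crossoverValue;
        }

        public static void main(String[] args) {
            // Hypothetical ranked relevance pattern for a query class with 100 relevant images.
            boolean[] relevant = new boolean[200];
            for (int i = 0; i < relevant.length; i++) {
                relevant[i] = (i % 2 == 0); // placeholder pattern, not real retrieval data
            }
            System.out.println("Crossover value: " + crossover(relevant, 100));
        }
    }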








                 AUTHORS PROFILE

Dr. H. B. Kekre has received B.E. (Hons.) in Telecomm. Engineering from Jabalpur University in 1958, M.Tech (Industrial Electronics) from IIT Bombay in 1960, M.S.Engg. (Electrical Engg.) from University of Ottawa in 1965 and Ph.D. (System Identification) from IIT Bombay in 1970. He has worked as Faculty of Electrical Engg. and then HOD Computer Science and Engg. at IIT Bombay. For 13 years he worked as professor and head of the Department of Computer Engg. at Thadomal Shahani Engineering College, Mumbai. He is now Senior Professor at MPSTME, SVKM's NMIMS University. He has guided 17 Ph.D.s, more than 100 M.E./M.Tech and several B.E./B.Tech projects. His areas of interest are Digital Signal Processing, Image Processing and Computer Networking. He has more than 320 papers in National/International Conferences and Journals to his credit. He was Senior Member of IEEE and is presently Fellow of IETE and Life Member of ISTE. Recently ten students working under his guidance have received best paper awards and two have been conferred the Ph.D. degree of SVKM's NMIMS University. Currently 10 research scholars are pursuing the Ph.D. program under his guidance.

Sudeep D. Thepade has received the B.E. (Computer) degree from North Maharashtra University with Distinction in 2003 and the M.E. in Computer Engineering from University of Mumbai in 2008 with Distinction, and is currently pursuing a Ph.D. from SVKM's NMIMS, Mumbai. He has about 07 years of experience in teaching and industry. He was Lecturer in the Dept. of Information Technology at Thadomal Shahani Engineering College, Bandra(W), Mumbai for nearly 04 years, and is currently working as Associate Professor in Computer Engineering at Mukesh Patel School of Technology Management and Engineering, SVKM's NMIMS University, Vile Parle(W), Mumbai, INDIA. He is a member of the International Association of Engineers (IAENG) and the International Association of Computer Science and Information Technology (IACSIT), Singapore. He has been on the International Advisory Board of many International Conferences and is a Reviewer for many reputed International Journals. His areas of interest are Image Processing and Computer Networks. He has more than 100 research papers in National/International Conferences/Journals to his credit, with a Best Paper Award at International Conference SSPCCIN-2008, Second Best Paper Award at the ThinkQuest-2009 National Level paper presentation competition for faculty, Best Paper Award at Springer International Conference ICCCT-2010 and second best project award at Manshodhan 2010.

Varun K. Banura is currently pursuing B.Tech. (CE) from MPSTME, NMIMS University, Mumbai. His areas of interest are Image Processing and Computer Networks. He has 07 research papers in International Conferences/Journals to his credit.






              A Generic Rule-Based Agent for Monitoring Temporal Data Processing


S. Laban
International Data Centre (IDC), Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), Vienna, Austria*
shaban.laban@ctbto.org

A.I. El-Desouky
Computer and Systems Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt

A. S. ElHefnawy
Information Technology Department, Faculty of Computer & Information, Mansoura University, Mansoura, Egypt


Abstract—Most current real-time monitoring tools are task-specific, lack alert capabilities, are inflexible, and consume many of the organization's resources in maintaining and adding newly monitored objects. This paper introduces the design and implementation of a generic rule-based agent model that minimizes these limitations and restrictions. The proposed intelligent agent uses dedicated rules for workflow monitoring and for generating alerts as well as exception reports to the operators. A unified data model is proposed to reduce the irregularity and complexity of the monitored data and objects. The suggested rule-based monitoring agent is generic, autonomous, configurable, and platform-independent.

     Keywords- rule-based; monitoring; workflow; agent.

                          I.    INTRODUCTION
    The workflow of real-time data processing systems generally consists of a series or sequence of complex stages and processes [1]. In every stage, the states of the different objects or elements, as well as their internal attributes, change dynamically during its different steps. Generally, real-time systems use monitoring tools to increase their productivity and efficiency by detecting anomalies and potential workflow failures and by tracing the workflow progress of the different processes [2-4]. The operators of real-time systems use several instances of the monitoring tools to supervise, control, monitor workflow progress and trace the states of the different objects and resources during the life cycle of the monitored systems.

    Most traditional monitoring tools have many limitations. Such tools are usually platform-dependent, task-specific and inflexible, and they offer only limited resource management. They also consume much of the organization's resources and require considerable time and additional human resources to maintain or to configure newly monitored objects. Moreover, they usually lack intelligent decision support. In addition, the monitoring tools are often not available to remote users and are not portable. Furthermore, the monitored data have no standard format or unified structure, which complicates building such monitoring tools.

    The aim of this paper is to present an approach for implementing a platform-independent and task-unspecific rule-based agent model that minimizes the previous limitations and restrictions. The agent proposes a unified and flexible format for representing the temporal data and objects (intervals) as well as their different attributes. The agent also uses customized rules for workflow monitoring and for generating different exception reports and alerts when necessary.

    The overall structure of the system within which the proposed agent is modeled is explained in Section 2. In Section 3, the detailed architecture and design of the rule-based agent are presented. A real-time practical implementation of the proposed system is illustrated in Section 4. Section 5 evaluates the performance of the suggested agent and presents the results of the comparison with the available tools. Section 6 concludes this paper by summarizing the contributions of the proposed approach and discussing future directions.

                    II.   THE OVERALL FRAMEWORK
    In order to achieve the maximum scalability, efficiency, and robustness of the overall system, the web-enabled monitoring approach is structured into three main modules, as shown in Fig. 1. The arrows in the figure indicate the direction of data flow between the different modules. The dashed line indicates the direct data flow for intranet monitoring agents.

    Figure 1. Overall framework (data flow: Real-time Data Processing & Repositories → Web server/Services → Rule-Based Monitoring Agent)

    The first module comprises a set of knowledge-based repositories as well as intelligent scheduled or continuously running programs. These programs act as data collectors, processors, and producers. They regularly collect the states of objects from the different data sources of the monitored system. Then, they infer the monitored data and classify the status of the different objects, as well as their necessary attributes/properties, from the different stages of the monitored system. Finally, these programs store the data in dedicated data repositories either in an Extensible Markup Language (XML) format or in a unified standard data format [8,9].

* Disclaimer: The views expressed in this paper are those of the authors and do not necessarily reflect the view of the CTBTO.




    The second module consists of any ordinary web server and a set of CGI programs or services (acting as intermediate data producers) in charge of reading the data generated by the first module, preparing the browser configuration data and user-specific views, and sending all the information to the monitoring agents in Hypertext Markup Language (HTML) using the Hypertext Transfer Protocol (HTTP) [10,11].

    The third module is our proposed monitoring agent (the data consumers). This agent consists of a set of Java classes or programs that can be run either from web browsers on all known platforms as a Java applet or in standalone mode [12-14].

            III.   THE PROPOSED MONITORING AGENT MODEL
    In this paper, rule-based techniques and agent programming concepts are applied to workflow monitoring in order to provide more flexibility and intelligence to the monitoring process [18]. The agent regularly reads the necessary information from the web server using the HTTP protocol. Then, the agent aggregates, sorts, infers, and finally displays the monitored data accordingly in the web browser. The agent uses a dedicated customized rule engine for monitoring purposes as well as for generating exception reports and early warning alerts to the operators. The agent is able to communicate with its user/operator: it reads the user's updated configuration parameters and reacts to user requests and needs. The internal structure, including the main functions of the monitoring agent, is shown in Fig. 2.

    Figure 2. The internal structure of the monitoring agent model (an external interface receives requests and configuration/monitored data; a configuration & control component updates the agent knowledge-base of monitoring rules; the inference engine operates on the working memory of facts, reports and alerts and drives the GUI components)
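As an illustrative sketch only (not the authors' code), the following Java fragment shows how such an agent could periodically pull the prepared monitoring data from the web server over HTTP; the endpoint URL and the polling interval are hypothetical placeholders.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal polling loop: fetch the monitored-data document from the web server,
    // hand it to the parsing/inference stages, then sleep until the next cycle.
    public class MonitorPoller {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint published by the second (web server/CGI) module.
            URL source = new URL("http://monitoring-server.example.org/cgi-bin/intervals");
            long pollIntervalMs = 60_000; // assumed refresh period

            while (true) {
                HttpURLConnection conn = (HttpURLConnection) source.openConnection();
                conn.setRequestMethod("GET");
                StringBuilder payload = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        payload.append(line).append('\n');
                    }
                }
                // Placeholder for the aggregate/sort/infer/display steps described above.
                System.out.println("Received " + payload.length() + " characters of monitored data");
                Thread.sleep(pollIntervalMs);
            }
        }
    }

In the actual agent the fetched payload would be parsed into the interval elements described in Section III-A below before being passed to the rule engine.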
A. Unified Data Representation Model
    In the monitoring agent, and in order to reduce data irregularity and complexity, the monitored objects are represented and modeled as temporal (interval-based) elements [15]. Each interval-based element, in our case the IntervalElementInfo class, is composed of a set of well-defined fields or slots and an array of optional attributes (the AttributeInfo class). The simplified UML class diagram and the composition relationship are shown in Fig. 3.

    Figure 3. Class diagram and composition relationship (IntervalElementInfo: ElementId:int, ElementClass:String, ElementName:String, ElementState:String, ElementEpocStart:int, ElementEpocEnd:int, ElementColor:Color, Attributes[]:AttributeInfo, addAttribute(AttributeInfo), setElementClass(String), getAttribute():AttributeInfo, getElementClass():String, getState():String, setState(String); AttributeInfo: Name:String, Value:String, Alias:String, color:Color, getName():String, getValue():String, getAlias():String, setColor(Color), getColor():Color)

    The implementation of such a unified data format is necessary for the monitoring agent model in order to support new generations of portable digital-assistant devices, such as the iPhone, embedded systems, and other similar devices.
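Based only on the fields and operations listed in Fig. 3, a plain-Java sketch of the two classes could look as follows; the exact signatures and omitted members of the authors' implementation may differ.

    import java.awt.Color;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the unified interval-based data model of Fig. 3 (field names taken
    // from the class diagram; method bodies and argument types are illustrative).
    class AttributeInfo {
        private final String name;
        private final String value;
        private final String alias;
        private Color color;

        AttributeInfo(String name, String value, String alias) {
            this.name = name;
            this.value = value;
            this.alias = alias;
        }

        String getName()  { return name; }
        String getValue() { return value; }
        String getAlias() { return alias; }
        void setColor(Color c) { this.color = c; }
        Color getColor()       { return color; }
    }

    class IntervalElementInfo {
        private int elementId;
        private String elementClass;
        private String elementName;
        private String elementState;
        private int elementEpocStart;   // interval start (epoch time)
        private int elementEpocEnd;     // interval end (epoch time)
        private Color elementColor;
        private final List<AttributeInfo> attributes = new ArrayList<>();

        void addAttribute(AttributeInfo a) { attributes.add(a); }
        AttributeInfo getAttribute(int i)  { return attributes.get(i); }

        void setElementClass(String c)  { this.elementClass = c; }
        String getElementClass()        { return elementClass; }

        void setState(String state)     { this.elementState = state; }
        String getState()               { return elementState; }
    }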
B. Configuration and Control
    The agent configuration is carried out either during program loading (default configuration) or while the agent is running (user preferences). During program initialization, the parameters are fed to the agent using the HTML <APPLET> tag, or from a configuration file if the agent is running in standalone mode.

    During run time, the user can change most of the loaded parameters. This is done by selecting either the "Sort/Filter Viewed Data" tab, the "View/Update Rules" tab or the "Configuration" tab. The configuration and control parameters are classified into four main categories, as follows.

    The first category contains the connection parameters that are necessary for identifying and connecting to the specified HTTP server.

    The second category contains the parameters that are needed to read, parse, and display the data received from the server.




    The third category contains the parameters that are needed to set up the graphical display of the applet itself.

    Finally, the last category contains the parameters needed to read, parse and map the various rule definitions, to generate alerts and exception reports, and to display the various states of the intervals.
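As a minimal sketch, assuming hypothetical parameter names, the dual configuration path described above (applet parameters versus a standalone configuration file) could be realized with the standard applet and properties APIs:

    import java.applet.Applet;
    import java.io.FileInputStream;
    import java.util.Properties;

    // Sketch of the two configuration paths: parameters from the HTML <APPLET> tag
    // when running in a browser, or a properties file when the agent runs in
    // standalone mode. Parameter names are illustrative only.
    public class AgentConfig {
        String serverUrl;
        int refreshSeconds;

        // Default configuration when the agent is loaded as an applet.
        void loadFromApplet(Applet applet) {
            serverUrl = applet.getParameter("monitor.server.url");
            String refresh = applet.getParameter("monitor.refresh.seconds");
            refreshSeconds = (refresh != null) ? Integer.parseInt(refresh) : 60;
        }

        // Default configuration when the agent runs in standalone mode.
        void loadFromFile(String path) throws Exception {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
            serverUrl = props.getProperty("monitor.server.url");
            refreshSeconds = Integer.parseInt(props.getProperty("monitor.refresh.seconds", "60"));
        }
    }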


C. Rule Engine
    The agent uses a time-dependent rule engine for state identification and classification. The engine is a forward-chaining rule engine based on JRuleEngine [16]. The default rule definitions are loaded at start-up time from the HTTP server.

    The simplified rule syntax can be described as follows:

    RULE-n [description]
    [SALIENCE salience]
    IF
          IntervalElement.Property Operator State1 [State2]
          [IntervalElement.Property Operator State1 [State2]]
    THEN
          set State Identifier <color code>
    [THEN generate alert for Operator]
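For illustration only, a rule written in this syntax might flag a station whose data have stopped arriving; the property names, states and colour code below are hypothetical and not taken from the authors' rule base:

    RULE-3 Late station data
    SALIENCE 10
    IF
          IntervalElement.ElementClass EQUALS STATION
          IntervalElement.ElementState EQUALS MISSING
    THEN
          set State LATE <red>
    THEN  generate alert for Operator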


    Due to the nature of the monitored temporal data and of the agent, currently one rule is fired per matched interval for state setting. A rule with higher salience has priority over a rule with lower salience. By default the agent uses the depth strategy for conflict resolution, which means that newly activated rules are placed above all rules of the same salience. Additionally, an exceptional conflict report entry is made available to the operator for conflicting rules.
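A minimal sketch of this selection policy (not the JRuleEngine internals) is shown below: activations are ordered by salience, ties are broken in favour of the most recently activated rule, and only the top activation fires for a given interval.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Illustrative conflict-resolution sketch: pick one rule activation per interval,
    // preferring higher salience and, within equal salience, the most recent
    // activation (depth strategy). Rule evaluation itself is out of scope here.
    class Activation {
        final String ruleName;
        final int salience;
        final long activationOrder;   // larger value = activated more recently

        Activation(String ruleName, int salience, long activationOrder) {
            this.ruleName = ruleName;
            this.salience = salience;
            this.activationOrder = activationOrder;
        }
    }

    class ConflictResolver {
        private final PriorityQueue<Activation> agenda = new PriorityQueue<>(
                Comparator.comparingInt((Activation a) -> a.salience)
                          .thenComparingLong(a -> a.activationOrder)
                          .reversed());

        void activate(Activation a) { agenda.add(a); }

        // Only the winning rule sets the state of the matched interval.
        Activation fireOne() { return agenda.poll(); }
    }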
    The graphical implementation of the proposed rule syntax in the agent is illustrated in Fig. 4. Rules can be inserted dynamically at run time, and new conditions can be added to any existing rule. The rule set is displayed in an editable table that can be altered by the user. In addition, the user can directly change the state-color relationship by pressing the color itself: a pop-up dialog appears and the user can select a new color and hit the "OK" button. A check box is also available for every rule, allowing the agent to generate alerts automatically when the rule matches. To activate the overall changes, the user needs to press the "Update Workflow" button in order to force the agent to immediately reprocess the new rule definitions. Fig. 5 shows a snapshot of an alert window.

    Figure 4. Agent rules window

    Figure 5. Example of an alerts window




D. Main Graphical User Interface Components
    Fig. 6 shows the different components of the monitoring agent. The main component of the agent is the tabbed pane, which currently contains six child panels: the graphical panel, the sort panel, the rules panel, the exceptional reports panel, the alerts panel and the configuration panel.

    The graphical panel consists of many different widgets or components to support monitoring the data over time. The first panel is the "Object Property" panel; each "Object Property" pair is displayed in a separate row. The main component is the data display panel, which displays the status of the interval data in colored horizontal rows. The rows are displayed adjacent to their corresponding "class name" objects. Each horizontal row is composed of time-interval columns, or bricks. The bricks are colored according to the status of the interval, where the mapping between states and colors is defined and configured by the specified rules. A dedicated horizontal scrollbar is linked to that panel and allows the user to view older intervals. Another vertical scrollbar is linked to the two previous panels; it enables the user to view the hidden data if the number of pairs is greater than the number of objects in the "Object Property" panel. A right mouse click on any brick pops up a window that displays all available attributes/information for that particular interval. A middle mouse click outside the bricks pops up a menu that allows the user to change the current view. Finally, a zoom (time expander) scrollbar allows the user to expand or zoom into the viewed area to display more detailed information about those intervals.

    Figure 6. Agent main graphical components (tabbed pane used for switching between the different agent panels; "Object Property" panel and its scrollbar; current-time marker; object & attributes pop-up window, with alerts shown in red; zoom scroll bar; interval time scroll bar)




                        IV.   IMPLEMENTATION
    The International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) receives and processes, in near real time, data from all the International Monitoring System (IMS) stations or facilities via a dedicated Global Communications Infrastructure (GCI) [17]. Monitoring IMS data includes receiving, forwarding, automatic processing, interactive review, generation and distribution of IDC scientific bulletins and reports, and archiving of the data.

    The proposed prototype agent has been implemented to monitor the real-time processing of the data received continuously from the IMS stations, including radionuclide and meteorological data and similar temporal data. The monitored data come from different technologies and have different characteristics: IDC seismic data processing and its pipeline, radionuclide data processing and atmospheric data processing. A recent real-time snapshot of the proposed agent is shown in Fig. 7.

    Figure 7. Workflow of IDC radionuclide data processing

                    V.   COMPARISON AND RESULTS
    More than ten performance attributes have been identified to evaluate the proposed agent, and a comparison between the proposed agent and the currently available monitoring tools has been made. Selected datasets from different records, extracted from IDC databases and different critical processes, have been used to evaluate the performance of the proposed system against the existing similar monitoring tools.

    The results of the comparison between the proposed agent and the other available monitoring tools are summarized in Table I. If we compare the proposed system with the other tools on the basis of N different instances, it is clear that the database and/or resource utilization decreases from N to 1 per monitored system. The response time of the proposed rule-based monitoring agent is also faster than that of the current tools. Moreover, it is dynamic and simpler in configuration, integration and implementation. Furthermore, the proposed system is portable, dynamically configurable, and generic. Finally, the proposed system is more advantageous than the other tools because of its web-oriented characteristics as well as its alert and report capabilities.

                      TABLE I.   COMPARISON RESULTS

    No.  Attribute                                      Other Available Tools   Proposed Agent
    1    Response Time                                  Fast                    Faster
    2    Flexibility                                    Hard                    Easy
    3    Configurability                                Simple                  Simple
    4    Portability                                    No                      Yes
    5    Dynamic Configuration                          Not Easy                Yes
    6    Web-Enabled                                    No                      Yes
    7    Task Specific                                  Yes                     Generic
    8    Scalability                                    Not Easy                Easy
    9    Resources Utilization (per monitored object)   N                       1
    10   Reports Capability                             N/A                     Yes
    11   Alerts Capability                              N/A                     Yes

               VI.   CONCLUSION AND FUTURE WORK
    The proposed lightweight rule-based agent is generic, portable, platform-independent, and flexible enough to handle any time interval. By implementing the proposed agent, the status of the different objects, as well as their attributes or properties, can be easily monitored either via the organization's intranet or remotely using Virtual Private Network (VPN) connections through the Internet. The agent allows timely generation of exception reports and alerts. This will help the operators and decision-making users to act faster on the anomalies that could occur during the workflow monitoring process.

    Future work will be dedicated to creating a special monitoring agent for simulation and analysis purposes. It is also planned to integrate or enable the use of the proposed monitoring agent in grid monitoring systems in order to share and distribute the generated alerts and exception reports with the other intranet agents.

                        ACKNOWLEDGMENT
    The authors express their gratitude to the CTBTO management for encouraging the research activities.



Special thanks go to the IDC staff and the radionuclide analysts for their constructive comments during the testing and implementation of the proposed agent.

                              REFERENCES
[1]  F. J. Kurfess and D. P. Shah, "Monitoring distributed process with intelligent agents," in Proceedings of the IEEE Conference and Workshop on Engineering of Computer-Based Systems, 1998.
[2]  J.-J. Jeng, J. Schiefer, and H. Chang, "An agent-based architecture for analyzing business processes of real-time enterprises," in Proceedings of the IEEE 7th International Conference on Enterprise Distributed Object Computing, p. 86, 2003.
[3]  I. Seilonen, T. Pirttioja, A. Halme, K. Koskinen, and A. Pakonen, "Indirect process monitoring with constraint handling agents," 4th International IEEE Conference on Industrial Informatics, Aug. 2006, pp. 1323-1328.
[4]  L. Bunch, M. Breedy, J. M. Bradshaw, M. Carvalho, N. Suri, A. Uszok, J. Hansen, M. Pechoucek, and V. Marik, "Software agents for process monitoring and notification," ACM Symposium.
[5]  A. M. Shaw and S. Yada, "A comprehensive agent-based architecture for intelligent information retrieval in a distributed heterogeneous environment," Decision Support Systems, vol. 32, 2002, pp. 401-415.
[6]  N. R. Jennings, "An agent-based approach for building complex software systems," Communications of the ACM, vol. 44, no. 4, 2001, pp. 35-41.
[7]  M. Klusch, "Information agent technology for the internet: a survey," Data & Knowledge Engineering, vol. 36, 2001.
[8]  Extensible Markup Language, http://www.xml.org/
[9]  T. Bray et al., Extensible Markup Language (XML), W3C (World Wide Web Consortium), http://www.w3c.org/
[10] The Internet Society (ISOC), http://www.isoc.org/
[11] Request For Comments (RFC) Editor, http://www.rfc-editor.org/
[12] Java virtual machine, http://java.sun.com/
[13] J. E. Smith and R. Nair, "The architecture of virtual machines," IEEE Computer, vol. 38, no. 5, pp. 32-38, May 2005.
[14] S. Orlando and S. Russo, "Java virtual machine for dependability benchmarking," in Proceedings of the Ninth IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC'06), 2006, pp. 433-440.
[15] J. M. Bradshaw, Software Agents. San Francisco, CA, USA: AAAI Press/MIT Press, 1997.
[16] The Java Rule Engine, http://jruleengine.sourceforge.net/
[17] Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), http://www.ctbto.org/
[18] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.





     A New Approach for Model based Gait Signature
                     Extraction
Mohamed Rafi, Dept. of CS&E, HMS Institute of Tech., Tumkur, Karnataka, India (mdrafi2km@yahoo.com)
Shanawaz Ahmed J, Dept. of Computer Science, College of Computer Science, King Khalid University, KSA (jshanawaz@gmail.com)
Md. Ekramul Hamid, Dept. of Computer Network Engg., College of Computer Science, King Khalid University, Abha, KSA (ekram_hamid@yahoo.com)
R.S.D Wahidabanu, Dept. of E&C, Government College of Engg., Salem, Tamil Nadu, India (drwahidabanu@gmail.com)


Abstract— Identifying individuals for security purposes is becoming essential nowadays. Gait recognition aims to address this problem by identifying people at a distance based on the way they walk. In this paper, a model is proposed for gait signature extraction consisting of gait capture, segmentation and feature extraction steps. We use the Hough transform technique, which helps to read all the parameters used to generate gait signatures and may result in a better gait recognition rate. In the preprocessing steps, the picture frames taken from video sequences are given as input to the Canny edge detection algorithm, which detects the edges of the image by extracting the foreground from the background and also reduces noise using a Gaussian filter. The output of edge detection is the input to the Hough transform, which extracts the gait parameters. We have used five parameters to successfully extract gait signatures. It is observed that when the camera is placed at 90 and 270 degrees, all the parameters used in the proposed work are clearly visible. Then, using the Hough transform, a clear line-based model is designed to extract gait signatures. The utility of the model is tested on a variety of body and stride parameters recovered in different viewing conditions on a database consisting of 15 to 20 subjects walking at both an angled and a frontal-parallel view with respect to the camera, both indoors and outdoors, and we find the method to be highly successful.

   Keywords- Biometric, Gait signature extraction, Hough Transform, Canny Edge detection, Gaussian filter

                        I.    INTRODUCTION
    The demand for automatic human identification systems is growing strongly in many important applications, especially at a distance, and it has recently gained great interest from pattern recognition and computer vision researchers because it is widely used in many security-sensitive environments such as banks, parks and airports. Biometrics is a powerful new tool for reliable human identification; it makes use of human physiological or behavioral characteristics such as face, iris, fingerprints and hand geometry for identification. However, these biometric methodologies are either intrusive or restricted to controlled environments. For example, most face recognition methods are capable of recognizing only frontal or nearly frontal faces, and other biometrics such as fingerprint and iris are no longer applicable when a person appears suddenly under surveillance. Therefore, new biometric recognition methods are strongly needed in many surveillance applications, especially at a distance [1].

    Gait is defined as "a particular way or manner of moving on foot". Early psychological studies into gait by Murray [2] suggested that gait is a unique personal characteristic, with cadence, and is cyclic in nature. Gait recognition as a physiological biometric technique has become popular in recent times. Gait as a biometric can be seen as advantageous [3] over other forms of biometric identification because it is unobtrusive, can be captured at a distance, does not require high-quality images and is difficult to disguise. The first scientific article on animal walking gaits was written in 350 BC by Aristotle [4]. He observed and described the different walking gaits of bipeds and quadrupeds and analyzed why all animals have an even number of legs. Recognition approaches to gait were first developed in the early 1990s, were evaluated on smaller databases and showed promise. DARPA's Human ID at a Distance program [5] then collected a rich variety of data and developed a wide variety of techniques, showing that gait recognition could be extended to large databases and could handle covariate factors. Since the DARPA program, research has continued to extend and develop techniques, with especial consideration of practical factors such as feature potency.

    In Silhouette Analysis-Based Gait Recognition for Human Identification [6], a combination of a background subtraction procedure and a simple correspondence method was used to segment and track the spatial silhouettes of an image, but this method generates more noise, which leads to poor gait signature extraction; therefore the recognition rate was low. In gait recognition by symmetry analysis [7], the Generalized Symmetry Operator was used, which locates features according to their symmetrical properties rather than relying on the borders of a shape or on general appearance, and hence does not require the shape of an object to be known in advance. The evaluation was done by masking with a rectangular bar of different widths (5, 10 and 15 pixels) in each image frame of the test subject and at the same position. The area masked was on average 13.2%, 26.3% and 39.5% of the image silhouettes, respectively. A recognition rate of 100% was obtained for a bar width of 5 pixels. For a bar width of 10 pixels the test failed, as the test subject could not be recognized because the subject was completely covered in most of the image frames. This suggests that recognition is likely to be adversely affected when a subject walks behind a vertically placed object. There were also other limitations, Mark Ruane Dawson [8], such as the legs not being tracked to a high enough standard for gait recognition. The segmentation process leads to a very crude model fitting procedure, which in turn adversely affects the rate of recognition.




In another method of gait recognition, the subjects in the video are always walking perpendicular to the camera [9]. This would not be the case in real life, as people walk at all angles to the video camera. The use of few parameters for the gait signature was another major drawback that has to be addressed.

The motivation behind this research is to find out whether an increase in the number of gait signature parameters can improve the recognition rate, whether an improvement over model fitting can give better results, what factors affect gait recognition and to what extent, and what the critical vision components affecting gait recognition from video are. The objective of this paper is to explore the possibility of extracting a gait biometric from a sequence of images of a walking subject without using markers. Sophisticated computer vision techniques are developed, aimed at extracting a gait signature that can be used for person recognition.

Using video feeds from conventional cameras, without the use of special hardware, implies the development of a markerless body motion capture system. Research in this domain is generally based on the articulated-models approach. Haritaoglu et al. [10] presented an efficient system capable of tracking 2D body motion using a single camera. Amos Y. Johnson [11] used a single camera with the viewing plane perpendicular to the ground plane; 18 subjects walked in an open indoor space at two view angles: a 45° path (angle view) toward the camera, and a frontal-parallel path (side view) in relation to the viewing plane of the camera. The side-view data was captured at two different depths, 3.9 meters and 8.3 meters from the camera. These three viewing conditions are used to evaluate our multi-view technique. In this research, we use images captured at different views, as the image captured from the frontal or perpendicular view does not give the required signatures. Segmentation is done on the captured image in order to extract the foreground from the background using the Canny edge detection technique, since the purpose of edge detection in general is to significantly reduce the amount of data in an image while preserving the structural properties to be used for further image processing. In order to obtain the gait model, the output of segmentation is processed using the Hough transform, a technique that can be used to isolate features of a particular shape within an image.

II. MODEL FOR GAIT SIGNATURE EXTRACTION

We propose a gait signature extraction model having the following steps: picture frame capture, segmentation and feature extraction, which leads to gait signature identification, as shown in figure 1.

Figure 1. Components of the proposed model for Gait Signature Extraction System.

A. Gait Capturing

At this step the subjects are asked to walk so that their gait can be captured. This is a very important step, as the overall results depend on the quality of the captured gait, so care should be taken to maintain the quality of gait capturing; this step includes the video sequence and an XML data store. In our proposed research the following preprocessing steps are carried out before segmenting a captured image (an illustrative sketch of these steps is given at the end of this subsection):

• Reading an RGB image
• Converting the RGB image to gray scale
• Converting the gray-scale image to an indexed image

The indexed image is the input to the segmentation algorithm for further processing. The above preprocessing of an image is shown in figure 2.
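As an illustration of the preprocessing just described, the following minimal Python sketch (an assumption-laden example rather than the authors' code: it presumes OpenCV and NumPy are available, uses a hypothetical file name, and fixes the number of gray levels at 16, which the paper does not specify) reads a frame, converts it to gray scale and then quantizes it to an indexed image:

import cv2
import numpy as np

def preprocess_frame(path, levels=16):
    # read the RGB picture frame (OpenCV loads it in BGR order)
    rgb = cv2.imread(path, cv2.IMREAD_COLOR)
    # convert the RGB image to gray scale
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    # convert the gray-scale image to an indexed image by quantizing
    # the 256 gray values down to a small number of index levels
    indexed = (gray.astype(np.uint16) * levels // 256).astype(np.uint8)
    return indexed

indexed = preprocess_frame("frame_001.png")   # hypothetical frame from the video sequence

The indexed frame produced this way is what the segmentation stage described next operates on.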
B. Segmentation

In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.

The Canny Edge Detection Algorithm:

The picture frames taken from video sequences are given as input to the Canny edge detection algorithm to detect the edges of the image frames. The algorithm consists of five steps:

1. Image smoothing: Image smoothing is used to reduce noise within an image. The Canny edge detector uses a filter based on the first derivative of a Gaussian, in the form shown in equation (1).

2. Finding gradients: The edges of the image are marked where the gradients of the image have large magnitudes. The Canny algorithm basically finds edges where the gray-scale intensity of the image changes the most. These areas are found by determining the gradients of the image. The first step is to approximate the gradient in the x- and y-directions respectively by applying the kernels. Then the gradient magnitudes (also known as the edge strengths) are determined as a Euclidean distance measure, by applying the law of Pythagoras, as given by the equation

|G| = sqrt(Gx² + Gy²)    (2)

This is simplified by applying the Manhattan distance measure, |G| = |Gx| + |Gy|, where Gx and Gy are the gradients in the x- and y-directions, respectively. The direction of the edges is determined and stored as given by the equation below (a small sketch of this computation is given after figure 2):

θ = arctan(|Gy| / |Gx|)    (3)

3. Non-maximum suppression: In the proposed study, only local maxima are marked as edges. The purpose of this step is to convert the "blurred" edges in the image of gradient magnitudes to "sharp" edges. Basically, this is done by preserving all the local maxima in the gradient image and deleting everything else. The algorithm, for each pixel in the gradient image, is:

a. Round the gradient direction to the nearest 45 degrees, corresponding to the use of an 8-connected neighborhood.
b. Compare the edge strength of the current pixel with the edge strength of the pixels in the positive and negative gradient directions, i.e., if the gradient direction is north (θ = 90 degrees), compare with the pixels to the north and south.
c. If the edge strength of the current pixel is the largest, preserve the value of the edge strength; if not, suppress (i.e. remove) the value.

4. Double thresholding: Potential edges are determined by thresholding.

5. Edge tracking by hysteresis: Finally, edges are determined by suppressing all edges that are not connected to a very certain (strong) edge, as shown in figure 2.

Figure 2: [a] Original image; [b] RGB to grayscale; [c] grayscale to indexed image; [d] edge-detected image.
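The gradient computation of steps 2 and 3 can be made concrete with the short sketch below. It is only an illustration, and the 3x3 Sobel kernels used here are one common choice of gradient kernels that the paper does not prescribe:

import cv2
import numpy as np

def gradient_magnitude_direction(gray):
    # approximate the gradients in the x- and y-directions with Sobel kernels
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)            # equation (2), Euclidean measure
    mag_fast = np.abs(gx) + np.abs(gy)          # simplified Manhattan measure
    theta = np.degrees(np.arctan2(np.abs(gy), np.abs(gx)))   # equation (3)
    # step 3a of non-maximum suppression: round the direction to the nearest 45 degrees
    theta_rounded = np.round(theta / 45.0) * 45.0
    return mag, mag_fast, theta_rounded

In practice the complete five-step detector is also available directly, e.g. as cv2.Canny(gray, low_threshold, high_threshold).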
C. Gait Feature Extraction

The Hough transform is a technique which can be used to isolate features of a particular shape within an image. Because it requires that the desired features be specified in some parametric form, the classical Hough transform is most commonly used for the detection of regular curves such as lines, circles and ellipses. A convenient equation for describing a set of lines uses the parametric or normal notation

x cosθ + y sinθ = r    (4)

where r is the length of a normal from the origin to the line and θ is the orientation of r with respect to the x-axis. For any point (x, y) on this line, r and θ are constant.

In an image analysis context, the coordinates of the edge-segment points (xi, yi) in the image are known and therefore serve as constants in the parametric line equation, while r and θ are the unknown variables we seek. If we plot the possible (r, θ) values defined by each (xi, yi), points in Cartesian image space map to curves (i.e. sinusoids) in the polar Hough parameter space. This point-to-curve transformation is the Hough transformation for straight lines. When viewed in Hough parameter space, points which are collinear in the Cartesian image space become readily apparent, as they yield curves which intersect at a common (r, θ) point.

The transform is implemented by quantizing the Hough parameter space into finite intervals or accumulator cells. As the algorithm runs, each (xi, yi) is transformed into a discretized (r, θ) curve and the accumulator cells which lie along this curve are incremented. Resulting peaks in the accumulator array represent strong evidence that a corresponding straight line exists in the image.

The main advantage of the Hough transform technique is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise. We use this technique to extract lines from the segmented image. The Hough transform can be used to identify the parameters of a curve which best fits a set of given edge points. This edge description is obtained from the Canny edge detector and may be noisy, i.e. it may contain multiple edge fragments corresponding to a single whole feature. Furthermore, as the output of an edge detector defines only where features are in an image, the work of the Hough transform is to determine both what the features are (i.e. to detect the features for which it has a parametric or other description) and how many of them exist in the image.

Hough Transform Algorithm for Straight Lines:

1. Identify the maximum and minimum values of r and θ.
2. Generate an accumulator array A(r, θ) and set all values to zero.
3. For all edge points (xi, yi) in the image:
   a. Use the gradient direction for θ.
   b. Compute r from equation (4).
   c. Increment A(r, θ) by one.
4. For all cells in A(r, θ):
   a. Search for the maximum value of A(r, θ).
   b. Calculate the equation of the line.
5. To reduce the effect of noise, more than one element (elements in a neighbourhood) of the accumulator array is incremented.

(An illustrative sketch of this accumulator procedure is given below, after figure 3.) The edge-detected image and the image after the Hough transform are shown in figure 3.

Figure 3: Images before and after the Hough Transform.
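The following NumPy sketch illustrates the accumulator procedure above for straight lines. It is a simplified illustration rather than the implementation used in this work: it sweeps θ over a fixed set of angles instead of using the gradient direction (step 3a), and it returns only the single strongest line.

import numpy as np

def hough_strongest_line(edges, n_theta=180):
    # edges: 2-D array of 0/1 values produced by the Canny edge detector
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(n_theta) * 180.0 / n_theta)
    r_max = int(np.ceil(np.hypot(h, w)))
    # accumulator A(r, theta); r is offset by r_max so that negative r fits in the array
    acc = np.zeros((2 * r_max + 1, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # equation (4): r = x cos(theta) + y sin(theta) for every quantized theta
        rs = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rs + r_max, np.arange(n_theta)] += 1
    # the highest accumulator cell corresponds to the most strongly supported line
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - r_max, np.degrees(thetas[t_idx]), acc

OpenCV exposes the same operation as cv2.HoughLines (and cv2.HoughLinesP for line segments); routines of this kind can supply the detected lines from which the gait parameters are then measured.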
III. EXPERIMENTAL RESULTS AND DISCUSSION

One of the most important aspects of this research is to extract the gait signatures needed for a successful recognition rate. The tables below show the parameters which are used to generate a gait signature for different views of a subject (90 degrees and 270 degrees). The attempts column shows how many persons were used to extract the signature, and the success column shows how many of the subjects gave successful gait signatures.

Table 1: Parameters and percentage of clarity when the camera is placed at a 90 degree angle to the subject, for frame 1.

Figure 4. Graphical representation of clarity for frame 1, when the camera is placed at 90 degrees.

Table 2: Parameters and percentage of clarity when the camera is placed at a 270 degree angle to the subject, for frame 1.

Figure 5. Graphical representation of clarity for frame 1, when the camera is placed at 270 degrees.

Table 1 and Table 2 show the results. While the camera is placed at 90 degrees and 270 degrees, it is found that for frame 1 the clarity for the parameter distance between the legs is higher; the y-axis is taken as the reference axis for the subject. Therefore this parameter can be used to extract gait signatures for better recognition. It is also observed that the parameter right thigh length can be considered for extraction of the gait signature. It is further observed that while the camera is placed at 90 degrees and 270 degrees for frame 2, the clarity for the parameter right thigh length is higher; therefore this can be used to extract gait signatures for better recognition. While placing the camera at 90 degrees and 270 degrees for frame 3, it is found that the clarity for the parameter left thigh length is higher; therefore this can also be used to extract gait signatures for better recognition.

CONCLUSIONS

The presented research has shown that gait signatures can be extracted in a better way by using the Hough transform. When the camera is placed at 90 and 270 degrees it is found that most parameters listed in the research provide clarity. Since the lines are clearly visible they can easily be labeled, and the distance and angle between them can be measured accurately. The proposed research gives the best results if the camera is placed at 90 degrees to the subject, and it is recommended that the subjects be made to pass through an area which has a white background, because this helps in obtaining a better gait signature extraction model. The research achieved 100 percent clarity when the parameters length of the left thigh, length of the right thigh and distance between the legs are analyzed at a 90 degree angle. The signatures thus extracted can be used to obtain a better gait recognition rate. In future work it is recommended that the lines extracted from the Hough transform be labeled using an effective line labeling algorithm to calculate the angles and distances between the various parameters, in order to achieve effective gait recognition.

REFERENCES

[1] Jiwen Lu and Erhu Zhang, "Gait recognition for human identification based on ICA and fuzzy SVM through multiple views fusion", School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798, 25 July 2007.
[2] Murray, M. P., "Gait as a total pattern of movement", American Journal of Physical Medicine, 46(1):290-333, 1967.
[3] Davrondzhon Gafurov, Einar Snekkenes and Patrick Bours, "Improved Gait Recognition Performance Using Cycle Matching", in Proceedings of the IEEE 24th International Conference on Advanced Information Networking and Applications Workshops, 2010.
[4] Aristotle (350 BC), "On the Gait of Animals", translated by A. S. L. Farquharson, 2007.
[5] Sarkar, S., Phillips, P. J., Liu, Z., Vega, I. R., Grother, P., and Bowyer, K., "The humanID gait challenge problem: Data sets, performance and analysis", IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 2, pp. 162-177, Feb. 2005.
[6] Liang Wang, Tieniu Tan, Huazhong Ning, and Weiming Hu, "Silhouette Analysis-Based Gait Recognition for Human Identification", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, December 2003.
[7] James B. Hayfron-Acquah, Mark S. Nixon and John N. Carter, "Automatic gait recognition by symmetry analysis", Image, Speech and Intelligent Systems Group, Department of Electronics and Computer Science, University of Southampton, Southampton, S017 1BJ, United Kingdom.
[8] Dawson, M. R., "Gait Recognition", Imperial College of Science, Technology & Medicine, London, June 2002.
[9] Han, X., "Gait Recognition Considering Walking Direction", University of Rochester, USA, August 20, 2010.
[10] Haritaoglu, I., Harwood, D., and Davis, L., "A real-time system for detecting and tracking people in 2.5D", Proceedings of the 5th European Conf. on Computer Vision, Freiburg, Germany, vol. 1, pp. 877-892, 1998.
[11] Amos Y. Johnson and Aaron F. Bobick, "A Multi-view Method for Gait Recognition Using Static Body Parameters", Electrical and Computer Engineering, Georgia Tech, Atlanta, GA 30332.

AUTHORS PROFILE

Mohamed Rafi received his BE and ME degrees in Computer Science & Engineering from Bangalore University, India. He is presently pursuing a PhD at Vinayaka Mission University, Salem, Tamil Nadu, India. From August 2007 to date he has been working as a Professor in the Department of Computer Science & Engineering, HMS Institute of Technology, Tumkur, Karnataka, India. From November 2001 to July 2007 he worked as an Assistant Professor in the Department of Computer Science and Information Technology at Adama University, Ministry of Education, Ethiopia. His research interests include image processing, database systems and software engineering.

Shanawaz Ahmed J received his MCA degree from the Department of Computer Science, Bharathidasan University, India. After that he obtained the Master of Philosophy degree from Periyar University, India. He is presently pursuing his PhD degree at Anna University, India. During 2004-2007, he was a lecturer in the Department of Computer Science, The New College, Chennai, India. Since 2007, he has been serving as a Lecturer in the College of Computer Science at King Khalid University, Abha, KSA. His research interests include image processing and image retrieval.

Md. Ekramul Hamid received his B.Sc and M.Sc degrees from the Department of Applied Physics and Electronics, Rajshahi University, Bangladesh. After that he obtained the Master of Computer Science degree from Pune University, India. He received his PhD degree from Shizuoka University, Japan. During 1997-2000, he was a lecturer in the Department of Computer Science and Engineering, Rajshahi University. Since 2007, he has been serving as an associate professor in the same department. He is currently working as an assistant professor in the College of Computer Science at King Khalid University, Abha, KSA. His research interests include digital signal processing and speech enhancement.

Dr. R.S.D. Wahidabanu received her BE (Electronics & Communication) and ME (Applied Electronics) degrees from the University of Madras, Chennai, India, and obtained her PhD from Anna University, Tamil Nadu, India. She has more than 30 years of experience in teaching and research and is working as Professor & Head, Department of Electronics & Communication Engineering, Government College of Engineering, Salem. More than 13 students have obtained PhDs under her guidance and more than 20 students are pursuing PhDs. She has published more than 30 papers in international journals.
Mining Fuzzy Cyclic Patterns

F. A. Mazarbhuiya, M. A. Khaleel, P. R. Khan
Department of Computer Science, College of Computer Science,
King Khalid University, Abha, Kingdom of Saudi Arabia
e-mail: fokrul_2005@yahoo.com, khaleel_dm@yahoo.com, drpervaizrkhan@yahoo.com

Abstract— The problem of mining temporal association rules from a temporal dataset is to find associations between items that hold within certain time intervals but not throughout the dataset. This involves finding frequent sets that are frequent in certain time intervals and then finding association rules among the items present in the frequent sets. In fuzzy temporal datasets, as the time of a transaction is imprecise, we may find sets of items that are frequent in certain fuzzy time intervals. We call these fuzzy locally frequent sets and the corresponding association rules fuzzy local association rules. The algorithm discussed in [5] finds all fuzzy locally frequent itemsets, where each frequent itemset is associated with a list of fuzzy time intervals in which it is frequent. The list of fuzzy time intervals may exhibit some interesting properties, e.g. the itemsets may be cyclic in nature. In this paper we propose a method of finding such cyclic frequent itemsets.

Keywords- Fuzzy time-stamp, Fuzzy time interval, Fuzzy temporal datasets, Fuzzy locally frequent sets, Fuzzy distance, Variance of a fuzzy interval

I. INTRODUCTION

The problem of mining association rules was defined initially [10] by R. Agrawal et al. for application in large supermarkets. Large supermarkets have large collections of records of daily sales. Analyzing the buying patterns of the buyers helps in taking typical business decisions such as what to put on sale, how to place the materials on the shelves, how to plan for future purchases, etc.

Mining for association rules between items in temporal databases has been described as an important data-mining problem. Transaction data are normally temporal; the market basket transaction is an example of this type.

Mining fuzzy temporal datasets is also an interesting data mining problem. In [5], the authors proposed a method of finding fuzzy locally frequent sets from such datasets. The algorithm discussed in [5] extracts all frequent itemsets, where each frequent itemset is associated with a list of fuzzy time intervals in which it is frequent. The list of fuzzy time intervals associated with a frequent itemset can be used to find some interesting results. In this paper we propose to study the cyclic nature of the list of time intervals associated with a frequent itemset and devise a method to extract all frequent itemsets which are cyclic. We call such a frequent itemset a fuzzy cyclic frequent itemset, since the time intervals are fuzzy in nature. In this case, as the variance of a fuzzy interval is invariant with respect to translation, it can be used to define the similarity between the fuzzy time intervals associated with a frequent itemset. Similarly, the fuzzy distance between any two consecutive fuzzy time intervals associated with a frequent itemset can be used to find the fuzzy time gaps between the fuzzy time intervals, and its variance can be used to define similarity among the fuzzy time gaps associated with the frequent itemset.

In section II we give a brief discussion of the works related to our work. In section III we describe the definitions, terms and notations used in this paper. In section IV we give the algorithm proposed in this paper for mining fuzzy cyclic frequent itemsets. We conclude with conclusions and lines for future work in section V. In the last section we give some references.

II. RELATED WORKS

The problem of discovery of association rules was first formulated by Agrawal et al. in 1993. Given a set I of items and a large collection D of transactions involving the items, the problem is to find relationships among the items, i.e. the presence of various items in the transactions. A transaction t is said to support an item if that item is present in t. A transaction t is said to support an itemset if t supports each of the items present in the itemset. An association rule is an expression of the form X ⇒ Y, where X and Y are subsets of the itemset I. The rule holds with confidence τ if τ% of the transactions in D that support X also support Y. The rule has support σ if σ% of the transactions support X ∪ Y. A method for the discovery of association rules was given in [9], which is known as the A priori algorithm. This was then followed by subsequent refinements, generalizations, extensions and improvements. (A small worked example of these two measures is given below.)
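As a purely illustrative example of support and confidence (the transactions and numbers here are hypothetical and not taken from any dataset in this paper), consider the rule {bread} ⇒ {milk} over four transactions:

def support_confidence(transactions, X, Y):
    # support: fraction of all transactions containing X ∪ Y
    # confidence: among transactions containing X, the fraction also containing Y
    both = sum(1 for t in transactions if X | Y <= t)
    with_x = sum(1 for t in transactions if X <= t)
    return both / len(transactions), (both / with_x) if with_x else 0.0

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk"}]
print(support_confidence(transactions, {"bread"}, {"milk"}))
# -> (0.5, 0.666...): support 50%, confidence about 67%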
Temporal data mining is now an important extension of conventional data mining and has recently been able to attract more people to work in this area. By taking the time aspect into account, more interesting patterns that are time dependent can be extracted. There are mainly two broad directions of temporal data mining [6]. One concerns the discovery of causal relationships among temporally oriented events; ordered events form sequences, and the cause of an event always occurs before it. The other concerns the discovery of similar patterns within the same time sequence or among different time sequences. The underlying problem is to find frequent sequential patterns in temporal databases, and the name sequence mining is normally used for it. In [7] the problem of recognizing frequent episodes in an event sequence is discussed, where an episode is defined as a collection of events that occur during time intervals of a specific size.

The association rule discovery process has also been extended to incorporate temporal aspects. In temporal association rules each rule has associated with it a time interval in which the rule holds. The associated problems are to find the valid time periods during which the association rules hold, the discovery of possible periodicities that association rules have, and the discovery of association rules with temporal features. In [8], [14], [15] and [16], the problem of temporal data mining is addressed and techniques and algorithms have been developed for it. In [8] an algorithm for the discovery of temporal association rules is described. In [2], two algorithms are proposed for the discovery of temporal rules that display regular cyclic variations, where the time interval is specified by the user to divide the data into disjoint segments such as months, weeks, days, etc. Similar work was done in [4] and [17], incorporating multiple granularities of time intervals (e.g. the first working day of every month), from which both cyclic and user-defined calendar patterns can be obtained. In [1], a method of finding locally and periodically frequent sets and periodic association rules is discussed, which is an improvement over other methods in the sense that it dynamically extracts all the rules along with the intervals in which the rules hold. In [18] and [19], fuzzy calendric data mining and fuzzy temporal data mining are discussed, where user-specified, ill-defined fuzzy temporal and calendric patterns are extracted from temporal data.

In [5], the authors propose a method of extracting fuzzy locally frequent sets from fuzzy temporal datasets. The algorithm discussed in [5] extracts all frequent itemsets, where each itemset is associated with a list of fuzzy time intervals in which the itemset is frequent. The list of fuzzy time intervals associated with a frequent itemset exhibits some interesting properties, e.g. the sizes of the fuzzy time intervals may be almost the same, and the time gap between two consecutive fuzzy time intervals may also be almost the same. We call such a frequent itemset a fuzzy cyclic frequent itemset. The study is therefore basically an intra-itemset study.

III. PROBLEM DEFINITION

A. Some definitions related to fuzzy sets

Let E be the universe of discourse. A fuzzy set A in E is characterized by a membership function A(x) lying in [0,1]; A(x) for x ∈ E represents the grade of membership of x in A. Thus a fuzzy set A is defined as

A = {(x, A(x)), x ∈ E}

A fuzzy set A is said to be normal if A(x) = 1 for at least one x ∈ E.

In general, a generalized fuzzy number A is described as any fuzzy subset of the real line R whose membership function A(x) satisfies the following conditions:

(1) A(x) is a continuous mapping from R to the closed interval [0, 1];
(2) A(x) = 0 for -∞ < x ≤ c;
(3) A(x) = L(x) is strictly increasing on [c, a];
(4) A(x) = w for a ≤ x ≤ b;
(5) A(x) = R(x) is strictly decreasing on [b, d];
(6) A(x) = 0 for d ≤ x < ∞,

where 0 < w ≤ 1 and a, b, c and d are real numbers. This type of generalized fuzzy number is denoted by A = (c, a, b, d; w)LR. When w = 1, the above generalized fuzzy number is a fuzzy interval and is denoted by A = (c, a, b, d)LR. When L(x) and R(x) are straight lines, A is a trapezoidal fuzzy number and is denoted by (c, a, b, d). If a = b, the above trapezoidal number becomes a triangular fuzzy number, denoted by (c, a, d).

The h-level of the fuzzy number [t1-a, t1, t1+a] is the closed interval [t1+(α-1)·a, t1+(1-α)·a]. Similarly, the h-level of the fuzzy interval [t1-a, t1, t2, t2+a] is the closed interval [t1+(α-1)·a, t2+(1-α)·a].

Chen and Hsieh [11, 12, 13] proposed a graded mean integration representation for generalized fuzzy numbers as follows. Suppose L⁻¹ and R⁻¹ are the inverse functions of L and R respectively; then the graded mean h-level value of the generalized fuzzy number A = (c, a, b, d; w)LR is h·[L⁻¹(h) + R⁻¹(h)]/2. The graded mean integration representation of the generalized fuzzy number, based on the integral value of the graded mean h-levels, is

P(A) = ( ∫₀ʷ h·[L⁻¹(h) + R⁻¹(h)]/2 dh ) / ( ∫₀ʷ h dh ),

where h is between 0 and w, 0 < w ≤ 1.

B. Fuzzy distance

Chen and Wang [13] proposed the fuzzy distance between any two trapezoidal fuzzy numbers as follows. Let A = (a1, a2, a3, a4) and B = (b1, b2, b3, b4) be two trapezoidal fuzzy numbers, and let their graded mean integration representations be P(A) and P(B) respectively. Assume

si = (ai - P(A) + bi - P(B))/2, i = 1, 2, 3, 4;
ci = P(A) - P(B) + si, i = 1, 2, 3, 4;

then the fuzzy distance of A and B is C = (c1, c2, c3, c4). Obviously the fuzzy distance between two trapezoidal numbers is also a trapezoidal number. (An illustrative sketch of these two computations is given below.)
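A minimal Python sketch of the two notions just defined follows. It assumes w = 1 and linear L and R (i.e. ordinary trapezoidal numbers), in which case the graded mean integration above reduces to the closed form P(A) = (c + 2a + 2b + d)/6; the numeric example is hypothetical.

def graded_mean(A):
    # A = (c, a, b, d): trapezoidal fuzzy number with w = 1 and linear sides,
    # so P(A) = ∫ h[L^-1(h)+R^-1(h)]/2 dh / ∫ h dh = (c + 2a + 2b + d)/6
    c, a, b, d = A
    return (c + 2 * a + 2 * b + d) / 6.0

def fuzzy_distance(A, B):
    # fuzzy distance of two trapezoidal numbers as defined above:
    # s_i = (a_i - P(A) + b_i - P(B))/2,  c_i = P(A) - P(B) + s_i
    pA, pB = graded_mean(A), graded_mean(B)
    return tuple(pA - pB + (ai - pA + bi - pB) / 2.0 for ai, bi in zip(A, B))

# two hypothetical fuzzy time intervals, roughly "days 10-14" and "days 24-28"
t1 = (9, 10, 14, 15)
t2 = (23, 24, 28, 29)
print(graded_mean(t1))            # 12.0
print(fuzzy_distance(t2, t1))     # (11, 12, 16, 17): a fuzzy time gap of about 14 days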
C. Possibilistic variance of a fuzzy number

Let F be a family of fuzzy numbers and let A be a fuzzy number belonging to F. Let Aγ = [a1(γ), a2(γ)], γ ∈ [0, 1], be a γ-level of A. Carlsson and Fuller [3] defined the possibilistic variance of a fuzzy number A ∈ F as

Var(A) = ½ ∫₀¹ γ (a2(γ) − a1(γ))² dγ

Before proceeding further we review the theorem given by Carlsson and Fuller [3].

D. Theorem: The variance of a fuzzy number is invariant to shifting.

Proof: Let A ∈ F be a fuzzy number and let θ be a real number. If A is shifted by the value θ, then we get a fuzzy number, denoted by B, satisfying the property B(x) = A(x − θ) for all x ∈ R. Then from the relationship

Bγ = [a1(γ) + θ, a2(γ) + θ]

we find

Var(B) = ½ ∫₀¹ γ ((a2(γ) + θ) − (a1(γ) + θ))² dγ
       = ½ ∫₀¹ γ (a2(γ) − a1(γ))² dγ
       = Var(A)

E. Almost equal fuzzy intervals

Given two fuzzy intervals A and B, we say that A is almost equal to B (or B is almost equal to A) if the variances of A and B are equal up to a small variation of λ%, i.e.

var(A) + λ% of var(B) = var(B)
var(B) + λ% of var(A) = var(A)

where λ is specified by the user.

IV. PROPOSED ALGORITHM

A. Extraction of fuzzy cyclic patterns

One way to extract these sets is to find the fuzzy distance between any two consecutive fuzzy time intervals of the same frequent set. If the fuzzy distances (time gaps) between any two consecutive frequent time intervals are found to be almost equal, and the fuzzy time intervals themselves are also found to be almost equal (the definition of almost equal fuzzy time intervals is given in Definition E of section III), then we call these frequent sets fuzzy cyclic frequent sets. To detect this kind of cyclic nature, for each frequent itemset we proceed as follows. If the first fuzzy time interval is almost equal to the second fuzzy time interval, then we check whether the fuzzy distance (time gap) between the first and the second fuzzy time intervals is almost equal to the fuzzy distance (time gap) between the second and third fuzzy time intervals. If it is, then we take the average of the variances of the first two fuzzy distances (time gaps) and see whether it is almost equal to the variance of the fuzzy distance (time gap) between the third and the fourth fuzzy time intervals. If the average of the variances of the first two fuzzy time intervals is almost equal to the variance of the third interval we proceed further; otherwise we stop. In general, if the average of the variances of the first (n-1) fuzzy time intervals is almost equal to the variance of the n-th fuzzy time interval, and the average of the variances of the first (n-2) fuzzy distances (time gaps) is almost equal to that of the (n-1)-th time gap, then the average of the variances of the first n fuzzy time intervals is compared with the variance of the (n+1)-th fuzzy time interval, and the average of the variances of the first (n-1) time gaps is compared with that of the n-th time gap. In this way we can extract fuzzy cyclic patterns, if such patterns exist. We describe below the algorithm for extracting these cyclic itemsets; a small illustrative sketch of the variance and "almost equal" checks it relies on is given first.
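The following Python sketch illustrates the helper computations the procedure relies on. It is a simplified skeleton and not the authors' exact algorithm: the possibilistic variance is evaluated in closed form for trapezoidal intervals with linear sides, and the λ-based "almost equal" test of Definition E is implemented here as a symmetric relative-difference check, which is one reasonable reading of that definition.

def graded_mean(A):
    # graded mean integration of a trapezoidal number (as in the section III.B sketch)
    c, a, b, d = A
    return (c + 2 * a + 2 * b + d) / 6.0

def fuzzy_distance(A, B):
    # fuzzy time gap between two trapezoidal intervals (as in the section III.B sketch)
    pA, pB = graded_mean(A), graded_mean(B)
    return tuple(pA - pB + (ai - pA + bi - pB) / 2.0 for ai, bi in zip(A, B))

def variance(A):
    # possibilistic variance (section III.C) of a trapezoidal interval A = (c, a, b, d):
    # its gamma-level is [c+(a-c)g, d-(d-b)g], so the level width is W0 - k*g with
    # W0 = d-c and k = (d-c)-(b-a); Var = 1/2 * integral_0^1 g*(W0 - k*g)^2 dg
    c, a, b, d = A
    W0, k = d - c, (d - c) - (b - a)
    return W0 ** 2 / 4.0 - W0 * k / 3.0 + k ** 2 / 8.0

def almost_equal(v1, v2, lam=0.1):
    # Definition E read as: the two variances differ by at most lam (e.g. 10%)
    return abs(v1 - v2) <= lam * max(v1, v2)

def is_fuzzy_cyclic(intervals, lam=0.1):
    # intervals: the fuzzy time intervals of one locally frequent itemset,
    # in temporal order, as produced by the algorithm of [5]
    if len(intervals) < 3:
        return False
    v = [variance(t) for t in intervals]
    g = [variance(fuzzy_distance(intervals[i + 1], intervals[i]))
         for i in range(len(intervals) - 1)]
    avg_v, avg_g = v[0], g[0]
    for n in range(1, len(intervals)):
        if not almost_equal(avg_v, v[n], lam):
            return False                      # interval sizes stop agreeing
        avg_v = (n * avg_v + v[n]) / (n + 1)
        if n < len(g):
            if not almost_equal(avg_g, g[n], lam):
                return False                  # time gaps stop agreeing
            avg_g = (n * avg_g + g[n]) / (n + 1)
    return True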
            {flag = 1; break;}
             t2 ← tint;
           n ← n + 1;
          }
          if (flag == 1)
            report that iset is not fuzzy cyclic in nature
          else
            report that iset is fuzzy cyclic in nature
      }

Here the function var() returns the fuzzy variance value of the fuzzy time intervals associated with a fuzzy locally frequent set, fuzzydist() returns the fuzzy distance between two consecutive fuzzy time intervals associated with a fuzzy locally frequent set, i.e. the fuzzy time gap, v() returns the fuzzy variance value of the fuzzy time gap between two consecutive fuzzy time intervals associated with a fuzzy locally frequent itemset, avv() returns the average of the variances of the fuzzy time gaps, avgvar() returns the average of the fuzzy variances of the fuzzy time intervals associated with a fuzzy locally frequent set, and sgma is the threshold value up to which the variation of the fuzzy variance values can be permitted.

             V. CONCLUSIONS AND LINES FOR FUTURE WORK

An algorithm for finding fuzzy cyclic frequent sets from fuzzy temporal data is given in this paper. The algorithm of [5] gives all fuzzy locally frequent itemsets, where each frequent itemset is associated with a list of fuzzy time intervals in which it is frequent. Our algorithm takes its input from the result of the algorithm of [5]. As the variance of a fuzzy interval is invariant with respect to shifting, we can consider two fuzzy intervals having the same variance value but belonging to two different parts of the time domain as equal. Using this notion we define the equality of the fuzzy time intervals associated with a fuzzy locally frequent itemset. Our algorithm supplies all frequent sets which are fuzzy cyclic.

We may have some patterns where the fuzzy time gaps are almost equal but the fuzzy time intervals of frequency are not equal even in the approximate sense. We may also have some patterns where the fuzzy time gaps are not equal but the durations of the time intervals are almost equal. The above algorithm can be modified accordingly to find such patterns. In future we would like to devise methods to find various kinds of patterns which may exist in fuzzy temporal datasets.

                                  References
[1]  A. K. Mahanta, F. A. Mazarbhuiya and H. K. Baruah; Finding Locally and Periodically Frequent Sets and Periodic Association Rules, Proc. of 1st Int'l Conf. on Pattern Recognition and Machine Intelligence (PReMI'05), LNCS 3776, 576-582, 2005.
[2]  B. Ozden, S. Ramaswamy and A. Silberschatz; Cyclic Association Rules, Proc. of the 14th Int'l Conference on Data Engineering, USA, 412-421, 1998.
[3]  C. Carlsson and R. Fuller; On Possibilistic Mean Value and Variance of Fuzzy Numbers, Fuzzy Sets and Systems 122 (2001), pp. 315-326.
[4]  G. Zimbrao, J. Moreira de Souza, V. Teixeira de Almeida and W. Araujo da Silva; An Algorithm to Discover Calendar-based Temporal Association Rules with Item's Lifespan Restriction, Proc. of the 8th ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining (2002), Canada, 2nd Workshop on Temporal Data Mining, v. 8, 701-706, 2002.
[5]  F. A. Mazarbhuiya and Ekramul Hamid; Finding Fuzzy Locally Frequent Itemsets, International Journal of Computer Science and Information Security, ISSN: 1947-5500, LJS Publisher and IJCSIS Press, USA, 2011.
[6]  J. F. Roddick and M. Spiliopoulou; A Bibliography of Temporal, Spatial and Spatio-Temporal Data Mining Research; ACM SIGKDD, June 1999.
[7]  H. Mannila, H. Toivonen and I. Verkamo; Discovering Frequent Episodes in Sequences; KDD'95, AAAI, 210-215, August 1995.
[8]  J. M. Ale and G. H. Rossi; An Approach to Discovering Temporal Association Rules; Proceedings of the 2000 ACM Symposium on Applied Computing, March 2000.
[9]  R. Agrawal and R. Srikant; Fast Algorithms for Mining Association Rules, Proceedings of the 20th International Conference on Very Large Databases (VLDB '94), Santiago, Chile, June 1994.
[10] R. Agrawal, T. Imielinski and A. Swami; Mining Association Rules between Sets of Items in Large Databases; Proceedings of the ACM SIGMOD '93, Washington, USA, May 1993.
[11] S. H. Chen and C. H. Hsieh; Graded Mean Integration Representation of Generalised Fuzzy Number, Proc. of Conference of Taiwan Fuzzy System Association, Taiwan, 1998.
[12] S. H. Chen and C. H. Hsieh; Graded Mean Integration Representation of Generalised Fuzzy Number, Journal of the Chinese Fuzzy System Association, Vol. 5, No. 2, pp. 1-7, Taiwan, 1999.
[13] S. H. Chen and C. H. Hsieh; Representation, Ranking, Distance, Similarity of L-R Type Fuzzy Number and Application, Australian Journal of Intelligent Information Processing Systems, Vol. 6, No. 4, pp. 217-229, Australia.
[14] X. Chen and I. Petrounias; A Framework for Temporal Data Mining; Proceedings of the 9th International Conference on Database and Expert Systems Applications, DEXA '98, Vienna, Austria; Springer-Verlag, Berlin, Lecture Notes in Computer Science 1460, 796-805, 1998.
[15] X. Chen and I. Petrounias; Language Support for Temporal Data Mining; Proceedings of the 2nd European Symposium on Principles of Data Mining and Knowledge Discovery, PKDD '98, Springer-Verlag, Berlin, 282-290, 1998.
[16] X. Chen, I. Petrounias and H. Heathfield; Discovering Temporal Association Rules in Temporal Databases; Proceedings of IADT'98 (International Workshop on Issues and Applications of Database Technology), 312-319, 1998.
[17] Y. Li, P. Ning, X. S. Wang and S. Jajodia; Discovering Calendar-based Temporal Association Rules, Proc. of the 8th Int'l Symposium on Temporal Representation and Reasoning, 2001.
[18] W. J. Lee and S. J. Lee; Discovery of Fuzzy Temporal Association Rules, IEEE Transactions on Systems, Man and Cybernetics - Part B: Cybernetics, Vol. 34, No. 6, 2330-2341, Dec. 2004.
[19] W. J. Lee and S. J. Lee; Fuzzy Calendar Algebra and Its Applications to Data Mining, Proceedings of the 11th International Symposium on Temporal Representation and Reasoning (TIME'04), IEEE, 2004.

                              AUTHOR'S PROFILE
Fokrul Alom Mazarbhuiya received the B.Sc. degree in Mathematics from Assam University, India and the M.Sc. degree in Mathematics from Aligarh Muslim University, India. After this he obtained his Ph.D. degree in Computer Science from Gauhati University, India. Since 2008 he has been serving as an Assistant Professor in the College of Computer Science, King Khalid University, Abha, Kingdom of Saudi Arabia. His research interests include Data Mining, Information Security, Fuzzy Mathematics and Fuzzy Logic.





Mohammed Abdul Khaleel received the B.Sc. degree in Mathematics from Osmania University, India and the M.C.A. degree from Osmania University, India. After that he worked at Global Suhaimi Company, Dammam, Saudi Arabia, as a Senior Software Developer. Since 2008 he has been serving as a Lecturer at the College of Computer Science, King Khalid University, Abha, Kingdom of Saudi Arabia. His research interests include Data Mining and Software Engineering.

Pervaiz Rabbani Khan received his B.Sc., B.Sc. (Hons) and M.Sc. in Physics from Punjab University, Lahore, Pakistan. After this he did an M.Sc. in Computer Science at Newcastle upon Tyne, U.K., and then obtained his Ph.D. from the same university. Since 2001 he has been working as an Assistant Professor in the College of Computer Science, King Khalid University, Abha, Kingdom of Saudi Arabia. His research interests include Simulation and Modeling, and Fuzzy Logic.





            Robust Color Image Watermarking Using
             Nonsubsampled Contourlet Transform
          C.Venkata Narasimhulu                                               K.Satya Prasad
           Professor, Dept of ECE                                             Professor, Dept of ECE,
          HIET, Hyderabad, India                                              JNTU Kakinada, India
          narasimhulucv@gmail.com                                            prasad_kodati@yahoo.co.in

     Abstract - In this paper, we propose a novel hybrid spread spectrum watermarking scheme for authentication of color images using the nonsubsampled contourlet transform and singular value decomposition. The host color image and the color watermark image are decomposed into directional sub-bands using the Nonsubsampled Contourlet Transform, and Singular Value Decomposition is then applied to the mid frequency sub-band coefficients. The singular values of the mid frequency sub-band coefficients of the color watermark image are embedded into the singular values of the mid frequency sub-band coefficients of the host color image in the Red, Green and Blue color spaces simultaneously, based on the spread spectrum technique. The experimental results show that the proposed hybrid watermarking scheme is robust against common image processing operations such as JPEG and JPEG 2000 compression, cropping, rotation, histogram equalization, low pass filtering, median filtering, sharpening, shearing, salt & pepper noise, Gaussian noise, grayscale conversion, etc. The variation of the visual quality of the watermarked image for different scaling factors is also shown. The comparative analysis reveals that the proposed watermarking scheme outperforms the color image watermarking schemes reported recently.

     Keywords: Color image watermarking, Nonsubsampled Contourlet Transform, Singular value decomposition, Peak signal to noise ratio, Normalized correlation coefficient.

                       1. INTRODUCTION:

     In recent years, multimedia products have been rapidly distributed over fast communication systems such as the Internet, so there exists a strong requirement to protect the ownership and authentication of multimedia data. Digital watermarking is a method of securing digital data by embedding additional information, called a watermark, into the digital multimedia content. This embedded information can later be extracted from, or detected in, the multimedia content to make an assertion about the data authenticity. Digital watermarks remain intact under transmission/transformation, allowing us to protect our ownership rights in digital form. Absence of the watermark in a previously watermarked image would lead to the conclusion that the data content has been modified. A watermarking algorithm consists of a watermark structure, an embedding algorithm and an extraction or detection algorithm. In multimedia applications, an embedded watermark should be invisible, robust and have a high capacity. Invisibility refers to the degree of distortion introduced by the watermark and its effect on the viewers and listeners. Robustness is the resistance of an embedded watermark against intentional attacks and normal signal processing operations such as noise, filtering, rotation, scaling, cropping and lossy compression. Capacity is the amount of data that can be represented by the embedded watermark [1].

     Watermarking techniques may be classified in different ways. The classification may be based on the type of watermark being used, i.e., the watermark may be a visually recognizable logo or a sequence of random numbers. A second classification is based on whether the watermark is applied in the spatial domain or the transform domain. In the spatial domain, the simplest method is based on embedding the watermark in the least significant bits (LSB) of image pixels. However, spatial domain techniques are not resistant enough to image compression and other image processing operations.

     Transform domain watermarking schemes, such as those based on the discrete cosine transform (DCT), the discrete wavelet transform (DWT) and contourlet transforms, along with numerical transformations such as Singular Value Decomposition (SVD) and Principal Component Analysis (PCA), typically provide higher image fidelity and are much more robust to image manipulations [2]. Of the algorithms proposed so far, wavelet domain algorithms perform better than other transform domain algorithms, since the DWT has a number of advantages over other transforms, including time-frequency localization, multi resolution representation, superior HVS modeling, linear complexity and adaptivity, and it has been proved that wavelets are good at representing point-wise discontinuities in one dimensional signals. However, in higher dimensions, e.g. images, there exist line or curve-shaped discontinuities. Since 2D wavelets are produced by tensor products of 1D wavelets, they can only identify horizontal, vertical and diagonal discontinuities (edges) in images, ignoring smoothness along contours and curves. The curvelet transform was defined to represent two
dimensional discontinuities more efficiently, with least square error in a fixed term approximation. The curvelet transform was proposed in the continuous domain, and its discretisation was a challenge when critical sampling is desired. The contourlet transform was then proposed by Do and Vetterli as an improvement on the curvelet transform. The contourlet transform is a directional multi resolution expansion which can represent images containing contours efficiently. The CT employs Laplacian pyramids to achieve multi resolution decomposition and directional filter banks to achieve directional decomposition [3]. Due to down sampling and up sampling, the contourlet transform is shift variant. However, shift invariance is desirable in image analysis applications such as edge detection, contour characterization, image enhancement [4] and image watermarking. Here, we present the Nonsubsampled Contourlet Transform (NSCT) [5], which is a shift invariant version of the contourlet transform. The NSCT is built upon iterated nonsubsampled filter banks to obtain a shift invariant image representation.

In all the above transform domain watermarking techniques, including the NSCT, the watermark bits would be directly embedded in the locations of sub band coefficients. Though the visual perception of the original image is preserved, when the watermarked image is subjected to intentional attacks like compression the watermark bits get damaged. Spatial domain watermarking schemes using numerical transformations like SVD (Gorodetski [6], Liu et al. [7]) provide good security against tampering and common manipulations for protecting rightful ownership, but these schemes are non adaptive and thus unable to offer consistent perceptual transparency of watermarking for different images. To provide adaptive transparency, robustness to compression and insensitivity to malicious manipulations, we propose a novel hybrid image watermarking scheme using NSCT and SVD.

In this paper, the proposed method is compared with another method based on the contourlet transform and singular value decomposition (CT-SVD). The peak signal to noise ratio (PSNR) between the original image and the watermarked image, and the normalized correlation coefficient (NCC) and bit error rate (BER) between the original watermark and the extracted watermark, were calculated with and without attacks. The results show a high improvement in detection reliability using the proposed method. The rest of this paper is organized as follows. Section 2 describes the Nonsubsampled Contourlet Transform, section 3 describes singular value decomposition, section 4 illustrates the details of the proposed method, in section 5 experimental results are discussed without and with attacks, and the conclusion and future scope are given in section 6.

              2. NONSUBSAMPLED CONTOURLET TRANSFORM

The Nonsubsampled Contourlet Transform is a new image decomposition scheme introduced by Arthur L. Cunha, Jianping Zhou and Minh N. Do [8]. The NSCT is more effective in representing smooth contours in different directions of an image than the contourlet transform and the discrete wavelet transform. The NSCT is a fully shift invariant, multi scale and multi direction expansion that has a fast implementation. The NSCT exhibits a similar sub band decomposition to that of contourlets, but without down samplers and up samplers in it. Because of its redundancy, the filter design problem of the nonsubsampled contourlet is much less constrained than that of the contourlet. The NSCT is constructed by combining nonsubsampled pyramids and nonsubsampled directional filter banks as shown in figure (1). The nonsubsampled pyramid structure gives the multi scale property and the nonsubsampled directional filter bank gives the directional property.

     Figure 1: The nonsubsampled contourlet transform. (a) Nonsubsampled filter bank structure that implements the NSCT. (b) Idealized frequency partitioning obtained with the NSCT.

     2.1 Nonsubsampled Pyramids

     The nonsubsampled pyramid is a two channel nonsubsampled filter bank as shown in figure 2(a). H0(z) is the low pass filter and one then sets H1(z) = 1 - H0(z); the corresponding synthesis filters are G0(z) = G1(z) = 1. The perfect reconstruction condition is given by the Bezout identity

     H0(z)G0(z) + H1(z)G1(z) = 1 ………………(1)
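As an illustration of eq. (1) (not part of the original paper), the following Python/NumPy sketch checks the perfect reconstruction property of the two channel nonsubsampled filter bank: with H1(z) = 1 - H0(z) and G0(z) = G1(z) = 1, the two analysis channels simply sum back to the input. The low pass filter h0 used here is an arbitrary example.

    import numpy as np

    # Example low-pass analysis filter h0 (any FIR filter works for this check).
    h0 = np.array([0.25, 0.5, 0.25])
    # H1(z) = 1 - H0(z): a Kronecker delta minus h0.
    h1 = -h0.copy()
    h1[0] += 1.0

    x = np.random.default_rng(1).standard_normal(64)
    y0 = np.convolve(x, h0)       # low-pass channel (no downsampling)
    y1 = np.convolve(x, h1)       # high-pass channel (no downsampling)

    # Synthesis with G0(z) = G1(z) = 1 is just the sum of the two channels.
    x_hat = (y0 + y1)[:x.size]
    print(np.allclose(x_hat, x))  # True: eq. (1) holds, perfect reconstruction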




     Figure (2): Nonsubsampled pyramidal filter. (a) Ideal frequency response of the nonsubsampled pyramidal filter. (b) The cascading analysis of a three stage nonsubsampled pyramid obtained by iterating the two channel nonsubsampled filter banks.

Multi scale decomposition is achieved from nonsubsampled pyramids by iterating the nonsubsampled filter banks: by up sampling all filters by 2 in both directions, the next level of decomposition is achieved. The complexity of filtering is constant whether the filtering is with H(z) or with an up sampled filter H(z^m) computed using the à trous algorithm. The cascading of the three stage analysis part is shown in figure 2(b).

     2.2 Nonsubsampled Directional Filter Banks:

The directional filter bank (DFB) is constructed from the combination of critically-sampled two-channel fan filter banks and resampling operations. The outcome of this DFB is a tree-structured filter bank splitting the 2-D frequency plane into wedges. The nonsubsampled directional filter bank, which is shift invariant, is constructed by eliminating the down and up samplers in the DFB. The ideal frequency response of the nonsubsampled filter banks is shown in figure 3(a).

To obtain multi directional decomposition, the nonsubsampled DFBs are iterated. To obtain the next level of decomposition, all filters are up sampled by the quincunx matrix given by

     Q = [ 1   1 ]
         [ 1  -1 ]        ……………..(2)

The analysis part of the iterated nonsubsampled filter bank is shown in figure 3(b).

     Figure (3): Nonsubsampled directional filter bank. (a) Idealized frequency response of the nonsubsampled directional filter bank. (b) The analysis part of an iterated nonsubsampled directional filter bank.
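As a small illustration (not taken from the paper), the following Python sketch shows what up sampling by the quincunx matrix of eq. (2) means for a filter's impulse response: the sample at integer index n is moved to index Qn and zeros are inserted elsewhere. The 2x2 filter h is an arbitrary example.

    import numpy as np

    def upsample_by_matrix(h, M):
        # Upsample a 2-D filter h on the lattice defined by the integer matrix M:
        # the sample at index n moves to index M @ n, with zeros elsewhere.
        idx = np.array([(i, j) for i in range(h.shape[0]) for j in range(h.shape[1])])
        new_idx = idx @ M.T                 # map every index through M
        new_idx -= new_idx.min(axis=0)      # shift to non-negative coordinates
        out = np.zeros(new_idx.max(axis=0) + 1)
        out[new_idx[:, 0], new_idx[:, 1]] = h.ravel()
        return out

    Q = np.array([[1, 1], [1, -1]])         # quincunx sampling matrix of eq. (2)
    h = np.arange(1, 5, dtype=float).reshape(2, 2)
    print(upsample_by_matrix(h, Q))         # samples of h placed on the quincunx lattice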



      3. SINGULAR VALUE DECOMPOSITION

   Singular value decomposition (SVD) is a popular technique in linear algebra and it has applications in matrix inversion, obtaining low dimensional representations of high dimensional data, data compression and data denoising. If A is any N x N matrix, it is possible to find a decomposition of the form


                        A = U S V^T

where U and V are orthogonal matrices of order N x N such that U^T U = I and V^T V = I, the diagonal matrix S of order N x N has elements λi (i = 1, 2, ..., N), and I is the identity matrix of order N x N. The diagonal entries of S are called the singular values of the matrix A, the columns of U are called the left singular vectors of A, and the columns of V are called the right singular vectors of A.

    The general properties of SVD are [1], [2], [9]:
    a) Transpose: A and its transpose A^T have the same non-zero singular values.
    b) Flip: A, the row-flipped Arf, and the column-flipped Acf have the same non-zero singular values.
    c) Rotation: A and Ar (A rotated by an arbitrary degree) have the same non-zero singular values.
    d) Scaling: B is a row-scaled version of A obtained by repeating every row L1 times. For each non-zero singular value λ of A, B has √L1 λ. C is a column-scaled version of A obtained by repeating every column L2 times. For each non-zero singular value λ of A, C has √L2 λ. If D is row-scaled L1 times and column-scaled L2 times, for each non-zero singular value λ of A, D has √(L1 L2) λ.
    e) Translation: A is expanded by adding rows and columns of black pixels. The resulting matrix Ae has the same non-zero singular values as A.

    The important properties of SVD from the viewpoint of image processing applications are:
    1. The singular values of an image have very good stability, i.e., when a small perturbation is added to an image, its singular values do not change significantly.
    2. Singular values represent intrinsic algebraic image properties.

    Due to these properties of SVD, in the last few years several watermarking algorithms have been proposed based on this technique. The main idea of this approach is to find the SVD of an original image and then modify its singular values to embed the watermark. Some SVD based algorithms are purely SVD based, in the sense that only the SVD domain is used to embed the watermark into the original image. Recently, some hybrid SVD based algorithms have been proposed, where different types of transform domain, including the discrete cosine transform (DCT), discrete wavelet transform (DWT), contourlet transform (CT), etc., have been used to embed the watermark into the original image. Here the proposed scheme uses the nonsubsampled contourlet transform (NSCT) along with SVD for watermarking to obtain better performance compared to existing hybrid algorithms.

                  4. PROPOSED ALGORITHM

    In this paper, a Nonsubsampled Contourlet Transform and SVD based hybrid technique is proposed for color image watermarking that uses true color images for both the watermark and the host images. The robustness and visual quality of the watermarked image are tested with three quantifiers: PSNR, NCC and bit error rate. It is investigated whether the advantages of NSCT-SVD over CT-SVD for color image watermarking would provide any significance in terms of watermark robustness and invisibility. Sections 4.1 and 4.2 explain the watermark embedding and extraction algorithms [10], [11].

    4.1 Watermark Embedding Algorithm

    The proposed watermark embedding algorithm is shown in Figure 4. The steps of the watermark embedding algorithm are as follows.

    Step 1: Separate the R, G, B color spaces of both the host and watermark color images.

    Step 2: Apply the Nonsubsampled Contourlet Transform to the R color space of both the host image and the watermark image to decompose them into sub bands.

    Step 3: Apply SVD to the mid frequency sub-band of the NSCT of the R color space of both the host and watermark images.

    Step 4: Modify the singular values of the mid frequency sub-band coefficients of the R color space of the host image with the singular values of the mid frequency sub-band coefficients of the R color space of the watermark image using the spread spectrum technique, i.e.

        λI' = λI + α λW,

where α is the scaling factor [9], λI is a singular value of the R color space of the host image, λW is a singular value of the R color space of the watermark and λI' becomes a singular value of the R color space of the watermarked image.

    Step 5: Apply inverse SVD on the modified singular values obtained in Step 4 to get the mid frequency sub-band coefficients of the watermarked image.

    Step 6: Apply the inverse Nonsubsampled Contourlet Transform to the mid frequency
sub-band coefficients obtained in Step 5 to get the R color space of the watermarked image.

    Step 7: Apply the same steps from Step 2 to Step 6 for the G and B color subspaces.

    Step 8: Combine the R, G and B color spaces of the watermarked image to obtain the color watermarked image.

          Figure 4: Watermark Embedding Algorithm

    4.2 Watermark Extraction Algorithm

    The watermark extraction algorithm is shown in Figure 5. The steps of the watermark extraction algorithm are as follows.

    Step 1: Separate the R, G, B color spaces of the watermarked image.

    Step 2: Apply the Nonsubsampled Contourlet Transform to the R color space obtained in Step 1.

    Step 3: Apply SVD to the mid frequency sub-band of the R color space of the transformed watermarked image.

    Step 4: Extract the singular values from the mid frequency sub-bands of the R color spaces of the watermarked and host images, i.e.

        λW = (λI' - λI) / α,

where λI' is a singular value of the watermarked image and λI a singular value of the host image.

    Step 5: Apply inverse SVD to obtain the mid frequency coefficients of the R color space of the transformed watermark image using Step 3.

    Step 6: Apply the inverse NSCT using the coefficients of the mid frequency sub-band to obtain the R color space of the watermark image.

    Step 7: Repeat Steps 2 to 6 for the G and B color spaces.

    Step 8: Combine the R, G and B color spaces to get the color watermark.

          Figure 5: Watermark Extracting Algorithm

                 5. EXPERIMENTAL RESULTS

    In the experiments, we use the true color image "tajmahal.jpg" of size 256x256 as the host image, as shown in Figure 6, and the true color image "lena.jpg" of size 128x128 as the watermark, as shown in Figure 7. The experiment is performed by taking the scaling factor alpha as 0.5. The results show that there are no perceptible visual degradations in the watermarked image shown in Figure 8, with a PSNR of 45.2253 dB. The extracted watermark without attack is shown in Figure 9, with NCC around unity and a BER of 0.1339. MATLAB 7.6 is used for testing the robustness of the proposed method.

    The proposed algorithm is tested for different host images such as "lotus.jpg", "Baboon.jpg", "Barbara.jpg", "Way.jpg", "Horse.jpg" and "Wheel.jpg" as shown in Table 1, and it is observed that there are no visual degradations in the respective
                                                                  that there are no visual degradations on the respected

watermarked images. For all the different host test images, the watermark is effectively extracted with NCC around unity. Various intentional and non-intentional attacks are tested for robustness of the proposed watermarking algorithm, including JPEG and JPEG 2000 compression, low pass filtering, rotation, histogram equalization, median filtering, salt & pepper noise, Wiener filtering, gamma correction, Gaussian noise, rescaling, sharpening, blurring, contrast adjustment, automatic cropping, dilation, row-column copying, row-column removal, color to grayscale conversion and shearing. The term robustness describes the watermark's resistance to these attacks and can be measured by the bit error rate, which is defined as the ratio of wrongly extracted bits to the total number of embedded bits.

    In Table 2, the extracted watermark and the attacked watermarked image with NCC and BER are shown. The quality and imperceptibility of the watermarked image are measured using the PSNR. The PSNR is calculated separately for the R, G and B color spaces of the watermarked image with respect to the respective color space of the host image using eq. (3) [12]. The final PSNR of the watermarked image is taken as the mean of the PSNR obtained for the three color spaces. The similarity of the extracted watermark with the original embedded watermark is measured using the NCC. The NCC is calculated using eq. (4) [13] for the three color spaces and their mean is taken as the resultant normalized correlation coefficient. The proposed method is also tested for binary and grayscale watermark images of size 128x128; the watermarked image and extracted watermark are shown in Table 3.

    PSNR = 10 log10( 255^2 / MSE )                                ……….….(3)

    Normalized Correlation Coefficient:

    NCC = Σi Σj [ W(i,j) W*(i,j) ] / sqrt( Σi Σj W(i,j)^2 · Σi Σj W*(i,j)^2 )       ………..(4)

where MSE is the mean squared error between the original and watermarked color planes and W, W* denote the original and extracted watermarks.
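The following Python/NumPy sketch (an illustration added here, not the paper's MATLAB implementation) applies the embedding rule of Step 4 in section 4.1 and the extraction rule of Step 4 in section 4.2 to a single sub-band matrix, and evaluates PSNR and NCC in the forms assumed for eqs. (3) and (4). The sub-band matrices and α = 0.5 are example inputs; the NSCT decomposition itself is not included.

    import numpy as np

    def embed(host_band, wm_band, alpha=0.5):
        # Spread-spectrum embedding on one sub-band: lambda_I' = lambda_I + alpha * lambda_W.
        U, S, Vt = np.linalg.svd(host_band, full_matrices=False)
        _, Sw, _ = np.linalg.svd(wm_band, full_matrices=False)
        S_marked = S + alpha * Sw
        return U @ np.diag(S_marked) @ Vt, S, Sw

    def extract(marked_band, S_host, alpha=0.5):
        # Recover the watermark singular values: lambda_W = (lambda_I' - lambda_I) / alpha.
        _, S_marked, _ = np.linalg.svd(marked_band, full_matrices=False)
        return (S_marked - S_host) / alpha

    def psnr(original, distorted):
        # Eq. (3): PSNR in dB for 8-bit data.
        mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    def ncc(w, w_ext):
        # Eq. (4): normalized correlation between original and extracted watermark values.
        return np.sum(w * w_ext) / np.sqrt(np.sum(w ** 2) * np.sum(w_ext ** 2))

    rng = np.random.default_rng(0)
    host = rng.uniform(0, 255, (64, 64))   # stands in for one mid-frequency sub-band of the host
    wm = rng.uniform(0, 255, (64, 64))     # stands in for the corresponding watermark sub-band
    marked, S_host, Sw = embed(host, wm, alpha=0.5)
    Sw_rec = extract(marked, S_host, alpha=0.5)
    print(psnr(host, marked), ncc(Sw, Sw_rec))

Because the singular values of the host and watermark sub-bands are both sorted in decreasing order, their α-weighted sum is again a decreasing set of singular values, so in the absence of attacks the extraction step recovers λW exactly.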




     Fig. 6: Original image "Tajmahal.jpg"      Fig. 7: Watermark image "Lena.jpg"      Fig. 8: Watermarked image, PSNR = 45.2253 dB      Fig. 9: Extracted watermark, NCC = 0.9991, BER = 0.1339

      TABLE 1: WATERMARKED AND EXTRACTED WATERMARK WITH PSNR, NCC, AND BER FOR DIFFERENT ORIGINAL IMAGES

    Original image    Watermark image    PSNR (dB)    NCC       BER
    lotus.jpg         LENA.jpg           46.2785      0.9983    0.1610
    baboon.jpg        LENA.jpg           44.8322      0.9992    0.1342
    barbara.jpg       LENA.jpg           44.4930      0.9994    0.1299
    way.jpg           LENA.jpg           44.7550      0.9994    0.1140
    horse.jpg         LENA.jpg           44.7308      0.9994    0.1201
    wheel.jpg         LENA.jpg           45.5204      0.9985    0.1614






      TABLE 2: EXTRACTED WATERMARKS WITH NCC AND BER FOR DIFFERENT ATTACKS ALONG WITH ATTACKED WATERMARKED IMAGE

    Attack                      NCC       BER
    JPEG compression            0.9985    0.3306
    JPEG 2000                   0.9995    0.1056
    Salt & pepper noise         0.6948    0.4503
    Low pass filtering          0.9729    0.2995
    Automatic cropping          0.9538    0.3449
    Histogram equalization      0.9808    0.3128
    Rotation                    0.9951    0.2958
    Median filtering            0.9484    0.3178
    Contrast adjustment         0.9985    0.1613
    Wiener filter               0.9982    0.2051
    Gamma correction            0.9989    0.1387
    Gaussian noise              0.8399    0.3120
    Sharpening                  0.8379    0.3967
    Gaussian blurring           0.9719    0.3003
    Shearing                    0.9744    0.2889
    Dilation                    0.9443    0.3332
    Color to grayscale          0.8163    0.3490
    Row & column removal        0.9977    0.1930
    Row & column copying        0.9902    0.9734
    Scaling (150%)              0.9187    -




                                                                                                                                 


      TABLE 3: WATERMARKED AND EXTRACTED WATERMARK IMAGES FOR BINARY AND GRAYSCALE WATERMARKS

    Original image    Watermark image           PSNR (dB)    NCC       BER
    tajmahal.jpg      ksp.bmp (binary)          47.6710      0.9995    0.0157
    tajmahal.jpg      lena.bmp (binary)         Inf          1         0
    tajmahal.jpg      Lena.jpg (grayscale)      45.2629      0.9992    0.1345


    In Table 4, the proposed method is compared with the contourlet transform and SVD based algorithm [11]. It demonstrates that the proposed method is superior for attacks such as salt & pepper noise, rotation, Gaussian noise, sharpening, row and column removal, and row and column copying.


           TABLE 4: COMPARISON OF CT+SVD AND NSCT+SVD

    S.No    Attack                    Normalized Correlation
                                      NSCT+SVD    CT+SVD
    1       JPEG compression          0.9985      0.9996
    2       JPEG 2000                 0.9995      0.9996
    3       Salt & pepper noise       0.6948      0.6823
    4       Low pass filtering        0.9729      0.9839
    5       Automatic cropping        0.9538      0.9658
    6       Histogram equalization    0.9808      0.9733
    7       Rotation                  0.9958      0.9750
    8       Median filtering          0.9484      0.9680
    9       Contrast adjustment       0.9985      0.9991
    10      Wiener filter             0.9982      0.9989
    11      Gamma correction          0.9989      0.9995
    12      Gaussian noise            0.8399      0.7538
    13      Sharpening                0.8379      0.8212
    14      Gaussian blurring         0.9719      0.9841
    15      Shearing                  0.9744      0.9857
    16      Dilation                  0.9443      0.9678
    17      Color to grayscale        0.8163      0.8693
    18      Row & column removal      0.9977      0.9972
    19      Row & column copying      0.9902      0.9820
    20      Scaling (150%)            0.9187      0.9417

                   6. CONCLUSION:

In this paper, a novel robust hybrid watermarking scheme is proposed for the authentication of color images using the nonsubsampled contourlet transform and singular value decomposition. The watermark is embedded in all color spaces of the host image by modifying the singular values of the mid frequency sub band coefficients with respect to the watermark mid frequency sub band coefficients with a suitable scaling factor. The robustness of the watermark to common image processing operations is improved by combining the concepts of the nonsubsampled contourlet transform and singular value decomposition. The proposed algorithm is tested for different host images and the respective watermark images are obtained without any visual degradation. The proposed algorithm preserves high perceptual quality of the watermarked image and shows excellent robustness to attacks like salt and pepper noise, Gaussian noise, row column copying, and row column removal.

                   7. REFERENCES:

[1]  C. Venkata Narasimhulu and K. Satya Prasad, "A novel robust watermarking technique based on nonsubsampled contourlet transform and SVD," International Journal of Multimedia and Applications, vol. 3, no. 1, Feb. 2011.
[2]  C. Venkata Narasimhulu and K. Satya Prasad, "A hybrid watermarking scheme using contourlet transform and singular value decomposition," IJCSNS: International Journal of Computer Science and Network Security, vol. 10, no. 9, Sep. 2010.
[3]  Minh N. Do and Martin Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091-2106, Dec. 2005.
[4]  Jianping Zhou, A. L. Cunha and M. N. Do, "Nonsubsampled contourlet transform: construction and application in enhancement," IEEE Trans. Image Proc., Sept. 2005.
[5]  Arthur L. Cunha, J. Zhou and M. N. Do, "Nonsubsampled contourlet transform: filter design and applications in denoising," IEEE International Conference on Image Processing, September 2005.
[6]  V. I. Gorodetski, L. J. Popyack and V. Samoilov, "SVD-based approach to transparent embedding data into digital images," in Proc. Int. Workshop MMM-ACNS, St. Petersburg, Russia, May 2001, pp. 263-274.
[7]  R. Liu and T. Tan, "An SVD-based watermarking scheme for protecting rightful ownership," IEEE Trans. Multimedia, vol. 4, no. 1, pp. 121-128, Mar. 2002.
[8]  A. L. Cunha, J. Zhou and M. N. Do, "The nonsubsampled contourlet transform: theory, design and applications," IEEE Trans. Image Processing, vol. 15, no. 10, October 2006.
[9]  Emir Ganic and Ahmet M. Eskicioglu, "Robust embedding of visual watermarks using discrete wavelet transform and singular value decomposition," Journal of Electronic Imaging, vol. 14, 043004, 2005; doi:10.1117/1.2137650, published 12 December 2005.
[10] Dongyan Liu, Wenbo Liu and Gong Zhang, "An adaptive watermarking scheme based on nonsubsampled contourlet transform for color image authentication," Proceedings of the 2008 9th International Conference for Young Computer Scientists, ISBN: 978-0-7695-3398-8.
[11] C. Venkata Narasimhulu and K. Satya Prasad, "A new SVD based hybrid color image watermarking for copyright protection using contourlet transform," communicated to the International Journal of Computer and Applications (IJCA), March 2011.
[12] Ashraf K. Helmy and GH. S. El-Taweel, "Authentication scheme based on principal component analysis for satellite images," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 2, no. 3, September 2009.
[13] MATLAB 7.6, Image Processing Toolbox.

     AUTHORS PROFILE:

C. V. Narasimhulu received his Bachelor's degree in Electronics and Communication Engineering from S.V. University, Tirupati, India in 1995 and his Master of Technology in Instruments and Control Systems from Regional Engineering College Calicut, India in 2000. He is currently pursuing the Ph.D. degree in the Department of Electronics and Communication Engineering at Jawaharlal Nehru Technological University Kakinada, India. He has more than 15 years of experience teaching at the undergraduate and postgraduate level. He is interested in the areas of signal processing and multimedia security.

K. Satya Prasad received his Ph.D. degree from IIT Madras, India. He is presently working as a Professor in the ECE Department, JNTU College of Engineering Kakinada, and as Rector of JNT University, Kakinada, India. He has more than 30 years of teaching and research experience. He has published 30 research papers in international and 20 research papers in national journals. He has guided 8 Ph.D. theses and 20 Ph.D. theses are under his guidance. His areas of interest are digital signal and image processing, communications, ad hoc networks, etc.




 

 



     Parallel Implementation of Compressed Sensing
                Algorithm on CUDA- GPU
         Kuldeep Yadav1, Ankush Mittal 2                                                     M. A. Ansar3, Avi Srivastava4
                                               1, 2, 4
         Computer Science and Engineering                                                    Galgotia College of Engineering
         College of Engineering Roorkee                                                      Gr. Noida, INDIA
          Roorkee-247667, INDIA                                                              ma.ansari@ieee.org3
         Kul82_deep@rediffmail.com1                                                          aviarodonix12@yahoo.com4
         dr.ankush.mittal@gmail.com2

Abstract - In the field of biomedical imaging, compressed sensing (CS) plays an important role because compressed sensing, a joint compression and sensing process, is an emerging field of activity in which the signal is sampled and simultaneously compressed at a greatly reduced rate. Compressed sensing is a new paradigm for signal, image and function acquisition. In this paper we have worked on the Basis Pursuit algorithm for compressed sensing. We have computed the time for running this algorithm on a CPU (Intel® Core™2 Duo T8100 @ 2.1 GHz with 3.0 GB of main memory, running Windows XP). The next step was to convert this code to run on a GPU, an NVIDIA GeForce 8400M GS with 256 MB of DDR2 video memory and a 64-bit bus width; the graphics driver used is the 197.15 series from NVIDIA. Both the CPU and GPU versions of the algorithm are implemented in Matlab R2008b. The CPU version of the algorithm is analyzed in plain Matlab, while the GPU version is implemented with the help of the intermediate software Jacket v1.3. To use Jacket, we have to make some changes in our source code so that the CPU and GPU work simultaneously, thus reducing the overall computation time of the algorithm. Graphics Processing Units (GPUs) are emerging as powerful parallel systems at a cheap cost of a few thousand rupees. We obtained a speedup of around 8X for most of the biomedical images; six of them have been included in this paper and can be analyzed via the profiler.

Keywords — compressive sensing, Basis Pursuit algorithm, Jacket v1.3, GPU, medical image processing, high performance computing.

                     I. INTRODUCTION

Recent papers [1, 2, 3, 4] have introduced the concept known as compressed sensing (among other related terms). The basic principle is that sparse or compressible signals can be reconstructed from a surprisingly small number of linear measurements, provided that the measurements satisfy an incoherence property. Such measurements can then be regarded as a compression of the original signal, which can be recovered if it is sufficiently compressible. A few of the many potential applications are medical image reconstruction [5], image acquisition [6], and sensor networks [7]. The first algorithm presented in this context is known as basis pursuit [8]. Let Φ be an M × N measurement matrix, and Φx = b the vector of M measurements of an N-dimensional signal x. The reconstructed signal u* is the minimizer of the L1 norm subject to the data: min ||u||_1 subject to Φu = b. A remarkable result of Candès and Tao [9] is that if, for example, the rows of Φ are randomly chosen, Gaussian distributed vectors, there is a constant C such that if the support of x has size K and M ≥ CK log(N/K), then the solution will be exactly u* = x with overwhelming probability. The required C depends on the desired probability of success, which in any case tends to one as N → ∞. Donoho and Tanner [10] have computed sharp reconstruction thresholds for Gaussian measurements, so that for any choice of sparsity K and signal size N, the required number of measurements M to recover x can be determined precisely. In this study, we implemented the Basis Pursuit algorithm on an NVIDIA GeForce 8400 GPU with the Compute Unified Device Architecture (CUDA) programming environment. Hence, we have chosen to implement it, and we hope that other GPGPU researchers in the field will also make the same choice to standardize the performance comparisons.
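As an aside added here for illustration (it is not part of the original paper, which works in Matlab and Jacket), the basis pursuit problem min ||u||_1 subject to Φu = b can be sketched in a few lines of Python by recasting it as a linear program with the standard splitting u = u+ - u-; the snippet assumes NumPy and SciPy are available.

    # Illustrative sketch only: basis pursuit via linear programming (SciPy).
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    N, M, K = 128, 64, 5                                          # signal length, measurements, sparsity
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse signal
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)                # Gaussian measurement matrix
    b = Phi @ x                                                   # M linear measurements

    # min ||u||_1  s.t.  Phi u = b, with u = u_plus - u_minus and u_plus, u_minus >= 0
    c = np.ones(2 * N)                                            # objective: sum(u_plus) + sum(u_minus)
    A_eq = np.hstack([Phi, -Phi])                                 # Phi (u_plus - u_minus) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u = res.x[:N] - res.x[N:]
    print("max reconstruction error:", np.max(np.abs(u - x)))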




                     II. BACKGROUND

In this section, we discuss Jacket v1.3, which brings graphics processors to general-purpose computing. Over the past few years, specialized coprocessors, from floating point hardware to field programmable gate arrays, have enjoyed a widening performance gap with traditional x86-based processors. Of these, graphics processing units (GPUs) have advanced at an astonishing rate, currently capable of delivering over 1 TFLOPS of single precision performance and over 300 GFLOPS of double precision while executing up to 240 simultaneous threads in one low-cost package. As such, GPUs have gained significant popularity as powerful tools for high performance computing (HPC), achieving 20-100 times the speed of their x86 counterparts in applications such as physics simulation, computer vision, options pricing, sorting, and search. As in previous research, studies based on graphics processing units (GPUs) provide fast implementations; however, only a small number of these GPU-based studies concentrate on compressed sensing. Since the GPU we have used (NVIDIA 8400M GS) is the most basic model, it has high portability and is easily available in present-day laptops and desktops, so the implementation can be used directly. However, synchronizing the host and device with a suitable parallel implementation is the most challenging part, and this is what we have parallelized. Basis Pursuit algorithms cannot be parallelized straightforwardly because of the distribution part, so our solution provides a balanced and data-distributed parallelization framework of the Basis Pursuit algorithm on CUDA without compromising numerical precision. We have broken the process into blocks of threads and managed those threads inside special thread-managing hardware, the GPU, with the help of the environment and set of libraries provided by CUDA.

A. Jacket Overview

Jacket connects Matlab to the GPU. Matlab is a technical computing language that integrates computation, visualization and programming in an easy-to-use environment that has found wide popularity both in industry and academia. It is used across the breadth of technical computing applications, including mathematical computation, algorithm development, data analysis, data visualization and application development. With the GPU as a backend computation engine, Jacket brings together the best of three important computational worlds: computational speed, visualization, and the user friendliness of M programming. Jacket enables developers to write and run code on the GPU in the native M language used in Matlab. Jacket accomplishes this by automatically wrapping the M language into a GPU-compatible form. By simply casting input data to Jacket's GPU data structure, Matlab functions are transformed into GPU functions. Jacket also preserves the interpretive nature of the M language by providing real-time, transparent access to the GPU compiler.

B. Integration with Matlab

Once Jacket is installed, it is transparently integrated with Matlab's user interface, and the user can start working interactively through the Matlab desktop and command window as well as write M-functions using the Matlab editor and debugger. All Jacket data is visible in the Matlab workspace, along with any other Matlab matrices.

C. GPU Data Types

Jacket provides GPU counterparts to MATLAB's CPU data types, such as real and complex double, single, uint32, int32, logical, etc. Any variable residing in the host (CPU) memory can be cast to Jacket's GPU data types. Jacket's memory management system allocates and manages memory for these variables on the GPU automatically, behind the scenes. Any functions called on GPU data will execute on the GPU automatically without any extra programming.
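The casting behaviour described above is specific to Jacket's M-language interface. As a rough analogue added here for illustration (it is not the Jacket API), the Python/CuPy sketch below shows the same idea, assuming a CUDA-capable GPU and the cupy package are available: host data is cast to a device array once, and subsequent array operations then execute on the GPU.

    # Hedged analogue of GPU data casting (CuPy, not Jacket); needs a CUDA GPU.
    import numpy as np
    import cupy as cp

    a_host = np.random.rand(1024, 1024).astype(np.float32)   # ordinary host (CPU) array
    a_gpu = cp.asarray(a_host)           # cast to a GPU array, analogous to Jacket's cast

    # The same high-level operations now run on the GPU.
    b_gpu = cp.fft.fft2(a_gpu) * 2.0 + cp.sin(a_gpu)

    b_host = cp.asnumpy(b_gpu)           # copy the result back to host memory
    print(b_host.dtype, b_host.shape)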



GPU functions: Jacket provides the largest available set of GPU functions in the world, ranging from functions like sum, sine, cosine, and complex arithmetic to more sophisticated functions like matrix inverse, singular value decomposition, Bessel functions, and Fast Fourier Transforms, and the supported set of functions continues to grow with every release of Jacket (see the Function Reference Guide). The Jacket runtime is the most advanced GPU runtime in the world, providing automated memory management, compile-on-the-fly, and execution optimizations for Jacket-enabled code. Jacket's Graphics Toolbox is the only tool in the world that enables a merger of GPU visualization with computation: with Jacket, a simple graphics command can be added at the end of a simulation loop to visualize data as it is being computed while maintaining performance. The Developer SDK makes integration of custom CUDA code into Jacket's runtime easy; with a few simple SDK functions, custom CUDA code can benefit from the optimized Jacket platform. When Jacket applications have completed the development, test, and optimization stages and are ready for deployment, the Jacket MATLAB Compiler allows users to generate license-free executables for distribution to larger user bases (see the SDK and JMC wiki pages). Interactive help for any Jacket function is available using Jacket's ghelp function.

                     III. IMPLEMENTATION

In this section, we first introduce the general scheme of the Basis Pursuit algorithm. Then, we introduce our GPU implementation environment by first discussing why GPUs are a good fit for medical imaging applications and then presenting NVIDIA's CUDA platform and the GeForce 8400M architecture. Next, we describe the CPU implementation environment. This is followed by a description of the test data used in the experiments. Finally, we provide the list of CUDA kernels used in our GPU implementation.

A. Basis Pursuit Compression Algorithm

This algorithm takes an image as input and reads it as a matrix. It then decomposes the image into blocks, and the blocks into many columns. Each column is then processed and compressed. Each of the compressed columns is reconstructed into compressed blocks. Each compressed block is then reconstructed and sampled to get the final compressed image. Six functions are used in this algorithm: bp_basis, bp_decompose, bp_construct, bp_block_decompose, bp_block_construct and imagproc. These functions are collectively used to decompose the image into blocks, compress the image, and reconstruct it by sampling it again. The general scheme of the algorithm is shown below.

/* Pseudo code of the Basis Pursuit algorithm */
1. Load a real-time RGB image.
2. Assign it to a double-precision matrix.
3. repeat
       for each row of blocks
4.         Using CUDA threads, decompose column-wise by the basis pursuit algorithm.
           Find the decomposed matrix.
       end for
5. until all three colour blocks are decomposed.
6. repeat
       for each row of blocks
7.         In parallel, using CUDA threads, reconstruct column-wise by the basis pursuit algorithm.
           Consider the complex matrix also.
           Find the reconstructed matrix.
       end for
8. until all three colour blocks are reconstructed.
9. Combine all three colour blocks to give a matrix.
10. Convert the double-precision compressed image data to unsigned int values.
11. Scale the image data.
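To make the block-and-column data flow above concrete, the following Python/NumPy outline is added for illustration; it is not the authors' Matlab code, and compress_column/reconstruct_column are hypothetical stand-ins for the bp_* routines. It walks a single colour channel block by block, compresses each column, and reassembles the result.

    # Outline of the block/column data flow only; compress_column and
    # reconstruct_column are hypothetical stand-ins for the paper's BP routines.
    import numpy as np

    def compress_column(col, keep=0.25):
        # placeholder "compression": keep only the largest-magnitude entries
        k = max(1, int(keep * col.size))
        out = np.zeros_like(col)
        idx = np.argsort(np.abs(col))[-k:]
        out[idx] = col[idx]
        return out

    def reconstruct_column(col):
        return col                      # placeholder reconstruction

    def process_channel(channel, block=16):
        h, w = channel.shape
        result = np.zeros((h, w), dtype=float)
        for i in range(0, h, block):
            for j in range(0, w, block):
                blk = channel[i:i + block, j:j + block].astype(float)
                cols = [reconstruct_column(compress_column(blk[:, c]))
                        for c in range(blk.shape[1])]
                result[i:i + block, j:j + block] = np.stack(cols, axis=1)
        return result

    rgb = np.random.randint(0, 256, (256, 256, 3))               # stands in for the RGB image
    out = np.stack([process_channel(rgb[..., c]) for c in range(3)], axis=-1)
    print(out.shape)                                             # (256, 256, 3)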




B. GPU Implementation Environment

We have implemented a GPU version of Basis Pursuit (BP) with NVIDIA's GPU programming environment, CUDA v0.9. The era of single-threaded processor performance increases has come to an end. Programs will only increase in performance if they utilize parallelism. However, there are different kinds of parallelism. For instance, multicore CPUs provide task-level parallelism. On the other hand, GPUs provide data-level parallelism. Depending on the application area, the type of the preferred parallelism might change; hence, GPUs are not a good fit for all problems. However, medical imaging applications are very suitable to be implemented on GPU architecture, because these applications intrinsically have data-level parallelism with high compute requirements, and GPUs provide the best cost-per-performance parallel architecture for implementing such algorithms. In addition, most medical imaging applications (e.g. semi-automatic segmentation) require, or benefit from, visual interaction, and GPUs naturally provide this functionality. Hence, the use of the GPU in non-graphics related, highly parallel applications such as medical imaging became much easier than before. Previously, such work had to be mapped to the graphics API; since it is cumbersome to use graphics APIs for non-graphics tasks such as medical applications, NVIDIA introduced CUDA to perform data-parallel computations on the GPU without the graphics-centric nature of previous environments, hiding GPU-specific details and allowing the programmer to think in terms of memory and math operations as in CPU programs, instead of the primitives, fragments, and textures that are specific to graphics programs. CUDA is available for the NVIDIA GeForce 8400 (G80) series and beyond. The GeForce 8400 GS model has 256 MB of DDR2 video memory and a 64-bit bus width. The memory bandwidth of the GeForce 8400 GTX is 80+ GB/s. To get the best performance from the G80 architecture, we have to keep 128 processors occupied and hide memory latency. In order to achieve this goal, CUDA runs hundreds or thousands of fine, lightweight threads in parallel. In CUDA, programs are expressed as kernels. Kernels have a Single Program Multiple Data (SPMD) programming model, which is essentially a Single Instruction Multiple Data (SIMD) programming model that allows limited divergence in execution. A part of the application that is executed many times, but independently on different elements of a dataset, can be isolated into a kernel that is executed on the GPU in the form of many different threads. Kernels run on a grid, which is an array of blocks, and each block is an array of threads. Blocks are mapped to multiprocessors within the G80 architecture, and each thread is mapped to a single processor. Threads within a block can share memory on a multiprocessor, but two threads from two different blocks cannot cooperate. The GPU hardware performs switching of threads on multiprocessors to keep processors busy and hide memory latency. Thus, thousands of threads can be in flight at the same time, and CUDA kernels are executed on all elements of the dataset in parallel. We would like to mention that in our implementation, increasing the dataset size does not have an effect on the shared memory usage. This is because, to deal with larger datasets, we only increase the number of blocks and keep the shared memory allocations in a thread, as well as the number of threads in a block, the same.
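The kernel/grid/block organisation described above can be illustrated with a minimal kernel. The sketch below is added for illustration and is not taken from the paper (the authors work through Matlab and Jacket); it uses Python with Numba's CUDA support and assumes a CUDA-capable GPU is present.

    # Minimal illustration of the CUDA kernel/grid/block model (Numba, not Jacket).
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_kernel(x, alpha, out):
        i = cuda.grid(1)                # global thread index across the whole grid
        if i < x.size:                  # guard: the grid may be larger than the data
            out[i] = alpha * x[i]

    x = np.arange(1 << 20, dtype=np.float32)
    out = np.zeros_like(x)

    threads_per_block = 128
    blocks_per_grid = (x.size + threads_per_block - 1) // threads_per_block
    scale_kernel[blocks_per_grid, threads_per_block](x, 1.5, out)   # launch the grid
    print(out[:4])                      # [0.  1.5  3.  4.5]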



C. CPU Implementation Environment

The CPU version of Basis Pursuit is implemented in Matlab with integration of Jacket v1.3. The computer used for the CPU implementation is an Intel® Core™2 Duo T8100 @ 2.1 GHz with 3.0 GB of main memory, running Windows XP (SP2). The CPU implementation was executed on this machine in both single-threading mode and multi-threading mode; OpenMP is used to implement the multi-threading part.

D. Test Data of (BP) Compressed Results

Our test data consists of six MRI images, shown in Figure 1, which are used to measure the performance on the CPU and GPU with the help of the profiler. Figures 2 and 3 below show the profiler results for the CPU and GPU based implementations. It is clear from the figures that a significant speedup is obtained for Basis Pursuit on the GPU in comparison to the CPU implementation.




                     Figure 1: Test data images used to measure the performance: (a) MRI1, (b) MRI2, (c) MRI3, (d) MRI4, (e) MRI5, (f) MRI6.




                                                    Figure 2: Profile on CPU




                                                    Figure 3: Profile on GPU




                 5. Experiments and Results

In this section, we first compare the runtimes of our GPU and CPU implementations for datasets with different sizes and present our speedup. Then, we show visual results by providing slices from one of the datasets. Next, we provide the breakdown of the GPU implementation runtime across the CUDA kernels and present GFLOP (giga floating point operations = 10^9 FLOP) and GiB (gibibytes = 2^30 bytes) instruction and data throughput information. This is followed by a discussion of the factors that limit our performance. Finally, we compare our implementation with Sharp et al.'s implementation and highlight our improvements. We measure the performance of the images on the CPU and GPU with the help of the profiler; Figures 2 and 3 show the profiler results for the CPU and GPU based implementations. It is clear from the figures that a significant speedup is obtained. We have achieved around an 8X speedup of the GPU over a single-threaded CPU implementation. The speedup can be calculated using the following formula: speedup = (time taken to compress the image on the CPU) / (time taken to compress the image on the GPU).




                          Table 1: Performance of the GPU implementation with respect to the CPU implementation

S. No. | Image | Size      | Time to run program on CPU (sec) | Time to run BP_DECOMPOSE on CPU (sec) | Time to run BP_DECOMPOSE on GPU (sec) | Depth
1.     | MRI1  | 256 x 256 | 118.52                           | 105.2                                 | 14.06                                 | 512
2.     | MRI2  | 256 x 256 | 136.47                           | 123.49                                | 20.31                                 | 1024
3.     | MRI3  | 256 x 256 | 139.56                           | 125.159                               | 18.19                                 | 1024
4.     | MRI4  | 256 x 256 | 98.87                            | 84.25                                 | 11.90                                 | 512
5.     | MRI5  | 256 x 256 | 120.34                           | 106.1                                 | 16.31                                 | 512
6.     | MRI6  | 256 x 256 | 138.47                           | 124.50                                | 21.02                                 | 1024
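As a worked check of the speed-up formula (an illustration added here, not a result reported by the authors): for MRI1 in Table 1, the BP_DECOMPOSE step takes 105.2 s on the CPU and 14.06 s on the GPU, so speedup = 105.2 / 14.06 ≈ 7.5X (or 118.52 / 14.06 ≈ 8.4X if the whole-program CPU time is used), consistent with the reported figure of around 8X.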




                                     Figure 4: Comparative graphical results on CPU and GPU



                 6. CONCLUSION


The work presented in this paper addresses one of the major problems of the clinical application of compressed sensing to bio-medical imaging, namely its high computational cost, by using the currently available Jacket software and an NVIDIA GeForce 8400M GS GPU with 256 MB of DDR2 video memory and a 64-bit bus width. We reduced the execution time from 50 seconds to 6 seconds for several bio-medical images and obtained a system speedup of about 8X.



                               REFERENCES

[1]  Rick Chartrand and Wotao Yin, "Iteratively reweighted algorithms for compressed sensing," in IEEE ICASSP 2008.
[2]  D. L. Donoho, "Compressed sensing," IEEE Trans. on Information Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
[3]  http://www.dsp.ece.rice.edu/cs
[4]  E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, 2006.
[5].           M. Lustig, J. M. Santos, J.-H. Lee, D. L. Donoho, and J. M. Pauly,
                “Application of compressed sensing for rapid MR imaging,” in
                SPARS, (Rennes, France), 2005.
[6]            D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S.
                Sarvotham, K. F. Kelly, and R.G. Baraniuk, “A new compressive
                imaging camera architecture using optical-domain compression,” in
                Computational Imaging IV - Proceedings of SPIE-IS and T
                Electronic Imaging, vol. 6065, 2006.
[7]            M. F. Duarte, S. Sarvotham, D. Baron, M. B. Wakin, and R. G.
                Baraniuk, “Distributed compressed sensing of jointly sparse
                signals,” in 39th Asilomar Conference on Signals, Systems and
                Computers, pp. 1537–1541, 2005.
[8]            S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic
                decomposition by basis pursuit,” SIAM J. Sci. Comput., vol. 20, pp.
                33–61, 1998.
[9]            E. Candès and T. Tao, "Near optimal signal recovery from random
                projections: universal encoding strategies,” IEEE Trans. Inf.
                Theory, vol. 52, pp. 5406–5425, 2006.
[10] D. L. Donoho and J. Tanner, “Thresholds for the recovery of sparse
      solutions via L1 minimization,” in 40th Annual Conference on
      Information Sciences and Systems, pp. 202–206, 2006.





                     FUZZY HRRN CPU SCHEDULING
                            ALGORITHM
                                       1
                                           Bashir Alam, 1 M.N. Doja, 2R. Biswas, 3M. Alam
                          1
                          Department of Computer Engineering, Jamia Millia Islamia, New Delhi, India
                 2
                 Department of Computer Science and Engineering, Manav Rachna University, Faridabad, India
                            3
                               Department of Computer Science, Jamia Millia Islamia, New Delhi, India
               Email: babashiralam@gmail.com, ndoja@yahoo.com, ranjitbiswas@yahoo.com, mansaf@lycos.com



Abstract— There are several scheduling algorithms like FCFS, SRTN, RR, priority, etc. Scheduling decisions of these algorithms are based on parameters which are assumed to be crisp. However, in many circumstances these parameters are vague. The vagueness of these parameters suggests that the scheduler should use a fuzzy technique in scheduling the jobs. In this paper we propose a novel CPU scheduling algorithm, Fuzzy HRRN, that incorporates fuzziness in basic HRRN using a fuzzy inference system (FIS).

Keywords: HRRN, CPU Scheduling, FIS, Fuzzy Logic

    1.   INTRODUCTION

When a computer is multiprogrammed, it frequently has multiple processes competing for the CPU at the same time. When more than one process is in the ready state and there is only one CPU available, the operating system must decide which process to run first. The part of the operating system that makes this choice is called the short-term scheduler or CPU scheduler. The algorithm that it uses is called the scheduling algorithm. There are several scheduling algorithms. Different scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. Many criteria have been suggested for comparing CPU scheduling algorithms and deciding which one is the best algorithm [1]. Some of the criteria include the following:
(i) Fairness
(ii) CPU utilization
(iii) Throughput
(iv) Turnaround time
(v) Waiting time
(vi) Response time
It is desirable to maximize CPU utilization and throughput, to minimize turnaround time, waiting time and response time, and to avoid starvation of any process. [1, 2]
Some of the scheduling algorithms are briefly described below:
(i) FCFS: In First Come First Served scheduling, the process that requests first is scheduled for execution. [1, 2, 3]
(ii) SJF: In Shortest Job First scheduling, the process with the minimum burst time is scheduled for execution. [1, 2]
(iii) SRTN: In Shortest Remaining Time Next scheduling, the process with the shortest remaining time is scheduled for execution. [3]
(iv) Priority: In priority scheduling, the process with the highest priority is scheduled for execution.
(v) Round-robin: The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum. [1, 2, 3]
(vi) Multilevel queue scheduling: The ready queue is partitioned into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process such as memory size, process priority or process type. Each queue has its own scheduling algorithm. There is scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling; each queue has absolute priority over lower-priority queues. [1]
(vii) Multilevel feedback-queue scheduling: This allows a process to move between queues. [1]
(viii) Fair-share scheduling: A fair-share scheduler considers the execution history of a related group of processes, along with the individual execution history of each process, in making scheduling decisions. The user community is divided into fair-share groups. Each group is allocated a fraction of CPU time. Scheduling is done on the basis of the priority of the process, its recent processor usage and the recent processor usage of the group to which the process belongs. Each process is assigned a base priority. The priority of a process drops as the process uses the processor and as the group to which the process belongs uses the processor. [3]




(ix) Guaranteed scheduling: A ratio of the actual CPU time a process has had to its entitled CPU time is calculated. The process with the lowest ratio is scheduled. [2]
(x) Lottery scheduling: The basic idea is to give processes lottery tickets for CPU time. Whenever a scheduling decision has to be made, a lottery ticket is chosen at random and the process holding the ticket gets the CPU. [2]
(xi) HRRN: A response ratio is calculated for each process. The process with the highest ratio is scheduled for execution. [3]

In all the above scheduling algorithms the parameters used are crisp. However, in many circumstances these parameters are vague [9]. To exploit this vagueness we have used fuzzy logic in our proposed scheduling algorithm. We have also carried out a simulation comparing the performance of the basic HRRN scheduling algorithm and the Fuzzy HRRN scheduling algorithm.

    2.   FUZZY INFERENCE SYSTEMS AND FUZZY LOGIC

A fuzzy inference system (FIS) tries to derive answers from a knowledgebase by using a fuzzy inference engine. The inference engine, which is considered to be the brain of an expert system, provides the methodologies for reasoning about the information in the knowledgebase and formulating the results. Fuzzy logic is an extension of Boolean logic dealing with the concept of partial truth, which denotes the extent to which a proposition is true. Whereas classical logic holds that everything can be expressed in binary terms (0 or 1, black or white, yes or no), fuzzy logic replaces Boolean truth values with degrees of truth. Degrees of truth are often employed to capture the imprecise modes of reasoning that play an essential role in the human ability to make decisions in an environment of uncertainty and imprecision. The membership function of a fuzzy set corresponds to the indicator function of a classical set. It can be expressed in the form of a curve that defines how each point in the input space is mapped to a membership value, or degree of truth, between 0 and 1. The most common shape of a membership function is triangular, although trapezoidal and bell curves are also used. The input space is sometimes referred to as the universe of discourse [4]. Fuzzy inference systems are conceptually very simple. An FIS consists of an input stage, a processing stage, and an output stage. The input stage maps the inputs, such as deadline, execution time, and so on, to the appropriate membership functions and truth values. The processing stage invokes each appropriate rule and generates a result for each; it then combines the results of the rules. Finally, the output stage converts the combined result back into a specific output value [4]. As discussed earlier, the processing stage, which is called the inference engine, is based on a collection of logic rules in the form of IF-THEN statements, where the IF part is called the "antecedent" and the THEN part is called the "consequent". Typical fuzzy inference subsystems have dozens of rules. These rules are stored in a knowledgebase. An example of a fuzzy IF-THEN rule is: IF Remaining Time is extremely short THEN priority is very high, in which Remaining Time and priority are linguistic variables, and extremely short and very high are linguistic terms. The five steps toward a fuzzy inference are as follows:
• fuzzifying inputs
• applying fuzzy operators
• applying implication methods
• aggregating outputs
• defuzzifying results
Below is a quick review of these steps; a detailed study is not in the scope of this paper. Fuzzifying the inputs is the act of determining the degree to which they belong to each of the appropriate fuzzy sets via membership functions. Once the inputs have been fuzzified, the degree to which each part of the antecedent has been satisfied for each rule is known. If the antecedent of a given rule has more than one part, the fuzzy operator is applied to obtain one value that represents the result of the antecedent for that rule. The implication function then modifies the output fuzzy set to the degree specified by the antecedent. Since decisions are based on the testing of all of the rules in the fuzzy inference system (FIS), the results from each rule must be combined in order to make the final decision. Aggregation is the process by which the fuzzy sets that represent the outputs of each rule are processed into a single fuzzy set. The input for the defuzzification process is the aggregated output fuzzy set, and the output is a single crisp value [4]. This can be summarized as follows: mapping input characteristics to input membership functions, input membership functions to rules, rules to a set of output characteristics, output characteristics to output membership functions, and the output membership function to a single crisp-valued output. There are two common inference methods [4]. The first is Mamdani's fuzzy inference method, proposed in 1975 by Ebrahim Mamdani [5], and the second is the Takagi-Sugeno-Kang, or simply Sugeno, method of fuzzy inference, introduced in 1985 [6]. These two methods are the same in many respects, such as the procedure of fuzzifying the inputs and applying fuzzy operators. The main difference between Mamdani and Sugeno is that Sugeno's output membership functions are either linear or constant, whereas Mamdani's inference expects the output membership functions to be fuzzy sets. Sugeno's method has three advantages. Firstly, it is computationally efficient, which is an essential benefit to real-time systems. Secondly, it works well with optimization and adaptive techniques; these adaptive techniques provide a method for the fuzzy modeling procedure to extract proper knowledge about a data set, in order to compute the membership function parameters that best allow the associated fuzzy inference system to track the given input/output data. The third advantage of Sugeno-type inference is that it is well suited to mathematical analysis. The block diagram of the proposed fuzzy inference system is given in Figure 1.




    In the proposed model, the input stage consists of three linguistic variables. The first one is the static priority that is assigned to the process before its execution. The second is the expected remaining time of the process. The third input is the response ratio of the process. The output stage consists of one linguistic variable called dynamic priority.

               Figure 1: Block diagram of the FIS (inputs: Static Priority, Remaining Time, Response Ratio; Sugeno fuzzy inference engine with 27 rules; output: Dynamic Priority)

The input and output variables are mapped into fuzzy sets using appropriate membership functions. The shape of the membership function for each linguistic term is determined by the expert. Adjusting these membership functions in an optimal mode is very difficult; however, there are some techniques for adjusting membership functions [7, 8]. These techniques cannot be covered in this paper and can be further studied in a separate paper.
The membership functions for the inputs and output are given below.

Membership function for DP (Dynamic Priority):
Type: triangular; Range: 1-5; Very Low: [-1, 0, 1], Low: [0, 1.5, 3], Medium: [2, 3, 4], High: [3, 4, 5], Very High: [4, 5, 6]

Membership function for SP (Static Priority):
Type: triangular; Range: 1-5; Low: [-2, 0, 2], Medium: [1, 2.5, 4], High: [3, 5, 7]

Membership function for RT (Remaining Time):
Type: triangular; Range: 0-5; Extremely Short: [-2, 0, 2], Very Short: [1, 2.5, 4], Short: [3, 5, 7]

Membership function for RR (Response Ratio):
Type: triangular; Range: 0-10; Short: [0, 1, 2], Medium: [2, 5, 8], Long: [5, 10, 15]

               Figure 2: Membership function for Static Priority
               Figure 3: Membership function for Remaining Time
               Figure 4: Membership function for Response Ratio
               Figure 5: Membership function for Dynamic Priority

Twenty-seven rules are formulated and a Sugeno-type fuzzy inference system is built. Some of the rules are listed below:
• If the static priority is 'low' and the remaining time is 'extremely short' and the response ratio is 'long', then the dynamic priority is 'very high'.
• If the static priority is 'low' and the remaining time is 'short' and the response ratio is 'short', then the dynamic priority is 'very low'.
• If the static priority is 'medium' and the remaining time is 'extremely short' and the response ratio is 'long', then the dynamic priority is 'very high'.
• If the static priority is 'medium' and the remaining time is 'short' and the response ratio is 'short', then the dynamic priority is 'medium'.
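For concreteness, the sketch below (added for illustration; it is not the authors' MATLAB FIS) evaluates two of the rules above in a hand-rolled, zero-order Sugeno style in Python, using the triangular membership parameters defined in this section. Treating the rule consequents as crisp values on the 1-5 dynamic-priority scale is a simplifying assumption.

    # Illustrative Sugeno-style inference for two of the rules above (not the
    # authors' FIS); crisp consequents on the 1-5 DP scale are an assumption.
    def trimf(x, a, b, c):
        # triangular membership function with vertices a <= b <= c
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # membership functions taken from the definitions given in this section
    sp_low       = lambda sp: trimf(sp, -2, 0, 2)
    rt_ext_short = lambda rt: trimf(rt, -2, 0, 2)
    rt_short     = lambda rt: trimf(rt, 3, 5, 7)
    rr_long      = lambda rr: trimf(rr, 5, 10, 15)
    rr_short     = lambda rr: trimf(rr, 0, 1, 2)

    DP_VERY_HIGH, DP_VERY_LOW = 5.0, 1.0    # assumed crisp (zero-order Sugeno) outputs

    def dynamic_priority(sp, rt, rr):
        # firing strength of each rule = AND (min) over its antecedent memberships
        w1 = min(sp_low(sp), rt_ext_short(rt), rr_long(rr))   # rule: ... -> very high DP
        w2 = min(sp_low(sp), rt_short(rt), rr_short(rr))      # rule: ... -> very low DP
        if w1 + w2 == 0.0:
            return 0.0
        return (w1 * DP_VERY_HIGH + w2 * DP_VERY_LOW) / (w1 + w2)   # weighted average

    print(dynamic_priority(sp=1.0, rt=0.5, rr=9.0))   # only the first rule fires -> 5.0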






       Table I: Rule base for the Fuzzy Inference System

S. No. | Static priority | Remaining time  | Response Ratio | Dynamic priority
1.     | Low             | Extremely short | Short          | Very High
2.     | Low             | Extremely short | Medium         | Very High
3.     | Low             | Extremely short | Long           | Very High
4.     | Low             | Very short      | Short          | Very Low
5.     | Low             | Very short      | Medium         | Low
6.     | Low             | Very short      | Long           | High
7.     | Low             | Short           | Short          | Very Low
8.     | Low             | Short           | Medium         | Low
9.     | Low             | Short           | Long           | High
10.    | Medium          | Extremely short | Short          | Very High
11.    | Medium          | Extremely short | Medium         | Very High
12.    | Medium          | Extremely short | Long           | Very High
13.    | Medium          | Very short      | Short          | Medium
14.    | Medium          | Very short      | Medium         | Medium
15.    | Medium          | Very short      | Long           | Very High
16.    | Medium          | Short           | Short          | Medium
17.    | Medium          | Short           | Medium         | Medium
18.    | Medium          | Short           | Long           | High
19.    | High            | Extremely short | Short          | Very High
20.    | High            | Extremely short | Medium         | Very High
21.    | High            | Extremely short | Long           | Very High
22.    | High            | Very short      | Short          | High
23.    | High            | Very short      | Medium         | High
24.    | High            | Very short      | Long           | Very High
25.    | High            | Short           | Short          | High
26.    | High            | Short           | Medium         | High
27.    | High            | Short           | Long           | Very High

       Table II: Structure of the Process Control Block

Process
  Process name = "bash"
  Process identifier = 100
  State = Ready
  CPU reserve
  {
    CPU Burst Time =
    CPU Remaining Time =
    Priority =
    Waiting time =
    Arrival Time =
    Start time =
    ....
  }

               Figure 6: Rule view of FISHRRN
               Figure 7: Surface view of FISHRRN

          4. Proposed Algorithm

The parameters of a process are stored in a table called the Process Control Block (PCB). Each process has its own PCB. The structure of the Process Control Block is given in Table II. The parameters remaining time Rti, static priority spi, dynamic priority dpi and waiting time wti of process Pi are stored in Process Control Block PCBi. The Response Ratio RRi of a process Pi may be calculated by dividing the sum of its waiting time and service time by its service time. The proposed algorithm is as follows:
    i.   For each process Pi in the ready queue, fetch its parameters Rti, spi and wti from PCBi, calculate RRi, give them as input to the FIS, and then set dpi to the output of the FIS.
    ii.  Schedule the process Pi with the highest value of dpi for execution.
    iii. If the scheduled process finishes and no new request arrives, go to step ii.
    iv.  If a new request arrives, go to step i.
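A compact sketch of the selection step in the algorithm above is given below (added for illustration; it is not the authors' implementation). It assumes a callable fis(sp, rt, rr) such as the Sugeno sketch given earlier, simple PCB-like records, and the response ratio computed as (waiting time + service time) / service time as described above; using the remaining time as the service time is a simplification.

    # Illustrative sketch of the Fuzzy HRRN selection step (not the authors' code).
    from dataclasses import dataclass

    @dataclass
    class PCB:
        name: str
        static_priority: float   # sp
        remaining_time: float    # rt (also used as the service time here)
        waiting_time: float      # wt

    def response_ratio(p: PCB) -> float:
        # RR = (waiting time + service time) / service time
        return (p.waiting_time + p.remaining_time) / p.remaining_time

    def pick_next(ready_queue, fis):
        # step i: evaluate the FIS for every ready process; step ii: pick the max dp
        best, best_dp = None, float("-inf")
        for p in ready_queue:
            dp = fis(p.static_priority, p.remaining_time, response_ratio(p))
            if dp > best_dp:
                best, best_dp = p, dp
        return best

    ready = [PCB("bash", 1.0, 0.5, 4.0), PCB("edit", 3.0, 4.0, 1.0)]
    # any FIS-like callable works; a trivial stand-in is used here
    print(pick_next(ready, lambda sp, rt, rr: rr - rt).name)    # -> "bash"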

5. Performance Comparison

For comparing the performance of basic HRRN scheduling and Fuzzy HRRN, we ran a simulation on 1000 processes in groups of ten. We assumed random burst times and random arrivals, with the maximum burst time of a process not exceeding 10 ms. The throughput and average waiting time of the processes in a group were computed, and then the average was taken over all groups to give the average throughput and average waiting time. The performance is shown in the column chart in the figure below. The chart shows that the average waiting time for Fuzzy HRRN is lower than that of basic HRRN, and that the throughput of our proposed Fuzzy HRRN algorithm is better than that of basic HRRN.




Figure 8: Performance comparison of HRRN and Fuzzy HRRN scheduling
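A sketch of the simulation set-up described above is shown below (added for illustration; it is not the authors' harness). It generates groups of ten processes with random arrivals and burst times of at most 10 ms and reports the average waiting time of a non-preemptive scheduler driven by an arbitrary selection function, here classic HRRN; the fuzzy selection sketched in Section 4 could be plugged in the same way.

    # Illustrative simulation harness (not the authors' code): average waiting
    # time of a non-preemptive scheduler driven by a selection function.
    import random

    def simulate(select, groups=100, per_group=10, max_burst=10.0, seed=1):
        random.seed(seed)
        group_waits = []
        for _ in range(groups):
            pending = [{"arrival": random.uniform(0.0, 20.0),
                        "burst": random.uniform(0.1, max_burst)}
                       for _ in range(per_group)]
            t, waits = 0.0, []
            while pending:
                arrived = [p for p in pending if p["arrival"] <= t]
                if not arrived:                       # CPU idles until the next arrival
                    t = min(p["arrival"] for p in pending)
                    arrived = [p for p in pending if p["arrival"] <= t]
                p = select(arrived, t)                # pick the next process to run
                waits.append(t - p["arrival"])
                t += p["burst"]
                pending.remove(p)
            group_waits.append(sum(waits) / len(waits))
        return sum(group_waits) / len(group_waits)

    def hrrn(ready, t):
        # classic HRRN: highest (waiting time + burst) / burst
        return max(ready, key=lambda p: ((t - p["arrival"]) + p["burst"]) / p["burst"])

    print("HRRN average waiting time:", simulate(hrrn))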

                        6. Conclusion

Our proposed algorithm, Fuzzy HRRN, combines the benefits of shortest remaining time next (SRTN) and HRRN scheduling with fuzziness. The proposed algorithm gives better throughput and a lower average waiting time than its non-fuzzy counterpart, HRRN.

                       REFERENCES

[1] Silberschatz, A., Peterson, J. L., and Galvin, P. B., Operating System Concepts, Addison Wesley, 7th Edition, 2006.
[2] Andrew S. Tanenbaum and Albert S. Woodhull, Operating Systems Design and Implementation, Second Edition, 2005.
[3] William Stallings, Operating Systems: Internals and Design Principles, 5th Edition, 2006.
[4] Li-Xin Wang, A Course in Fuzzy Systems and Control, Prentice Hall, August 1996.
[5] Mamdani E. H., Assilian S., An experiment in linguistic synthesis with a fuzzy logic controller, International Journal of Man-Machine Studies, Vol. 7, No. 1, 1975.
[6] Sugeno, M., Industrial Applications of Fuzzy Control, Elsevier Science Inc., New York, NY, 1985.
[7] Jang, J.-S. R., ANFIS: Adaptive-Network-based Fuzzy Inference Systems, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 23(3), 685, May 1993.
[8] Simon D., Training fuzzy systems with the extended Kalman filter, Fuzzy Sets and Systems, Vol. 132, No. 2, 1 December 2002.
[9] Bashir Alam, M. N. Doja and R. Biswas, "A General Fuzzy CPU Scheduling Algorithm", International Journal of Fuzzy Systems and Rough Systems (IJFSRS), Vol. 1, No. 1, Jan.-June 2008.








   Experiences and comparison study of EPC & UML
          for Business Process & IS modeling
   Md. Rashedul Islam                       Md. Rofiqul Islam               Md. Shariful Alam                      Md. Shafiul Azam
  School of Business and                  School of Business and           School of Business and             Dept. of Computer Science
       Informatics                              Informatics                      Informatics                       and Engineering
    Högskolan i Borås                       Högskolan i Borås                Högskolan i Borås                 Science and Technology
      Borås, Sweden                           Borås, Sweden                    Borås, Sweden                      University, Pabna
  rashed.cse@gmail.com                   rana_aiub01@yahoo.com             shajib004@yahoo.com                    Pabna, Bangladesh
                                                                                                               shahincseru@gmail.com


Abstract— Business process modeling is an approach for analyzing and integrating the processes of a business. Using business process modeling we can represent both the current and the future processes of a business, organization or enterprise. Business process modeling is therefore a prerequisite and an essential step when implementing a business or building any automation system. In this paper we present our experience with business process modeling for an organization. The paper gives a detailed description of business process modeling and of the two main modeling languages, EPC and UML, together with their uses, advantages and disadvantages. We also report our experience with these modeling languages and give a detailed comparison between them from the point of view of business process modeling and information system implementation.

Keywords- Business Process Modeling, Petri net, Event-driven Process Chain (EPC), Unified Modeling Language (UML), Process-oriented modeling, Object-oriented modeling.

I. INTRODUCTION

Business process modeling and the successful development and implementation of a business are interrelated. Before thinking about a business information system, or any information system supporting a business process, the first thing to consider is the business process model. A business process model shows the whole scenario of the business to all of the people involved and also helps to improve the performance of the business process.

Every business or organization consists of a large number of interlinked core processes, and every core process has many sub-processes and many interrelated internal and external objects. Modeling the business is therefore very important for a successful business and IT implementation.

Many modeling approaches have been developed for defining a model of a business with respect to its organizational and information system aspects. Different business processes flow in different ways and involve different types of people, so different modeling approaches are better suited to different business processes and to different levels of people. Process-oriented modeling mainly describes the business process from the business perspective and is easy for business people to understand.

Object-oriented modeling, on the other hand, is closely related to implementation. Nowadays the two approaches are moving closer together, towards business process modeling that is efficient both for analyzing the business and for implementing the business information system.

In this paper we discuss and compare the two main business process modeling languages: EPC, which is mainly process-oriented, and UML, which is mainly object-oriented. Both EPC and UML have enough tools to represent any business process, and both are useful and easy to understand for the people involved. Nevertheless, every modeling language has its advantages, disadvantages and difficulties. While comparing the two languages we found difficulties in each of them; some aspects are good and understandable for one level of people and others for another level.

II. BUSINESS PROCESS MODELING

Business Process Modeling (BPM) is also called Business Process Discovery (BPD). It is an approach by which we can analyze and integrate the business process, and with it we can represent both the current and the future processes of a business, organization or enterprise. Business process modeling captures the whole business model and aims at maximum business performance. Its main outcomes are added value for the customer and reduced costs for the company, which together increase profit.

A business process model is commonly a diagram representing a sequence of activities and information flows. It represents the business from start to end by means of events, actions and links or connection points, and it covers both IT processes and people processes (a minimal sketch of such a model is given after the list below). There are two main types of business process model:

1. Baseline model (present)
2. To-be model (future)
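The structure just described, a start-to-end sequence of events, actions and links, can be made concrete with a small illustrative sketch. The following Python fragment is not part of any modeling tool discussed in this paper; it is only a minimal, assumed representation of a baseline ("as-is") model and a to-be model as ordered lists of events and activities, reusing example names from the customer-order process discussed later.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Step:
        kind: str   # "event" or "activity"
        name: str

    @dataclass
    class ProcessModel:
        name: str
        steps: List[Step] = field(default_factory=list)

        def add(self, kind: str, name: str) -> "ProcessModel":
            self.steps.append(Step(kind, name))
            return self

    # Baseline ("as-is") model: a start-to-end sequence of events and activities.
    baseline = (ProcessModel("Order handling (baseline)")
                .add("event", "customer order received")
                .add("activity", "prepare order confirmation")
                .add("event", "order confirmation prepared"))

    # To-be model: the same process with an additional planned activity.
    to_be = (ProcessModel("Order handling (to-be)")
             .add("event", "customer order received")
             .add("activity", "check customer status")
             .add("activity", "prepare order confirmation")
             .add("event", "order confirmation prepared"))

    for model in (baseline, to_be):
        print(model.name, "->", " / ".join(f"{s.kind}: {s.name}" for s in model.steps))

Comparing the two lists side by side is exactly the kind of baseline/to-be comparison that a graphical business process model supports, only without the notation discussed in the following sections.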




A. Some common issues with business process modeling [1]
- Managing collaborative activities within business process models that are derived from the "transformational" approach
- Canonical models and variability management
- Notations are stabilizing, but methods are lagging behind
- Process decomposition
  - Some rules are available, but they are methodology dependent
  - Decomposition becomes more important when coupled with business process execution and Web Services
- Managing requirements from business processes, to use cases, to systems
  - Is the use-case-driven approach still needed?
- IT enablement focus: Human Interaction Management tends to be relegated to forms-driven approaches

B. Advantages of Business Process Modeling
There are many kinds of advantage of business process modeling. They can be described from different aspects as follows:

a) Formalize the existing process and spot needed improvements
Using business process modeling, an analyst can build an overall structure of the whole business in a graphical view. BPM helps to represent all processes, data sources and internal and external objects in a form that is understandable for any business person. It also helps to build a business model that can be adjusted to future needs.

b) Facilitate an automated, efficient process flow
BPM supports process parallelism. A business has different parallel activities that can be performed independently; using BPM we can model these parallel activities and make the process flow efficient.

c) Increase productivity and decrease head count
In a business, a suitable model can increase productivity, and appropriate resource allocation reduces cost. Using a business process model we can design a suitable process with resource allocation that helps to increase productivity and decrease the number of people needed.

C. Business Process Modeling Languages
There are mainly four business process modeling approaches:

a) Petri net
A Petri net is a mathematical modeling language for the description of distributed systems. A Petri net is essentially a directed bipartite graph in which all transitions and places are represented by nodes.

b) Event-driven Process Chain (EPC)
An Event-driven Process Chain (EPC) is mainly a flowchart-type modeling language. The EPC is very useful for business process modeling and is easy for business people to understand.

c) Unified Modeling Language (UML)
The Unified Modeling Language (UML) is a general-purpose, object-oriented modeling language. Using UML we can create visual models for building IT systems with graphical notation techniques.

d) Dynamic Essential Modeling (DEMO)
Dynamic Essential Modeling (DEMO) is mainly a communication-centered organizational modeling approach. DEMO is helpful for the detailed specification of the behavior of the participating actors.

In this paper we describe and compare the Event-driven Process Chain (EPC) and the Unified Modeling Language (UML) in detail.

III. REFLECTION ON THE EPC MODELING LANGUAGE

The Event-driven Process Chain (EPC) is a widely used approach for modeling business processes. The EPC provides comprehensive means for modeling several aspects of a business process [7]. The EPC modeling approach [2, 3] was developed for modeling business processes within the ARIS framework.

A whole business process contains many business functions, events and activities. Using the EPC model we can model the whole business process as sequences of events triggering business functions, where the business functions are themselves the results of other functions, apart from the initial events that trigger the whole process. To represent business process decisions and to expand the complex control flow, the EPC control structure provides the connectors "AND", "OR" and "XOR". This set of elements is sufficient to describe processes, since some authors define a process as a succession of events and functions [10, 11]. A connector may be used for a split or a join, and the elements before and after a connector are events or functions; the three connectors therefore give twelve possibilities. Several standard software packages (e.g. SAP) have used the EPC for documenting the business processes supported by the software [4].

A. ARIS – ARchitecture of Integrated Information Systems
ARIS (Scheer 2000) [5] stands for ARchitecture of Integrated Information Systems and denotes a methodology for modeling business processes. EPCs are not a new method in essence, as they contain elements of Petri nets and GERT (Scheer 2002) [6]. The ARIS methodology, or its core technique EPC, has often been confused with software tools such as the ARIS Toolset (IDS Scheer 2003) [7]; to give one example, ARIS is labeled as a "modeling tool" in Vernadat (2002) [8]. This confusion is probably augmented by the success of IDS Scheer, which is considered a leader in the BPM sector (Gartner Group 2002) [9].
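Before turning to the ARIS views below, the event/function alternation and the split/join connectors described at the beginning of this section can be illustrated with a small sketch. The following Python fragment is only an assumed, minimal encoding of an EPC as a graph of events, functions and connectors; it is not taken from ARIS or from any EPC tool, and the node names are illustrative only.

    # Minimal, assumed encoding of an EPC: events, functions and
    # connectors (AND/OR/XOR) as nodes, control flow as directed edges.
    nodes = {
        "e1":   ("event",     "customer order received"),
        "f1":   ("function",  "check customer status"),
        "xor1": ("connector", "XOR"),   # split: exactly one outgoing path is taken
        "f2":   ("function",  "prepare order confirmation"),
        "f3":   ("function",  "reject order"),
        "e2":   ("event",     "order confirmation prepared"),
        "e3":   ("event",     "order rejected"),
    }
    edges = [("e1", "f1"), ("f1", "xor1"), ("xor1", "f2"), ("xor1", "f3"),
             ("f2", "e2"), ("f3", "e3")]

    def successors(node):
        """Return the nodes directly reachable from `node` via control flow."""
        return [dst for src, dst in edges if src == node]

    # A connector with several outgoing edges acts as a split; one with several
    # incoming edges acts as a join. Along every path, events and functions
    # alternate, with connectors placed between them.
    for node_id, (kind, label) in nodes.items():
        print(f"{node_id:5} {kind:9} {label:28} -> {successors(node_id)}")

The same structure could be built with "AND" or "OR" in place of "XOR", and each connector can appear as a split or as a join after either an event or a function, which is where the twelve combinations mentioned above come from.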
    b) Event-driven Process Chain (EPC)




The Event-driven Process Chain (EPC) diagram is the core modeling technique in ARIS. ARIS divides a process into different aspects or views, such as the function view, the data view, the organization view and the resource view, and the EPC technique links all of these views together in the control view. The different views are described in the following paragraphs.

a) Descriptive views
As mentioned above, the concept of ARIS lies in reducing complexity by splitting the model into different views. Figure 1 [11] shows an excerpt of a business process model built with this concept of separate but linked views. The example is a computer-supported business process for handling customer orders; it includes processes, activities/functions, events, users, organizational units and IT resources, as well as the relationships between these components that describe the whole business flow.

Figure 1: Business Process Model with different views

In the diagram, the whole model and its components are divided into individual views in order to reduce the complexity of the business model. The components are assigned to the individual views on the basis of the relationships between components and actions. The criterion for the separation is that the relationships between components in the same view are relatively strong, while the relationships between components of different views are relatively weak. The views of this model, according to ARIS, are described below.

b) Data view
The data view generally contains the events and statuses. Events are informational objects that represent data; in the example, the events are "customer order received" and "order confirmation prepared". Statuses such as "customer status" and "article status" are also represented by data. In the data view, the description of the detailed requirements plays an important role in developing the information system.

c) Function view
The function view describes the activities performed in a business process, as well as the overall relationships between functions. Sometimes a function is complex; to reduce the complexity it can be broken down. In the example diagram, "Order confirmation", "Order tracking" and "Production planning" are the components of the function view.

d) Organization view
The organization view contains the relationships between users and organizational units. Organizational units are formed by users in order to perform specific tasks. Human beings are able to perform complex social actions, such as running an enterprise, but such complex actions can be broken down into manageable units. In the example diagram, the contents of the organization view are "department" and "user".

e) Resource view
The resource view is made up of the resources of the organization, such as information technology components like computers and data servers.

f) The control view: EPC
The control view is essentially the combined view of all the other views. Initially, all views are developed separately in order to reduce complexity, but the control view links functions, organizational units and data together. Here functions, events, information resources and organizational units are connected in a common context according to the process flow, and this combined control view is the complete EPC. In the extended form of an EPC we can link additional elements, such as organizational units, data, products or services, to the functions of the EPC diagram.

B. Purpose of EPC
Before considering the purpose of the EPC, consider the purpose of business modeling in general: its main purpose is to identify all core and sub-processes and to build a suitable model of the way the business process works. Building such a business model is also the main purpose of the EPC modeling language. In addition, there are some other important purposes:
- Develop a business process model that is useful for representing an outline of the whole business.
- Provide a graphical modeling method that is easy for users to understand.
- Gather requirements in the beginning phase.
- Capture the flow of events in the business domain.
- Describe the organizational aspects of the business information systems in detail, and provide a starting point for identifying the actors of the system.
- Distinguish between functions and processes.
- Identify software elements by analyzing the business areas related to business data and functions.
- Make the whole organization more productive and maximize resource utilization.

C. Advantages of EPC
As mentioned before, the EPC is a widely used modeling language with many features by which we can represent any business model easily, and it has many advantages. After a thorough evaluation of a number of methodologies, techniques and tools, EPC was selected for modeling the hospital case.



The main reasons for this choice are the following:
- One of the main advantages of the EPC is that it is both powerful and easily understandable for end users.
- It communicates well across the different functions and processes.
- The EPC is not overly technical.
- EPCs match the requirements posed with respect to ease of understanding by non-specialists in modeling.
- EPCs can offer a multi-level view of the process, since a function in an EPC can be described in more detail by means of another EPC.
- EPCs offer a consistent, formally supported model (see the comments above) that can ensure an efficient simulation of the processes.
- The EPC provides workflow-oriented modeling connected to actors, documents and information.
- EPCs can be used to describe the workflows of use cases accurately, giving a clear idea of the actors and functions in the use cases.
- In EPCs there is an option for translating an activity diagram into an EPC and vice versa.
- EPCs are often used for capturing and discussing business processes with people who have never been trained in any kind of modeling technique, e.g. with workers on the shop floor.
- Although EPCs can be understood even by briefly trained personnel, the same models can be refined and used for the requirements definition of an information system.

D. Difficulties of EPC
We have not found much difficulty in using the EPC. There are, however, some minor limitations of the EPC language:
- A simple EPC diagram can be analyzed easily, but a complex graph must be analyzed carefully before implementation, and this is difficult. In a complex diagram a deadlock can sometimes occur when the process is executed according to the diagram, even though the graph is semantically correct according to the definition of the EPC.
- Looping is another possible problem.
- For simulating big models the OR operator has been forbidden.

E. Ambiguity of the OR connector
In the EPC there are three logical connectors: "AND", "OR" and "XOR". Using "AND" and "XOR" causes no problems, but the "OR" connector introduces some ambiguities: which path should be followed, and at the joining end should we wait for one path, for both paths, and so on? The ambiguity of the OR connector has been discussed in different approaches and described in different ways [20]. Comment flags can be included with the "OR" connector, such as wait-for-all, first-come or every-time. In the wait-for-all approach, the joining "OR" connector waits for all paths opened by the starting "OR" split; in the first-come approach, the "OR" join triggers as soon as the first path completes.

F. Business Perspectives and Views
For forming an initial idea of and studying a process model, the EPC plays an important role from the business perspectives and views. For designing a complex system, separate perspectives have proven useful for differentiating its aspects. In business process modeling there are several perspective proposals for the EPC: data-oriented, application-oriented, function-oriented, organization-oriented and product/service-oriented perspectives.

IV. REFLECTION ON THE UML MODELING LANGUAGE

Modeling is an essential part of analyzing or engineering a business, and it is also an important prerequisite for developing an IT system. A complex business process is difficult to describe in textual form without modeling diagrams. The three key benefits of modeling are visualization, complexity management and clear communication.

Among the various modeling languages, UML is the most widely used all over the world. UML is a general-purpose visual language that is very useful for specifying, constructing and documenting complex systems [13]. Using UML we can build a specific model in an unambiguous and complete way.

The UML standard was approved by the OMG™ in 1997. In the past few years there have been improvements, and UML 2 is the major revision [12]. UML is mainly an object-oriented modeling language. From the developer's point of view, modeling a business with UML visualizations supports a common interpretation when exchanging ideas. UML is not a programming language, but it helps to interpret the business process in a way that is very helpful for building an IT system. The UML modeling method also covers the end users' views [14], and UML documents the project during software development.

A. System Architectural Views and UML Diagrams
A business process or IT system can be viewed from a number of perspectives. Many categories of participants are involved in a business process or IT system, such as business people, users, analysts, project managers and developers, in different ways and at different times over the project's life. From the architectural point of view, a system therefore has different viewpoints, each focused on a particular aspect of the system, such as:
1. Use case view
2. Design view
3. Process view
4. Implementation view
5. Deployment view
described in different way [20]. A comment flags can be




As Figure 2 illustrates, the architecture of a software system can be described by these five views.

Figure 2: Modeling System Architecture (from [15])

The UML has different diagrams covering the static and dynamic aspects/views of a system. The use case diagram represents the use case view, the class diagram represents the design and process views, the component diagram represents the implementation view, and the deployment diagram is responsible for the deployment view. In all of these views the dynamic aspects are represented by the behavioral diagrams of UML, such as the interaction diagrams, state diagrams and activity diagrams. Below we briefly describe the different diagrams, grouped into structural diagrams and behavioral diagrams.

a) Structural diagrams
- Class diagram: The class diagram is the most important diagram and the main part of object-oriented systems. A class diagram shows a set of classes together with their relationships.
- Component diagram: The component diagram mainly represents the relationships and dependencies among software components. A component diagram includes components and component packages.
- Deployment diagram: The deployment diagram of UML represents the configuration of the run-time processing nodes and the components that live on them.

b) Behavioral diagrams
- Use case diagram: The use case diagram of UML represents all use cases/functions of a system and the interaction of the users with the system; it represents the system from the user's perspective. The elements of this diagram are actor, use case, association and generalization.
- Interaction diagram: Interaction diagrams describe how a set of objects collaborates in some behavior. There are two types of interaction diagram: the sequence diagram and the collaboration diagram.
- State diagram: Just like interaction diagrams, state diagrams capture the behavior of a system over time. The elements of a state diagram are states and transitions.
- Activity diagram: The activity diagram models the workflows and processes of a business or system.

The use case diagram and the activity diagram are described in more detail below.

B. Facts about the Use of the Use Case Diagram
Use case diagrams represent the interaction between the objects of the system and the organizational units, but they do not represent the process flow of a business. We can get an idea of the activities and of the actors of those activities, but we cannot get any sequence of activities or the conditions attached to them. The use case model defines the relationships and characteristics between business activities and participants outside the focused business, e.g. patients, private transport, medicine suppliers, etc. The object model focuses on the internal business processes; the object systems include organizational units, work units, workers and entities.

C. Facts about the Use of the Activity Diagram
The activity diagram is one of the most used and most useful UML diagrams for modeling the dynamic views/aspects of a business process or system. It is important for modeling processes and workflows, and activity diagrams can be applied as a business modeling method [17, 18]. An activity diagram can also describe the sequencing and branching of different business processes in an organization, together with the organizational units, very clearly. It can describe components such as activities, relationships for the control flow, logical connectors, etc. In an activity diagram, swimlanes represent the organizational units and group the components. The UML activity diagram also has strengths and weaknesses: it supports parallel behavior, but its great disadvantage is that it does not make clear the links between activities and objects. Nevertheless, the activity diagram is useful in situations such as the following.



D. Modeling an operation
We can use activity diagrams to model an operation of a business process that is mentioned in a use case or a class. It is possible to model all operations together with the relationships among them.

Modeling a workflow
The activity diagram is the best way to model workflows across an organization that involve many actors or business organizations. We can focus on the activities and on the actors that collaborate with the system.

E. Why UML
In the paragraphs above we have already described some important features of UML that indicate why we use UML for modeling. Some further points are:
- UML is the most widely used technical object-oriented design language
- UML is suitable for business modeling
- UML is more structured
- UML is more descriptive
- UML has more graphical diagrams
- UML is more communicative for programmers
- With UML it is easier to avoid deadlocks
- UML can show the actor responsible for executing a specific function
- UML is language independent
- UML can formally handle big projects
- UML is effective for modeling large and complex software business processes
- UML is simple to learn and is supported by more tools

F. Advantages of UML
- UML is more structured and has better attributes
- UML captures the flow of activities with different levels of attributes
- It supports the development process
- Simplicity, and it states clearly where the relationships lie
- Top-down methods are available
- UML is easy to redraw at any time and saves time during re-engineering
- It is easy to understand for both programmers and business people
- It is easy to translate into code
- No mathematical representation is required
- UML is best suited for large cases

G. Disadvantages of UML
- It is necessary to specify the system requirements
- It lacks a way of showing the logical relationships between different functions and processes
- It lacks a way of showing data flows and workflows
- Modeling the dependency of two parallel tasks is difficult
- Modeling a loop such as the care cycle until the patient recovers is difficult
- In the use case diagram it is difficult to trace the iteration and the sequence of execution
- The properties of the different items cannot be displayed in the diagrams

V. COMPARISON BETWEEN THE EPC AND UML MODELING LANGUAGES

From a general point of view, both EPC and UML are very useful for modeling business processes, and each is efficient in different aspects. In general we can say that the EPC is a process-oriented modeling language and UML is an object-oriented modeling language; a more comprehensive discussion can be found in [18]. The EPC is relevant for business process modeling as a whole, while each UML diagram has some aspects that are also relevant for business process modeling. However, each diagram type has a certain focus for modeling business processes and for building the information system.

A. Relationship between the EPC and the use case diagram
A use case diagram is more concerned with the interactions with a software system and less with business-related functionality. The connections between a use case diagram and an EPC are the following:
1. A use case diagram can specify a function of an EPC. A function from an EPC diagram can be represented by several use cases, e.g. check status, check product catalogue, edit text and print text.
2. In an EPC a process can be described as an underlying sequence; it can also be described verbally in a more precise way.
3. In an EPC, the people or objects involved in a process are interpreted as organizational units, whereas in a use case diagram they are interpreted as actors. In all cases we can use the same roles in both diagrams.
4. An EPC diagram describes a sequence over the whole process. In a use case diagram we can represent all use cases/functions and actors, but there is no process sequence.
5. Because an EPC has a process sequence, it has start and end events; a use case diagram has no sequence and therefore no start and end positions.

In an EPC diagram the organizational units are drawn with the ellipse shape in different places, each attached to the corresponding process step, so the organizational units are scattered over the diagram. In a use case diagram, all organizational units are represented by actors, and all use case functions are connected to actors; for example, "refer patient" is linked with the two actors "general practitioner" and "patient".

Figure 3: Organizational units in EPC and Use case diagram
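The correspondence described in this subsection, where EPC functions map to use cases and EPC organizational units map to actors, can be sketched as a simple mapping. The following Python fragment is only an assumed illustration using example names taken from the text (e.g. "refer patient", "check status"); it does not come from any EPC or UML tool.

    # Assumed, minimal illustration: deriving a use case view from EPC elements.
    # EPC side: functions with the organizational units that perform them.
    epc_functions = {
        "refer patient":     ["general practitioner", "patient"],
        "transport patient": ["male nurse"],
        "check status":      ["physician"],
    }

    # Use case side: every EPC function becomes a use case, and every
    # organizational unit attached to it becomes an actor of that use case.
    actors = sorted({unit for units in epc_functions.values() for unit in units})
    use_cases = {function: units for function, units in epc_functions.items()}

    print("Actors:", ", ".join(actors))
    for use_case, involved in use_cases.items():
        print(f"Use case '{use_case}' <-> actors {involved}")

    # Note: the process sequence of the EPC (start event ... end event) is lost
    # in this view; a use case diagram carries no ordering information.

This also makes the main difference visible: the mapping keeps the roles and functions but discards the ordering of the process, which is exactly what points 4 and 5 above state.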



B. Relationship between the EPC and the activity diagram
The activity diagram and the EPC serve essentially the same purpose, so there is no need to use both the EPC and the UML activity diagram for modeling the same business process. It is also possible to translate EPC diagrams into activity diagrams and vice versa. Of course, if we translate an EPC diagram into an activity diagram we may lose some of the information in the EPC, because the EPC covers a wider range of information than the activity diagram. The UML activity diagram does not consider external flows for modeling objects.

C. Comparison between the EPC and the UML activity diagram
If we compare the EPC and the UML activity diagram for modeling business processes, there are several aspects under which we can view the correspondences and differences between the two approaches. We compare them under the three following main aspects:
1. Context
2. Exactness/ambiguity
3. Notation/terminology

a) Context
This aspect describes the context in which the EPC and the UML activity diagram were developed and are used. Both diagrams are used for modeling business processes, but they have different development contexts and are driven by different modeling approaches. There are two approaches to modeling a system:
- Process-oriented modeling. The processes inside the system are the main focus of process-oriented modeling. A process consists of sequences of events and activities/functions: an event triggers an activity/function, and an event is in turn the result of other functions, starting from the first events that trigger the whole process. Logical operators expand the control flow. The EPC follows the process-oriented approach for standard business process modeling; the basic EPC model can be extended by attaching information objects and organizational units to the process.
- Object-oriented modeling. The objects inside the system, by contrast, are the main focus of object-oriented modeling. There are different interrelated objects inside a system; these independent objects communicate with each other by exchanging messages, and each object has properties and exchanges messages through operations. The UML activity diagram follows the object-oriented modeling approach used for developing IT solutions such as enterprise information systems.

b) Exactness/ambiguity
In business process modeling there can be ambiguity both in the EPC and in the activity diagram; for example, there is a possibility of blocking with an implicit decision. Therefore, when designing a diagram with the EPC or the UML activity diagram it is necessary to be careful about exactness and ambiguity. From the analysis of the elements interacting with the process flow in an EPC diagram, the following ambiguities may arise:
- Conjunctions of start events. In EPC modeling a start event is considered a node without input edges, and similarly an end event is considered a node without output edges. Ambiguity exists if start events are found in the middle of the process.
- Deadlocks and loops. A simple EPC graph is easy to analyze, but a tool is needed to analyze a complex graph. A deadlock means that after the process starts running it becomes trapped after some time and is unable to reach the end states. Deadlocks may occur if logical connectors are mismatched; especially in complex graphs, there are different interpretations of the logical connectors and of the places where one connector is linked with another.

c) Notation/terminology
Comparing the EPC with the UML activity diagram, we can see that their concepts are similar but that they use different notation and terminology to represent them; some notations have no equivalent in the other diagram. In this discussion we try to show the differences in notation and terminology and to compare the symbols of one diagram with those of the other.

Both the EPC and the UML activity diagram share some common terminology, but they differ in the notation used to visualize processes and workflows. Figure 4 shows a comparison of the notation/terminology of the EPC and the UML activity diagram.

Figure 4: Comparison view of EPC and activity diagram

We categorize the differences between the EPC and the UML activity diagram with respect to notation as follows.



- Functions and activity/action states
The functions of the EPC and the activity/action states of the UML activity diagram are the active elements of the process; they represent the work performed, by the organizational units in the EPC and activity diagram and by the actors in the use case diagram, with respect to the process.

- Events
In an EPC diagram an event is a result of one function and triggers the next function. The activity diagram is based on the state diagram, and there are no corresponding events in the activity diagram.

- Start and end state/event
An EPC diagram starts with an event that has no input and ends with an event that has no output. An activity diagram starts with a filled black circle and ends with a black circle surrounded by another circle.

Figure 5: Starting and ending of EPC and activity diagram

- Data/information unit
In an EPC diagram we can show a data/information unit using a rectangle, but in an activity or use case diagram there is no tool for representing external data.

- Control flow and transition
The control flow of the EPC and the transitions of the activity diagram are similar. In EPC modeling the control flow is used to represent the process chain, that is, one event triggers a business function which is the result of another function. In the activity diagram, transitions show the change of state over time, and these are based on the state diagram.

- Logical connectors
In EPC modeling, logical connectors permit the splitting of the control flow; the activity diagram uses transitions to perform this splitting. Both diagrams use similar branch/merge constructs for splitting the control flow and making decisions. For decisions the EPC uses the logical "XOR" connector, while the activity diagram uses the decision diamond for the same operation. For parallel activities the EPC uses the logical "AND" connector, while the activity diagram uses the synchronization bar. The main difference between the EPC and the UML activity diagram is that the activity diagram has no notation for the "OR" operation offered by the EPC.

Figure 6: Logical connector in EPC and Activity diagram

- Organizational unit and swimlane
In an EPC diagram, an organizational unit is attached to the function for which it is responsible for the relevant business task. In the activity diagram, swimlanes are used to separate the activities belonging to their respective organizational units. Compared with the EPC, the activity diagram has some problems here: swimlanes can represent the organizational units, but sometimes they are not enough to represent all organizational relationships, for example "responsible for", "provides support for", "must be informed about the result of" and "must approve of".
In the EPC diagram of the patient admission module, the organizational units are drawn with the ellipse shape in different places, each attached to the corresponding process step, for example "transport patient" by the male nurse, "fill form" by the physician and "update ward book" by the nurse. In the activity diagram all organizational units are organized into swimlanes, and the different functions are placed within the respective swimlanes; for example "forward admission document", "update ward book" and "enter in PC" are placed in the nurse's swimlane. When a function is performed by two or more organizational units it is very difficult to manage it with swimlanes in the activity diagram.

- Iteration
The EPC diagram has no iteration notation, whereas the activity diagram supports an iteration notation.

VI. CONCLUSION

The purpose of business process modeling is to give an overall view of a business and its workflow. For modeling the hospital case we have discussed EPC and UML and identified many advantages and disadvantages of each language. While creating the modeling diagrams of a business or organization we also found some difficulties and ambiguities. We have tried to discuss these languages and the modeling process in a way that helps to form a concept of the business model as well as of the modeling languages. These concepts will help to continue research on business process modeling and to model any business process.



REFERENCES
[1] Balbir Barn, "Business Process Modelling", Feb. 2007.
[2] http://www.jisc.ac.uk/media/documents/programmes/eframework/process-modelling_balbir_barn.pdf
[3] Keller, G.; Nüttgens, M.; Scheer, A.-W.: Semantische Prozeßmodellierung auf der Grundlage "Ereignisgesteuerter Prozeßketten (EPK)", Veröffentlichungen des Instituts für Wirtschaftsinformatik, Heft 89, Saarbrücken, 1992.
[4] URL: http://www.iwi.uni-sb.de/public/iwihefte/heft089.zip
[5] Nüttgens, M.: Event-driven Process Chain (EPK) – Some Links and Selected Publications, 1997. URL: http://www.iwi.uni-sb.de/nuettgens/EPK/epk.htm
[6] Keller, G.; Meinhardt, S.: SAP R/3-Analyzer: Optimierung von Geschäftsprozessen auf der Basis des R/3-Referenzmodells, Walldorf, 1994.
[7] Scheer, A.-W.: ARIS: Business Process Modelling, 3rd edition. Springer, Berlin, 2000.
[8] Scheer, A.-W.: "ARIS – From the vision to practical process control", in Business Process Excellence, Springer, Berlin, 2002, pp. 1-14.
[9] IDS Scheer: ARIS Toolset, version 6.2, 2003.
[10] Vernadat, F.: "Enterprise modeling and integration (EMI): Current status and research perspectives", Annual Reviews in Control 26, 2002, pp. 15-25.
[11] Gartner Group: Gartner Group Report, 2002. Available at http://www.gartner.com.
[12] Gulledge, T.; Sommer, R.: "Business process management: public sector implications", Business Process Management Journal 8, 2002, pp. 364-376.
[13] Scheer, A.-W.: Business Process Engineering. Reference Models for Industrial Enterprises. Springer-Verlag, 1995.
[14] Unified Modeling Language, http://www-01.ibm.com/software/rational/uml/, accessed 23-10-2010.
[15] Booch, G.; Rumbaugh, J.; Jacobson, I.: The Unified Modeling Language Reference Manual. Addison Wesley, 1999.
[16] Ambler, S. W.: "What's Missing from the UML? Techniques that can help model effective business applications", Object Magazine, 7 (1997) 8.
[17] Booch, G.; Rumbaugh, J.; Jacobson, I.: The Unified Modeling Language User Guide. Addison Wesley, 1999.
[18] Loos, P.; Allweyer, T.: Process Orientation and Object Orientation – An Approach for Integrating UML and Event-Driven Process Chains (EPC). Publication of the Institut für Wirtschaftsinformatik, Paper 144, Saarbrücken, 1998 (http://www.iwi.uni-sb.de/iwi-hefte/iwih144.ps).
[19] Fowler, M.; Scott, K.: UML Distilled – A Brief Guide to the Standard Object Modeling Language. 2nd ed., Reading, MA, 1999.
[20] Paech, B.: "On the Role of Activity Diagrams in UML", in P.-A. Muller, J. Bezivin (eds.), Proceedings of the Workshop <<UML>>'98, Beyond the Notation (Mulhouse, June 3-4, 1998), pp. 245-250.
[21] Loos, P.; Allweyer, T.: Process Orientation and Object-Orientation – An Approach for Integrating UML and Event-Driven Process Chains (EPC). Publication of the Institut für Wirtschaftsinformatik, Paper 144, Saarbrücken, March 1998 (http://www.tu-chemnitz.de/wirtschaft/wi2/home/loos/iwih144.pdf).
[22] Rittgen, P.: "EMC – A Modeling Method for Developing Web-based Applications", International Conference of the Information Resources Management Association (IRMA) 2000, Anchorage, Alaska, USA, May 21-24, 2000.
[23] Ferdian: "A Comparison of Event-driven Process Chains and UML Activity Diagram for Denoting Business Processes", Technische Universität Hamburg-Harburg, Arbeitsbereich Softwaresysteme, April 1st, 2001.
[24] van der Aalst, W.: "Formalization and verification of Event-driven Process Chains", Information and Software Technology 41, 1999, pp. 639-650.

AUTHORS PROFILE

Md. Rashedul Islam completed his Bachelor of Science (Hons.) in Computer Science and Engineering at the University of Rajshahi, Bangladesh, in 2006 and is now close to completing a Master's in Informatics at the School of Business and Informatics, Högskolan i Borås, Sweden. He is a Senior Lecturer in the Dept. of CSE, Leading University, Bangladesh, and has 5 years of IT development experience. He has two journal publications and five conference publications. His current research interests are parallel programming, signal and speech processing, management information, information system planning and software engineering.

Md. Rofiqul Islam completed his Bachelor of Science (Hons.) in Computer Engineering at the American International University of Bangladesh in 2006 and is now close to completing a Master's in Informatics at the School of Business and Informatics, Högskolan i Borås, Sweden. He has one journal publication. His current research interests are parallel programming, information system planning and software engineering.

Md. Shariful Alam completed his B.Sc. (Eng.) in Computer Science and Engineering at Chittagong University of Engineering and Technology, Chittagong, Bangladesh, in 2008 and is now close to completing a Master's in Informatics at the School of Business and Informatics, Högskolan i Borås, Sweden. He has almost 4 years of professional experience in the software development field. His current research interests are object-oriented technology, image processing, software engineering, system development philosophy, information systems and business processes.

Md. Shafiul Azam completed his Bachelor of Science (Hons.) in Computer Science and Engineering at the University of Rajshahi, Bangladesh, in 2006 and his M.Sc. in Computer Science and Engineering at the University of Rajshahi, Bangladesh, in 2008. He is a Lecturer in the Dept. of CSE, Science and Technology University, Pabna, Bangladesh. He has two conference publications. His current research interests are image processing, information system planning and software engineering.
  FACIAL TRACKING USING RADIAL
         BASIS FUNCTION
       P. Mayilvahanan, Research Scholar, Dept. of MCA, Vel's University, Pallavaram, Chennai, India
       Dr. S. Purushothaman, Principal, Sun College of Engineering & Technology, Kanyakumari – 629902, Tamil Nadu, India, Email: dr.s.purushothaman@gmail.com
       Dr. A. Jothi, Dean, School of Computing Sciences, Vel's University, Pallavaram, Chennai, India



ABSTRACT-- This paper implements facial tracking using a Radial Basis Function (RBF) neural network. There is no unique method that claims perfect facial tracking in video transfer. The local features of a frame are segmented, a ratio is computed based on a criterion, and the output of the RBF is used for transferring the necessary information of the frame from one system to another. A decision approach with a threshold is used to detect whether there is any change in the local object of successive frames. The accuracy of the result depends upon the number of centers. The performance of the algorithm in reconstructing the tracked object is about 96.5%, similar to that of the back-propagation algorithm (BPA), with reduced time and comparable quality of reconstruction.

Index Terms- Radial basis function (RBF); Back-propagation algorithm (BPA); Watershed algorithm; Motion estimation.

                 1.   INTRODUCTION

          In specific applications like video-conferencing and news telecasts, most of the image area is covered by a human face. Low bit-rate video transmission is possible by using 3D head models. Tracking algorithms are available to track the head in a video sequence, but there is no completely automatic system available for extracting a head model from video. If a three-dimensional head model can be extracted from the first frame (or the first few frames) of a video sequence, then it becomes possible to build extremely low bit-rate video coding systems for communicating head-and-shoulder scenes. Head models can be used for synthesizing views and facial expressions and for animating virtual characters.
          Shi and Tomasi [1] put forward the criterion of "good features" based on texture and used it in affine feature tracking. Parry et al. [2] introduced a region-based (formed by segmentation) tracking method, mainly updating the template by projecting it around the detected positions of the target and considering its overlap with the segmented local object. The tracking results show good performance when the camera moves towards the object. Yan Tong et al. [3] developed a general framework for region tracking which includes models for image changes due to motion, illumination and partial occlusion. They used a cascaded parametric motion model and a small set of basis images to account for shading changes, solved in a robust estimation framework in order to handle small partial occlusions. Gleicher [4] introduced difference decomposition to solve the registration problem in tracking, where the difference is a linear combination of a set of basis vectors. Sclaroff and Isidoro [5] used this idea for template registration in region-based non-rigid tracking, where the non-rigid deformation is represented in terms of eigenvectors of a finite element method. Photometric variation is considered, and a modified Delaunay refinement algorithm is used to construct a consistent triangular mesh for the region of the tracked object.
          Nguyen and Worring [6] contributed a contour tracking method incorporating static segmentation by the watershed algorithm. Their method utilized edge maps from motion (optic flow), intensity (watershed) and prediction (contour warping) to update the object contour, and was claimed to yield accurate and robust results. The idea of "active blob" discussed by Nguyen and Worring [6] addresses non-rigid deformation; the Delaunay triangulation of computer graphics is used to generate a mesh of the object region [5].
          In general, the following procedure is adopted for a face tracking algorithm [7]:
a. Wait for a face (or faces) to appear in the frame.
b. Enter initialization mode (wait for the face to be present for a predefined amount of time, to avoid paying attention to people who just happen to pass by).
c. Enter tracking mode and choose the closest face.
d. Track the face until it leaves the frame. To avoid losing track of the face due to minor head movements, leave tracking mode only when the tracked face disappears for a predefined amount of time.
e. Go to a.

          This paper proposes a region-based method for motion estimation undergoing object tracking. Tracking is performed by means of motion segmentation. The proposed method fully utilizes information of temporal motion and spatial luminance. Computation of the dominant motion of the tracked object is done by a robust iterative weighted least squares (IWLS) method. Static segmentation is incorporated to modify this prediction, where the warping error of each watershed segment and its rate of overlap with the warped template are utilized to help classify the watershed segments near the object border.
          The following procedure is used to implement the RBF for facial tracking:
• Read frame 1.
• Take a portion of frame 1 (eye/nose/lip etc.).
• Apply watershed segmentation.
• Find the mean of the segmented image.
• Train/test using the RBF.
• During testing, get the output of the RBF.
• Accordingly display the image in system 2.

     II. MATERIALS AND METHODS
     A. Materials
          An ANN with a supervised algorithm is used for computing the affine transformation taking place in the current frame with respect to the previous frame. This is achieved when there is a significant change in the output of the neural network, which indicates a change in position of the object in the current frame. To detect the change in position of the object, the network has to be trained in advance under supervised mode.

     B. Methods
          The Radial basis function (RBF) network is a supervised artificial neural network method [8]. The concept of a distance measure is used to associate the input and output pattern values, eq. (1). Radial basis functions are capable of producing approximations to an unknown function 'f' from a set of input data abscissae. The approximation is produced by passing an input point through a set of basis functions, each of which contains one of the RBF centers, multiplying the result of each function by a coefficient, and then summing the results linearly. For each function 'f', the approximation is essentially stored in the coefficients and centers of the RBF. These parameters are in no way unique, since for each function 'f' that is approximated, many combinations of parameter values exist. RBFs have the following mathematical representation:

          F(x) = Σ_{i=0}^{N-1} c_i Φ(||x − R_i||)                        (1)

     where R is a vector containing the centers of the RBF, and Φ is the basis (activation) function of the network. The implementation of the RBF for facial tracking is as follows:

Step 1: Apply the Radial Basis Function.
     No. of inputs = width of the facial parameter in number of pixels
     No. of patterns = No. of frames under implementation
     No. of centres = No. of patterns
          Calculate the RBF as
               RBF = exp(-X)
          Calculate
               G = RBF
               A = G^T * G
          Calculate
               B = A^(-1)
          Calculate
               E = B * G^T
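
The listing below is a minimal NumPy sketch, written for this description rather than taken from the paper, of the weight computation in Step 1 together with the final weight F = E * D of Step 2 that follows; the Gaussian form of the basis function, the use of the training patterns as centres, and the pseudo-inverse are assumptions of the sketch.

```python
import numpy as np

def rbf_train(patterns, targets):
    """Sketch of the RBF weight computation (Steps 1 and 2).

    patterns : (P, W) array, one row per frame; W is the width of the facial
               parameter in pixels (mean values of the watershed-segmented image).
    targets  : (P, K) array of desired outputs D, one row per frame.
    Returns the final weight matrix F and the centres used.
    """
    centres = patterns                                     # No. of centres = No. of patterns
    # Squared distance between every pattern and every centre
    d2 = ((patterns[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-d2)                                        # RBF = exp(-X), Gaussian basis assumed
    A = G.T @ G                                            # A = G^T * G
    B = np.linalg.pinv(A)                                  # B = A^-1 (pseudo-inverse for stability)
    E = B @ G.T                                            # E = B * G^T
    F = E @ targets                                        # Step 2: F = E * D, the final weights
    return F, centres

def rbf_output(x, centres, F):
    """Evaluate the trained network, eq. (1), for one input vector x."""
    d2 = ((centres - x) ** 2).sum(axis=1)
    return np.exp(-d2) @ F
```

With the patterns themselves taken as centres, G is square, so the pseudo-inverse essentially solves the interpolation system; in the paper the resulting weights are then stored in a file and reused on the receiving side (Step 3).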
Step 2: Calculate the final weight.
          F = E * D

Step 3: Store the final weights in a file.
          The final updated weights are saved for testing the video transfer.

     III. SCHEMATIC DIAGRAM OF FACIAL TRACKING

          [Figure 1. Procedure for facial tracking: (a) training, (b) testing]

          Figure 1a shows the training procedure for the RBF. Frames are extracted from the video. Watershed segmentation is applied on the local and global objects in the face. The mean values of the segmented image are used for training the RBF. The final weights are stored in a file. Figure 1b shows the procedure for updating the frames in the receiving system; it uses the final weights obtained during training for updating the frames in the receiving system.
          Here, fx, fy and ft are the partial derivatives of the brightness function with respect to x, y and t; the function is chosen as in Nesi and Magnolfi [8]; and σ is the scale parameter. There are two different ways to robustly find the motion parameters: one is gradient-based, like the SOR method in [3]; the other is least-squares-based, such as the IWLS method. The algorithm begins by constructing a Gaussian pyramid (three levels are set up). When the estimated parameters are interpolated into the next level, they are used to warp (realized by bilinear interpolation) the last frame to the current frame. In the current level, only the changes are estimated in the iterative update scheme.
          For static segmentation, the watershed algorithm of mathematical morphology is a powerful method [9]. Early watershed algorithms were developed to process digital elevation models and are based on local neighborhood operations on square grids. Some approaches use "immersion simulations" to identify watershed segments by flooding the image with water starting at intensity minima. Improved gradient methods have been devised to overcome plateaus and square pixel grids [10]. The former method is used here. A severe drawback of the watershed computation is over-segmentation. Normally watershed merging is performed along with the watershed generation. Here, over-segmentation is acceptable, so the merging process is omitted during tracking, which saves some computational cost. Figure 2 shows the procedure for watershed segmentation.

     IV. TEMPLATE WARPING AND REGION ANALYSIS

          Once the motion parameters have been computed, the object template is warped from the last frame to the current frame. The warped template is then used to determine which watershed segments enter the template according to the following measure: given that the number of pixels of segment Ri belonging to the warped template is CPi and the number of all pixels in Ri is Ci, a ratio ri is computed as given in eq. (2):

               ri = CPi / Ci                                 (2)
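
As an illustration of eq. (2), the following small NumPy sketch (an example written for this description, not the authors' code) computes the overlap ratio for every watershed segment, assuming the watershed output is an integer label image and the warped template is a binary mask; the function and variable names are illustrative only.

```python
import numpy as np

def segment_overlap_ratios(labels, warped_template):
    """Compute r_i = CP_i / C_i (eq. (2)) for every watershed segment.

    labels          : (H, W) integer label image from watershed segmentation,
                      one label per segment R_i (0 taken here as watershed lines).
    warped_template : (H, W) boolean mask of the object template warped into
                      the current frame.
    Returns a dict mapping segment label -> overlap ratio r_i.
    """
    ratios = {}
    for lab in np.unique(labels):
        if lab == 0:
            continue                              # skip watershed lines / background
        seg = (labels == lab)
        C_i = seg.sum()                           # all pixels in R_i
        CP_i = (seg & warped_template).sum()      # pixels of R_i inside the warped template
        ratios[int(lab)] = CP_i / C_i
    return ratios
```

Segments with a high ratio would be kept, those with a very low ratio discarded, and the borderline ones decided by the warping-error test of eq. (3) in the classification cases described next.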
          Based on this measure, the classification of each sub-region is decided in the following cases:
1) When ri > r0, classify Ri as part of the final object template.
2) When r0 ≥ ri ≥ r1 (here r1 = 0.4), another measure, the MAE (Mean Absolute Error) of the difference between the warped frame and the current frame, is taken into account, eq. (3):

          Mi = Σx |f(x, t+1) − fw(x, t)| / Ci                        (3)

where fw(x, t) is the warped image of f(x, t) using the estimated dominant motion parameters. If the warping error Mi of Ri is small enough (less than a given threshold, for instance 10), Ri is still regarded as part of the updated template; otherwise, Ri is excluded from the object region.
3) When ri < r1, Ri will not be included in the updated template.

          [Figure 2. Flow chart for the watershed algorithm]

          When people make facial expression movements, especially when behaving emotionally (mainly, the six universal facial expressions are considered, i.e. disgust, sadness, happiness, fear, anger and surprise), head motion accompanies the expression in most cases. The procedure is divided into two steps: 1) head tracking is realized first, and the estimated motion is used to stabilize the face region; 2) the local motion of each facial feature is estimated relative to the stabilized face. Human face motion is complex, with rigid and non-rigid movements; hence, following the idea in [3], a modified affine model is adopted to describe the local motion of facial features (mouth, eyes and eyebrows) and a planar projective transform is used to model the head motion. The IWLS method is used to estimate these motion parameters.

                    V. EXPERIMENTAL RESULTS

          The project is implemented using Matlab 7. The average time taken for processing each frame is 1.4 seconds, which includes segmentation and processing with the neural network. The topology of the RBF is (width of the FAP) x (number of patterns) x 1. The total number of frames in the video is 91. The peak signal-to-noise ratio (PSNR) for all the reconstructed faces in the receiving system is shown in Figure 3. For the experiment, only 8 frames (Figure 4) are considered, which show significant changes in the lip movements.

          [Figure 3. Peak signal-to-noise ratio versus the frame numbers updated]

                       VI. CONCLUSION

          In this paper, an RBF-based approach is proposed for motion estimation undergoing facial tracking. The lip movements are the main focus of this work. Template warping by watershed segmentation, with an ANN for a quick decision on frame updating, is implemented. Applications of this method in facial expression tracking can be extended to other parts of the face.

                        REFERENCES
[1] Shi J. and Tomasi C., "Good features to track", in Proc. Computer Vision and Pattern Recognition, 1994.
[2] Parry et al., "Region Template Correlation for FLIR Target Tracking", British Machine Vision Conference '96.
[3] Yan Tong, Yang Wang, Zhiwei Zhu and Qiang Ji, "Robust facial feature tracking under varying face pose and facial expression", Pattern Recognition 40 (2007) 3195–3208.
[4] Gleicher M., "Projective registration with difference decomposition", IEEE CVPR '97, pp. 331-337, 1997.
[5] Sclaroff S. and Isidoro J., "Active blobs", ICCV '98.
[6] Nguyen and Worring M., "Multi-feature object tracking using a model-free approach", IEEE CVPR, pp. 145–150, 2000.
[7] Anton Podolsky and Valery Frolov, Face tracking, www.cs.bgu.ac.il/~orlovm/teaching/saya/.../saya-tracking-report.pdf
[8] P. Nesi and R. Magnolfi, "Tracking and Synthesizing Facial Motions with Dynamic Contours", Real-Time Imaging 2, 67–79 (1996).
[9] Vincent L. and Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations", IEEE T-PAMI, 13(6): 583-589, 1991.
[10] Gauch J., "Image segmentation and analysis via multi-scale gradient watershed hierarchies", IEEE T-IP, 8(1): 69-79, 1999.

          [Figure 4. Experimental results relating to lip movements (frames 5, 16, 24, 37, 46, 57, 70 and 91)]
    Performance Comparison of Speaker Identification
          using circular DFT and WHT Sectors
                                     Dr. H. B. Kekre (1), Vaishali Kulkarni (2),
                      Indraneal Balasubramanian (3), Abhimanyu Gehlot (4), Rasik Srinath (5)
                 (1) Senior Professor, Computer Dept., MPSTME, NMIMS University. hbkekre@yahoo.com
                 (2) Associate Professor, EXTC Dept., MPSTME, NMIMS University. Vaishalikulkarni6@yahoo.com
                 (3, 4, 5) Students, B-Tech EXTC, MPSTME, NMIMS University.
                 indraneal89@gmail.com, abhimanyu13090@gmail.com, rasik90@gmail.com



Abstract— In this paper we present an approach to text-dependent speaker identification using transform techniques, namely the DFT (Discrete Fourier Transform) and the WHT (Walsh Hadamard Transform). In the first method, the feature vectors are extracted by dividing the complex DFT spectrum into circular sectors and then taking the weighted density count of the number of points in each of these sectors. In the second method, the feature vectors are extracted by dividing the WHT spectrum into circular sectors and then again taking the weighted density count of the number of points in each of these sectors. A comparison of the two transforms shows that the accuracy obtained for the DFT (80%) is higher than that obtained for the WHT (66%).

   Keywords - Speaker identification; Circular Sectors; weighted density; Euclidean distance

                        I.    INTRODUCTION
    Human speech conveys an abundance of information, from the language and gender to the identity of the person speaking. The purpose of a speaker recognition system is thus to extract the unique characteristics of a speech signal that identify a particular speaker [1 - 4]. Speaker recognition systems are usually classified into two subdivisions, speaker identification and speaker verification [2 – 5]. Speaker identification (also known as closed-set identification) is a 1:N matching process where the identity of a person must be determined from a set of known speakers [7]. Speaker verification (also known as open-set identification) serves to establish whether the speaker is who he claims to be [8]. Speaker identification can be further classified into text-dependent and text-independent systems. In a text-dependent system, the system knows what utterances to expect from the speaker. However, in a text-independent system, no assumptions about the text can be made, and the system must be more flexible than a text-dependent system [4, 5, 8].
    Speaker recognition systems find use in a multitude of applications today, including automated call processing in telephone networks as well as query systems such as stock information, weather reports etc. However, difficulties in wide deployment of such systems are a practical limitation that is yet to be overcome [2, 6, 7, 9, 10]. We have proposed speaker identification using power distribution in the frequency domain [11], [12]. We have also proposed speaker recognition using vector quantization in the time domain by using the LBG (Linde Buzo Gray), KFCG (Kekre's Fast Codebook Generation) and KMCG (Kekre's Median Codebook Generation) algorithms [13 – 15], and in the transform domain using the DFT (Discrete Fourier Transform), DCT (Discrete Cosine Transform) and DST (Discrete Sine Transform) [16].
    The concept of sectorization has been used for content-based image retrieval (CBIR) [17] – [21]. We have proposed speaker identification using circular DFT sectors [22]. In this paper, we propose speaker identification using WHT sectors and also compare the results with DFT sectors. Fig. 1 shows how a basic speaker identification system operates. A number of speech samples are collected from a variety of speakers, and their features are extracted and stored as reference models in a database. When a speaker is to be identified, the features of his speech are extracted and compared with all of the reference speaker models. The reference model which gives the minimum Euclidean distance from the feature vector of the person to be identified is the maximum-likelihood model and is declared as the person identified.

                  [Figure 1. Speaker Identification System]
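
As a concrete illustration of the matching step just described, the sketch below shows closed-set identification by minimum Euclidean distance; it is written for this explanation, and the array layout and function name are assumptions rather than part of the proposed system.

```python
import numpy as np

def identify_speaker(test_feature, reference_models):
    """Closed-set identification by minimum Euclidean distance.

    test_feature     : (D,) feature vector of the unknown speaker.
    reference_models : (S, D) array, one stored feature vector per enrolled speaker.
    Returns the index of the reference model with the smallest Euclidean distance.
    """
    distances = np.linalg.norm(reference_models - test_feature, axis=1)
    return int(np.argmin(distances))
```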
A. Discrete Fourier Transform (DFT)
    The DFT transforms time- or space-based data into frequency-based data, and allows the component frequencies in data sampled at a fixed rate to be estimated efficiently [23, 24]. If the speech signal is represented by y(t), then the DFT of the time series of samples y0, y1, y2, ....., yN-1 is defined as given by (1):

        Yk = Σ_{n=0}^{N-1} yn e^(-2jπkn/N)                              (1)

    where yn = ys(nΔt); k = 0, 1, 2, ..., N-1; and Δt is the sampling interval.

B. Walsh Hadamard Transform
    The Walsh transform or Walsh–Hadamard transform is a non-sinusoidal, orthogonal transformation technique that decomposes a signal into a set of basis functions. These basis functions are Walsh functions, which are rectangular or square waves with values of +1 or –1. The Walsh–Hadamard transform returns sequency values. Sequency is a more generalized notion of frequency and is defined as one half of the average number of zero-crossings per unit time interval. Each Walsh function has a unique sequency value. The returned sequency values can be used to estimate the signal frequencies in the original signal. The Walsh–Hadamard transform is used in a number of applications, such as image processing, speech processing, filtering, and power spectrum analysis. It is very useful for reducing bandwidth storage requirements and for spread-spectrum analysis [25]. Like the FFT, the Walsh–Hadamard transform has a fast version, the fast Walsh–Hadamard transform (FWHT). Compared to the FFT, the FWHT requires less storage space and is faster to calculate because it uses only real additions and subtractions, while the FFT requires complex values. The FWHT is able to represent signals with sharp discontinuities more accurately using fewer coefficients than the FFT. The FWHTh is a divide-and-conquer algorithm that recursively breaks down a WHT of size N into two smaller WHTs of size N/2. This implementation follows the recursive definition of the Hadamard matrix HN given by (2):

        HN = (1/√2) [ HN/2   HN/2 ;  HN/2   −HN/2 ],  with H1 = [1]                 (2)

    The normalization factors for each stage may be grouped together or even omitted. The sequency-ordered, also known as Walsh-ordered, fast Walsh–Hadamard transform, FWHTw, is obtained by computing the FWHTh as above and then rearranging the outputs.
    The rest of the paper is organized as follows: Section II explains the sectorization process, Section III explains the feature extraction using the density of the samples in each of the sectors, Section IV deals with feature matching, results are explained in Section V, and the conclusion is given in Section VI.

        II.   SECTORIZATION OF THE COMPLEX TRANSFORM PLANES
    The speech signal has an amplitude range from -1 to +1. It is first converted into positive values by adding +1 to all the sample values, so that the amplitude range of the speech signal becomes 0 to 2. For sectorization two methods are used, which are described below.

A. DFT Sectorization
   The algorithm for DFT sectorization is given below:
1.  The DFT of the speech signal is computed. Since the DFT is symmetrical, only half of the number of points in the DFT is considered while drawing the complex DFT plane (i.e. Yreal vs. Yimag).
2.  The first point of the DFT is a real number, so it is considered separately while taking feature vectors. The complex plane is therefore formed only from points 2 to N/2, where N is the number of points in the DFT. Fig. 2 shows the original speech signal and its complex DFT plane for one of the samples in the database.
3.  For dividing the complex plane into sectors, the magnitude of the DFT is considered as the radius of the circular sector as in (3):

        Radius (R) = abs(sqrt((Yreal)^2 + (Yimag)^2))                    (3)

4.  Table I shows the ranges of the radius taken for dividing the DFT plane into circular sectors.

        [Figure 2. Speech signal (amplitude vs. number of samples) and its complex DFT plane (Ximag vs. Xreal)]

5.  The maximum range of the radius for forming the sectors was found by experimenting on the different samples in
    the database. Various combinations of the range were tried and the values given in Table I were found to be satisfactory. Fig. 3 shows the seven sectors formed for the complex plane shown in Fig. 2; different colours have been used to show the different sectors.
6.  The seven circular sectors were further divided into four quadrants each, as given by Table II. Thus we get 28 sectors for each of the samples. Fig. 4 shows the 28 sectors formed for the sample shown in Fig. 2.

        TABLE I.   RADIUS RANGE OF THE CIRCULAR SECTORS
      Sr. No.   Radius range       Sector      Weighing factor
      1         0 ≤ R ≤ 4          Sector 1    2/256
      2         4 ≤ R ≤ 8          Sector 2    6/256
      3         8 ≤ R ≤ 16         Sector 3    12/256
      4         16 ≤ R ≤ 32        Sector 4    24/256
      5         32 ≤ R ≤ 64        Sector 5    48/256
      6         64 ≤ R ≤ 128       Sector 6    96/256
      7         128 ≤ R ≤ 256      Sector 7    192/256

        [Figure 3. Circular sectors of the complex DFT plane of the speech sample shown in Fig. 2]

        TABLE II.   DIVISION INTO FOUR QUADRANTS
      Sr. No.   Value                      Quadrant
      1         Xreal ≥ 0 & Ximag ≥ 0      1 (0° – 90°)
      2         Xreal ≤ 0 & Ximag ≥ 0      2 (90° – 180°)
      3         Xreal ≤ 0 & Ximag ≤ 0      3 (180° – 270°)
      4         Xreal ≥ 0 & Ximag ≤ 0      4 (270° – 360°)

        [Figure 4. Sectorization of the DFT plane into 28 sectors for the speech sample shown in Fig. 2]

B. WHT Sectorization
   The algorithm for Walsh sectorization is given below:
1.  The WHT of the speech signal is taken using the FWHT (Fast Walsh Hadamard Transform).
2.  The WHT can be represented as (C0, S0, C1, S1, C2, S2, ......, CN-1, SN-1), where C represents a Cal term and S represents a Sal term.
3.  The Walsh transform matrix is real, but by multiplying all Sal components by j it can be made complex. The first term, i.e. C0, represents the dc value. The complex plane is therefore formed by combining S0 with C1, S1 with C2, and so on. In this case SN-1 is left out. Thus C0 and SN-1 are considered separately.
4.  The complex Walsh transform is then divided into circular sectors as shown by (4). Again the radial sectors are formed using the radius ranges shown in Table I.

        Radius (R) = abs(sqrt((Ycal)^2 + (Ysal)^2))                      (4)

5.  The seven circular sectors were further divided into four quadrants as explained in (A) by using Table II. Thus we get 28 sectors for each of the samples.

               III.   FEATURE VECTOR EXTRACTION
    For feature vector generation, the count of the number of points in each of the sectors is first taken. Then the feature vector is calculated for each of the sectors according to (5):

   Feature vector = ((count / n1) * weighing factor) * 10000             (5)

For the DFT, the first value, i.e. the dc component, is accounted for as in (6). For the WHT, C0 is accounted for as given by (6) and SN-1 is considered as given by (7). Overall there are eight components in the feature vector for DFT (one per sector and the first term).
Similarly, there are nine components in the feature vector for WHT (one per sector, the first term and the last term) when the seven circular sectors are considered. When 28 sectors are considered there are 29 components in the feature vector (one per sector and the first term) for DFT and 30 components in the feature vector (one per sector, the first term and the last term) for WHT.

   First term = sqrt(abs(first value of DFT/WHT))                        (6)

   Last term = sqrt(abs(last value of FWHT))                             (7)

                         IV.    RESULTS

A. Database description
    The speech samples used in this work are recorded using Sound Forge 4.5. The sampling frequency is 8000 Hz (8-bit, mono PCM samples). Table III shows the database description. The samples are collected from different speakers. Samples are taken from each speaker in two sessions so that a training model and testing data can be created. Twelve samples per speaker are taken. The samples recorded in one session are kept in the database and the samples recorded in the second session are used for testing.

                TABLE III.    DATABASE DESCRIPTION
               Parameter                 Sample characteristics
        Language                      English
        No. of Speakers               30
        Speech type                   Read speech
        Recording conditions          Normal (a silent room)
        Sampling frequency            8000 Hz
        Resolution                    8 bps

B. Experimentation
     This algorithm was tested for text-dependent speaker identification. Feature vectors for both the methods described in Section II were calculated as shown in Section III. For testing, the test sample is similarly processed and its feature vector is calculated. For recognition, the Euclidean distance between the features of the test sample and the features of all the samples stored in the database is computed. The sample in the database for which the Euclidean distance is minimum is declared as the speaker recognized.

C. Accuracy of Identification
The accuracy of the identification system is calculated as the percentage of test samples for which the correct speaker is identified, eq. (8):

   Accuracy (%) = (number of correctly identified test samples / total number of test samples) * 100          (8)

 Fig. 5 shows the results obtained for DFT sectorization. As seen from the results, when the complex DFT plane is divided into seven sectors, the maximum accuracy is around 80% and decreases as the number of samples in the database is increased (64% for 30 samples). It can be seen that accuracy increases when the number of sectors into which the complex DFT plane is divided is increased from 7 to 28. With 28 sectors, the maximum accuracy is 80% up to 20 samples, after which it decreases. When the complex plane is further divided into 56 sectors, there is an improvement in accuracy for a smaller number of samples, but as the number of samples is increased the performance is similar to that with 28 sectors.

               [Figure 5. Accuracy for DFT Sectorization]

               [Figure 6. Accuracy for WHT Sectorization]

Fig. 6 shows the results obtained for WHT sectorization. Here also we see that accuracy improves as the number of sectors is increased from 7 to 28, but further division into 56 sectors does not give any advantage. Overall the results obtained for DFT are better than those obtained for WHT.
                        V.    CONCLUSION
    Speaker identification using the concept of sectorization has been proposed in this paper. The complex DFT and WHT planes have been divided into circular sectors and feature vectors have been calculated using weighted densities. Accuracy increases when the 7 circular sectors are divided into 28 sectors for both transform techniques, but there is no significant improvement when the complex plane is divided further. The results also show that the performance of DFT is better than that of WHT.

                          REFERENCES
[1]  Lawrence Rabiner, Biing-Hwang Juang and B. Yegnanarayana, "Fundamentals of Speech Recognition", Prentice-Hall, Englewood Cliffs, 2009.
[2]  S. Furui, "50 years of progress in speech and speaker recognition research", ECTI Transactions on Computer and Information Technology, Vol. 1, No. 2, November 2005.
[3]  D. A. Reynolds, "An overview of automatic speaker recognition technology", Proc. IEEE Int. Conf. Acoust., Speech and Signal Processing, Vol. 7, No. 1, January 1999, IEEE, New York, NY, U.S.A.
[4]  S. Furui, "Recent advances in speaker recognition", AVBPA97, pp. 237-251, 1997.
[17] H. B. Kekre and Dhirendra Mishra, "Performance Comparison of Density Distribution and Sector mean of sal and cal functions in Walsh Transform Sectors as Feature Vectors for Image Retrieval", International Journal of Image Processing, Volume 4, Issue 3, 2010.
[18] H. B. Kekre and Dhirendra Mishra, "CBIR using Upper Six FFT Sectors of Color Images for Feature Vector Generation", International Journal of Engineering and Technology, Volume 2(2), 2010.
[19] H. B. Kekre and Dhirendra Mishra, "Performance Comparison of Four, Eight & Twelve Walsh Transform Sectors Feature Vectors for Image Retrieval from Image Databases", International Journal of Engineering Science and Technology, Volume 2(5), 2010.
[20] H. B. Kekre and Dhirendra Mishra, "Four Walsh Transform Sectors Feature Vectors for Image Retrieval from Image Databases", International Journal of Computer Science and Information Technologies, Volume 1(2), 2010.
[21] H. B. Kekre and Dhirendra Mishra, "Digital Image Search & Retrieval using FFT Sectors of Color Images", International Journal of Computer Science and Engineering, Volume 2, No. 2, 2010.
[22] H. B. Kekre and Vaishali Kulkarni, "Automatic Speaker Recognition using circular DFT Sectors", International Conference and Workshop on Emerging Trends in Technology (ICWET 2011), 25-26 February, 2011.
[23] Bergland, G. D., "A Guided Tour of the Fast Fourier Transform", IEEE Spectrum 6, 41-52, July 1969.
[24] Walker, J. S., Fast Fourier Transform, 2nd ed., Boca Raton, FL: CRC Press, 1996.
[25] Terry Ritter, Walsh-Hadamard Transforms: A Literature Survey, Aug. 1996.
[5]    J. P. Campbell, ``Speaker recognition: A tutorial,'' Proceedings of the
       IEEE, vol. 85, pp. 1437--1462, September 1997.                                                              AUTHORS PROFILE
[6]    D. A. Reynolds, “Experimental evaluation of features for robust speaker
       identification,” IEEE Trans. Speech Audio Process., vol. 2, no. 4, pp.          Dr. H. B. Kekre has received B.E. (Hons.) in Telecomm. Engg. from Jabalpur
       639–643, Oct. 1994.                                                             University in 1958, M.Tech (Industrial Electronics) from IIT Bombay in 1960,
[7]    Tomi Kinnunen, Evgeny Karpov, and Pasi Fr¨anti, “Realtime Speaker               M.S.Engg. (Electrical Engg.) from University of Ottawa in 1965 and Ph.D.
       Identification”, ICSLP2004.                                                                          (System Identification) from IIT Bombay in 1970. He
                                                                                                            has worked Over 35 years as Faculty of Electrical
[8]    F. Bimbot, J.-F. Bonastre, C. Fredouille, G. Gravier, I. Magrin-                                     Engineering and then HOD Computer Science and Engg.
       Chagnolleau, S. Meignier, T. Merlin, J. Ortega-García, D.Petrovska-                                  at IIT Bombay. For last 13 years worked as a Professor in
       Delacrétaz, and D. A. Reynolds, “A tutorial on text-independent speaker                              Department of Computer Engg. at Thadomal Shahani
       verification,” EURASIP J. Appl. Signal Process., vol. 2004, no. 1, pp.
                                                                                                            Engineering College, Mumbai. He is currently Senior
       430–451, 2004.
                                                                                       Professor working with Mukesh Patel School of Technology Management and
[9]     Marco Grimaldi and Fred Cummins, “Speaker Identification using                 Engineering, SVKM’s NMIMS University, Vile Parle(w), Mumbai, INDIA.
       Instantaneous Frequencies”, IEEE Transactions on Audio, Speech, and             He ha guided 17 Ph.D.s, 150 M.E./M.Tech Projects and several B.E./B.Tech
       Language Processing, vol., 16, no. 6, August 2008.                              Projects. His areas of interest are Digital Signal processing, Image Processing
[10]    Zhong-Xuan, Yuan & Bo-Ling, Xu & Chong-Zhi, Yu. (1999). “Binary                and Computer Networks. He has more than 300 papers in National /
       Quantization of Feature Vectors for Robust Text-Independent Speaker             International Conferences / Journals to his credit. Recently twelve students
       Identification” in IEEE Transactions.                                           working under his guidance have received best paper awards. Recently two
[11]   Dr. H B Kekre, Vaishali Kulkarni,”Speaker Identification using Power            research scholars have received Ph. D. degree from NMIMS University
       Distribution in Frequency Spectrum”, Technopath, Journal of Science,            Currently he is guiding ten Ph.D. students. He is member of ISTE and IETE.
       Engineering & Technology Management, Vol. 02, No.1, January 2010.
[12]   Dr. H B Kekre, Vaishali Kulkarni, “Speaker Identification by using                                 Vaishali Kulkarni has received B.E in Electronics
       Power Distribution in Frequency Spectrum”, ThinkQuest - 2010                                       Engg. from Mumbai University in 1997, M.E (Electronics
       International Conference on Contours of Computing Technology”,                                     and Telecom) from Mumbai University in 2006. Presently
       BGIT, Mumbai,13th -14th March 2010.                                                                she is pursuing Ph. D from NMIMS University. She has a
[13]     H B Kekre, Vaishali Kulkarni, “Speaker Identification by using Vector                            teaching experience of more than 8 years. She is Associate
       Quantization”, International Journal of Engineering Science and                                    Professor in telecom Department in MPSTME, NMIMS
       Technology, May 2010.                                                                              University. Her areas of interest include Speech
                                                                                       processing: Speech and Speaker Recognition. She has 10 papers in National /
[14]   H B Kekre, Vaishali Kulkarni, “Performance Comparison of Speaker
                                                                                       International Conferences / Journals to her credit.
       Recognition using Vector Quantization by LBG and KFCG ” ,
       International Journal of Computer Applications, vol. 3, July 2010.
[15]    H B Kekre, Vaishali Kulkarni, “ Performance Comparison of
       Automatic Speaker Recognition using Vector Quantization by LBG
       KFCG and KMCG”, International Journal of Computer Science and
       Security, Vol: 4 Issue: 5, 2010.
[16]     H B Kekre, Vaishali Kulkarni, “Comparative Analysis of Automatic
       Speaker Recognition using Kekre’s Fast Codebook Generation
       Algorithm in Time Domain and Transform Domain ” , International
       Journal of Computer Applications, Volume 7 No.1. September 2010.





                        Reliability and Security in MDRTS
                                              A Combine Colossal Expression


       Gyanendra Kumar Gupta                                    A. K. Sharma                                  Vishnu Swaroop
     Computer Sc. & Engg. Deptt.                       Computer Sc. & Engg. Deptt.                     Computer Sc. & Engg. Deptt.
     Kanpur Institute of Technology                    M.M.M. Engineering College                      M.M.M. Engineering College
       Kanpur, UP, India, 208001                       Gorakhpur, UP, India, 273010                    Gorakhpur, UP, India, 273010
       gyanendrag@gmail.com                              akscse@rediffmail.com                           rsvsgkp@rediffmail.com

Abstract— Numerous types of information systems are widely used in
various fields. With the fast development of computer networks,
information system users care more about data sharing across
networks. Sharing of information, and changes made by different
users at different permission levels, is controlled by a super
user, while read/write operations must still be performed in a
reliable manner. In a conventional relational database, data
reliability is controlled by a consistency control mechanism: when
a data object is locked in shared mode, other transactions can
only read it and cannot update it. If only the conventional
consistency control method is used, the system's concurrency is
adversely affected, so there are many new requirements for
consistency control in mobile distributed real-time systems
(MDRTS). In the present era, not only does information grow
enormously, it also brings together data of different natures,
such as text, images, pictures, graphics and sound. The problem is
not limited to the type of data; it also arises in different
database environments, such as mobile, distributed, real-time and
multimedia databases. There are many aspects of data reliability
problems in a mobile distributed real-time system (MDRTS), such as
inconsistency between attributes and types of data, and
inconsistency of topological relations after objects have been
modified. In this paper, many cases of data reliability are
discussed for information systems. As mobile computing becomes
popular and databases grow through information sharing, security
is a big issue for researchers. Reliability and security of data
are a big challenge, because whenever the data is not reliable and
secure, no operation on the data (e.g. a transaction) is useful.
This becomes more and more crucial when the data changes from one
form to another (i.e. through transactions) in non-traditional
environments such as mobile, distributed, real-time and multimedia
databases. In this paper we raise the different aspects and
analyze the available solutions for reliability and security of
databases. Conventional database security has focused primarily on
creating user accounts and managing user privilege levels on
database objects. We also give an overview of present and past
database security challenges.

    Key Words- System Reliability, Sharing, Data Consistency, Data
Privileges, Data Loss, Data Recovery, Integrity, Concurrency
Control & Recovery, Distributed Databases, Transactions, Security,
Authentication, Access Control, Encryption

                       I.     INTRODUCTION
    Data reliability summarizes the validity, accuracy, usability
and integrity of related data between applications and across the
IT infrastructure. It ensures that each user observes a reliable
view of the data, including visible changes made by the user's own
transactions (read/write) and by the transactions of other users
or processes [1, 2]. Data reliability problems may arise at any
time but are frequently introduced during or following recovery
situations, when backup copies of the data are used in place of
the original data. Reliability is mostly concerned with
consistency [3].
    Building distributed database system reliability is very
important. The failure of a distributed database system can result
in anything from easily repairable errors to disastrous meltdowns.
A reliable distributed database system is designed to be as fault
tolerant as feasible. Fault tolerance deals with making the system
function in the presence of faults, and faults can occur in any of
the components of a distributed system. This paper gives a brief
overview of the different types of faults in a system and some of
their solutions.
    Various kinds of data consistency have been identified. These
include Application Consistency, Transaction Consistency and
Point-in-Time Consistency.

              II.     VARIOUS TYPES OF CONSISTENCY

A. Point-in-Time Consistency
    Data is said to be Point-in-Time consistent if all of the
interrelated data components are as they were at any single
instant in time. This type of consistency can be visualized by
picturing a data center that has experienced a power failure.
Before the lights come back on and processing resumes, the data is
considered time consistent, because the entire processing
environment failed at the same instant of time.
    Different types of failures may create a situation where
Point-in-Time consistency is not maintained. For example, consider
the failure of a single logical volume containing data from
several applications. If the only recovery option is to restore
that volume from a backup taken sometime earlier, the data
contained on the restored volume is not consistent



with the other volumes, and additional recovery steps must be
undertaken [101].

B. Transaction Consistency
    A transaction is a logical unit of work that may include any
number of file or database updates. During normal processing,
transaction consistency is present only:

    •    before any transactions have run;

    •    following the completion of a successful transaction and
         before the next transaction begins; and

    •    when the application ends normally or the database is
         closed.

    After a failure of some kind, the data will not be transaction
consistent if transactions were in flight at the time of the
failure. In most cases what occurs is that once the application or
database is restarted, the incomplete transactions are identified
and the updates relating to these transactions are either backed
out, or processing resumes with the next dependent write [4].

C. Application Consistency
    Application consistency is similar to transaction consistency,
but on a grander scale. Instead of data consistency within the
scope of a single transaction, data must be consistent within the
confines of many different transaction streams from one or more
applications. An application may be made up of many different
types of data, such as multiple database components, various types
of files, and data feeds from other applications. Application
consistency is the state in which all related files and databases
are in sync and represent the true status of the application.
    Data consistency refers to the usability of data and is often
taken for granted in the single-site environment. Data consistency
problems may arise even in a single-site environment during
recovery situations, when backup copies of the production data are
used in place of the original data [5].
    In order to ensure that backup data is usable, it is necessary
to understand the backup methodologies that are in place as well
as how the primary data is created and accessed. Another very
important consideration is the consistency of the data once the
recovery has been completed and the application is ready to begin
processing.
    In order to appreciate the integrity of the data, it is
important to understand the dependent write process. This occurs
within individual programs, applications, application systems and
databases. A dependent write is a data update that must not be
written until a previous update has been successfully completed.
In large systems environments, the logic that determines the
sequence in which systems issue "writes" is controlled by the
application processing flow and supported by basic system
functions [6].
    By and large we take synchronization features for granted and
do not give much thought to how they all work together to protect
both the integrity and consistency of the data. It is the
integrity of the data and the various systems that allows
applications to restart after a power failure or other unscheduled
event.

          III.   DATA LOSS VS. DATA CONSISTENCY
    How does one reconcile the possibility of lost data with the
integrity and consistency of the data? Often, traditional backups
were created while files were being updated. Eventually, backups
created in this fashion were referred to as "fuzzy backups", as
neither the consistency nor the integrity of the data could be
assured.
    Is it a better idea to capture as many updates as possible,
even if the end result is not consistent? Let us consider this
point within the confines of a "typical" large systems data
center. For the sake of discussion, assume that there are many
applications sharing data on hundreds of logical volumes in many
thousands of data sets. What happens to the integrity of the data
if some updates are applied and others are not? Should this occur,
the data is in an artificial state, one that is neither time,
transaction nor application consistent. When the applications are
restarted, it is likely that some data will be duplicated, while
other data will still be missing. The difficulty here is in
identifying which updates were successful, which updates caused
erroneous results and which updates are missing.
    In all cases it is preferable to have time-consistent data,
even if a few partial transactions are lost or rolled back in the
process.
    Data loss can be defined as data that is lost and cannot be
recovered by another means. Often, individual transactions or
files can be restored or recreated, which is inconvenient but does
not represent a true loss of data. Even in cases where some
transactional data cannot be recreated or recovered by the data
center support teams, it can sometimes be re-entered by the end
user if necessary.
    When considering an asynchronous Business Continuity and
Disaster Recovery solution, it is important to understand that
some updates may be lost in flight. However, the greater
consideration is that the asynchronous solution selected provides
time-consistent data for all of the interrelated applications. In
this way, recovery is similar to the process necessary to achieve
transaction and application consistency following an outage at the
primary site.
    Data loss does not imply a loss of data integrity. However,
given a choice, most organizations will protect data consistency,
for example by ensuring that bank deposits and withdrawals occur
in the proper sequence so that account balances reflect a
consistent picture at any given point in time. This is preferable
to processing transactions out of sequence or, to use the banking
example again, recording the withdrawal and not the preceding
deposit [7].
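
    As a small, self-contained illustration of transaction
consistency and dependent writes (not drawn from the paper; the
accounts table and the SQLite engine are only stand-ins), the
sketch below groups a withdrawal and its dependent deposit into one
transaction, so that a failure backs out the incomplete work
instead of leaving a partial update:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
    conn.commit()

    def transfer(conn, src, dst, amount):
        # The deposit is a dependent write: it must not be kept unless
        # the withdrawal succeeds, so both run in a single transaction.
        try:
            with conn:  # commits on success, rolls back on any exception
                conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                             (amount, src))
                balance = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                                       (src,)).fetchone()[0]
                if balance < 0:
                    raise ValueError("insufficient funds")  # forces back-out
                conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                             (amount, dst))
        except ValueError:
            pass  # the in-flight transaction was backed out; data stays consistent

    transfer(conn, 1, 2, 30)    # succeeds: balances become 70 and 80
    transfer(conn, 1, 2, 500)   # fails: both updates are rolled back together
    print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())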




         IV.   THE BACKUP PROBLEM - AN OVERVIEW
    For a set of backup data to be of any value it needs to be
consistent in some fashion; time, transaction or application
consistency is required. For an individual data set, one with no
dependencies on any other data, this can be accomplished by
creating a simple Point-in-Time copy of the data and ensuring that
the data is not updated during the backup process [8].
    At first glance, this appears to be a relatively simple thing
to accomplish, at least for an individual data set. However, if
this data set is being updated by a critical on-line application,
there may never be an opportunity to create a consistent backup
copy without temporarily halting the critical application. With
today's dependence on 24x7 processing, the opportunities for even
temporarily interrupting critical applications to create a
"window" are seldom available [9].
    As this problem became more prevalent, various methods were
used to attempt to address the situation. One of these methods was
to create a "fuzzy" backup of the data, that is, to create the
backup copy while updates were allowed to continue. Various
utilities were used to perform this "backup while open" (BWO), but
they all shared the attribute that the backup copy of the data may
or may not be usable: if no additional actions were taken to
validate and ensure the consistency of the data, any use of this
backup data was predicated on the hope that "some data is better
than nothing" and generally produced unpredictable and/or
unrepeatable results.
    In fact, there are three different possible outcomes, should
this fuzzy backup be restored:

    1.   The data is accidentally consistent and usable. This is a
         happy circumstance that may or may not be repeatable.

    2.   The data is not consistent and not usable. A subsequent
         attempt to use the data detects the errors and abnormally
         ends subsequent processing.

    3.   The data is not consistent, but does not cause an abend
         and happens to be usable by the application. It is used by
         subsequent processing and any data errors go undetected
         and uncorrected. This is the worst possible outcome.

    One of the first things one might notice when looking at the
records contained in the backup is that they are different from
the data records that were present in the file both before the
backup started and immediately after the backup ended. In fact,
the records contained within the backup are a completely
artificial construct and do not accurately describe the contents
of the file at any point in time. This is not a consistent backup
of the data. It is neither data-consistent within itself nor
time-consistent from any point in time. It is a completely
artificial representation of a file that never existed [10].
    It is true that different records would have been backed up if
the write I/O pattern had been different, or if the backup process
had been either faster or slower. The point here is that unless
the backup could have been processed instantaneously (or at least
in the time between two of the file write I/Os), the backup copy
does not represent consistent data within the file.
    In order to address this failing, various methods were
developed, including transaction logging, transaction back-out and
file reload with applied journal transactions, to name a few.
These methods all share the attributes of requiring extra effort
(before the backup) and additional time, possibly even manual
intervention, before the data can be used. More importantly, the
corrective process requires an in-depth understanding of both the
application and the data. These requirements dictate that a unique
recovery scenario be designed for nearly each and every data set.
    The integrity problem is daunting enough when viewed in the
context of just these 20 records, but what about when there are
interdependencies between thousands of data sets residing on
hundreds (or even thousands) of volumes?
    In this greater context, simple data consistency within
individual data sets is no longer sufficient. What is required is
time consistency across all of the interdependent data. As it is
impossible to achieve this with the traditional backup
methodologies, newer technologies are required to support
time-consistent data.
    Fortunately, there are solutions available today. For a
single-site solution, FlashCopy with Consistency Groups can be
used to create a consistent Point-in-Time copy that can then be
backed up by traditional means [11].
    To guarantee the correct results and consistency of databases,
the conflicts between transactions can be either avoided, or
detected and then resolved. Most of the existing mobile database
concurrency control (CC) techniques use (conflict) serializability
as the correctness criterion. They are either pessimistic if they
avoid conflicts at the beginning of transactions, optimistic if
they detect and resolve conflicts right before commit time, or
hybrid if they are mixed. To fulfill this goal, locking, timestamp
ordering (TO) and serialization graph testing can be used as
either pessimistic or optimistic algorithms.

               V.    SECURITY IN DATABASES
    Database security refers to the system, processes and
procedures that protect a database from unintended activity.
Unintended activity can be categorized as authenticated misuse,
malicious attacks or inadvertent mistakes made by authorized
individuals or processes. Database security is also a specialty
within the broader discipline of computer security. Databases
introduce a number of unique security requirements for their users
and administrators. On one hand, databases are designed to promote
open and flexible access to data. On the other hand, it is this
same open access that makes databases vulnerable to many kinds of
malicious activity. These are just a few of the database security
problems that exist within organizations. The best way to
                                                                          problems that exist within organizations. The best way to



avoid a lot of these problems is to employ qualified personnel and
to separate the security responsibilities from the daily database
maintenance responsibilities [12, 31].
    Traditionally, databases have been protected from external
connections by firewalls or routers on the network perimeter, with
the database environment existing on the internal network as
opposed to being located within a demilitarized zone. Additional
network security devices that detect and alert on malicious
database protocol traffic include network intrusion detection
systems along with host-based intrusion detection systems.
    One of the main issues faced by database security
professionals is avoiding inference. Basically, inference occurs
when users are able to piece together information at one security
level to determine a fact that should be protected at a higher
security level. Database security has become more critical as
networks have become more open.
    Databases provide many layers and types of information
security, typically specified in the data dictionary, including:

    •    Access control

    •    Auditing

    •    Authentication

    •    Encryption

    •    Integrity controls

    Database security can begin with the creation and publishing
of appropriate security standards for the database environment.
The standards may include specific controls for the various
relevant database platforms; a set of best practices that cross
over the platforms; and linkages of the standards to higher-level
policies and governmental regulations.
    Access control is a term taken from the world of security. In
general, it means the enforcement of limitations and constraints
on whoever tries to access a protected resource. Guarding an
entrance against a person is also a practice of access control.
There are many types of access control [28], and some of them are
mentioned in this paper. Almost every computer user has several
forms of access control around them: a firewall or antivirus
program running on the computer, a popup blocker and many other
programs, all of which perform access control functions [13].
These programs guard us from intruders of sorts; they inspect
everything trying to enter the computer and let it in or keep it
out. Computers have complicated access control abilities: they ask
for authentication and check digital signatures. There are also
different types of keypads and access control systems. In today's
world the keys and locks are beginning to look different. With the
passage of time, key locks have also become smarter: they can
identify the patterns of your physical features, your voice, and
fingerprint locks can read your fingerprints [14, 15].
    Access control is a rapidly growing market and soon may
manifest itself in ways we cannot even imagine. Nowadays, security
access control is a necessary component for businesses. There are
many ways to create this security. Some companies hire a security
guard to stand at the gateway; there are also many security
devices that prevent or permit access, such as a turnstile. The
most effective access control systems are operated by computers.
    Auditing: a computer security audit is a manual or systematic
measurable technical assessment of a system or application. Manual
assessments include interviewing staff, performing security
vulnerability scans, reviewing application and operating system
access controls, and analyzing physical access to the systems.
Automated assessments include system-generated audit reports or
the use of software to monitor and report changes to files and
settings on a system. Systems can include personal computers,
servers, mainframes, network routers and switches; applications
can include web services and databases [16].
    Authentication is the process of confirming a user's or
computer's identity. The process normally consists of four steps:

    1. The user makes a claim of identity, usually by providing a
username. For example, the user might make this claim by telling a
database that his or her username is a particular name.

    2. The system challenges the user to prove his or her
identity. The most common challenge is a request for a password.

    3. The user responds to the challenge by providing the
requested proof, in this example by providing the database with
the password.

    4. The system verifies that the user has provided acceptable
proof, for example by checking the password against a local
password database or by using a centralized authentication server.

    Encryption is good: it helps make things more secure. However,
the idea that strong cryptography alone is good security is simply
wrong. Encrypted messages eventually have to be decrypted so that
they are useful to the sender or receiver, and if those end-points
are not secured, then obtaining the plain-text messages is trivial
[17]. There is no dispute about the need for strong encryption,
particularly for privileged communications. There is no way to
have a high level of assurance that the entire path between the
endpoints of a message is secure, so the message has to be hidden
in transit. While brute-force decryption is possible, modern forms
of encryption have made this process too long to be valuable [18].
    Computer security authentication means verifying the identity
of a user logging onto a network. Passwords, digital certificates,
smart cards and biometrics can be used to prove



the identity of the user to the network. Computer security
authentication includes verifying message integrity, e-mail
authentication and MAC (Message Authentication Code), which checks
the integrity of a transmitted message. It covers human
authentication, challenge-response authentication, passwords,
digital signatures, IP spoofing and biometrics [19, 26].
    Human authentication is the verification that a person
initiated the transaction, not a computer. Challenge-response
authentication is an authentication method used to prove the
identity of a user logging onto the network. When a user logs on,
the network access server (NAS), wireless access point or
authentication server creates a challenge, typically a random
number sent to the client machine. The client software uses its
password to encrypt the challenge through an encryption algorithm
or a one-way hash function and sends the result back to the
network. This is the response.
    Two-factor authentication requires two independent ways to
establish identity and privileges. The method of using more than
one factor of authentication is also called strong authentication.
This contrasts with traditional password authentication, which
requires only one factor in order to gain access to a system. A
password is a secret word or code used as a security measure
against unauthorized access to data. It is normally managed by the
operating system or DBMS. However, a computer can only verify the
legality of the password, not the legality of the user.
    The two major applications of digital signatures are setting
up a secure connection to a website and verifying the integrity of
files transmitted. IP spoofing refers to inserting the IP address
of an authorized user into the transmission of an unauthorized
user in order to gain illegal access to a computer system.
    Biometrics is a more secure form of authentication than typing
passwords or even using smart cards, which can be stolen. However,
some methods have relatively high failure rates; for example,
fingerprints can be captured from a water glass and used to fool
scanners.

    VI.   DATABASE SECURITY ISSUES: DATABASE SECURITY
            PROBLEMS AND HOW TO AVOID THEM
    A database security manager is the most important asset for
maintaining and securing sensitive data within an organization.
Database security managers are required to multitask and juggle a
variety of headaches that accompany the maintenance of a secure
database. For any organization it is important to understand some
of the database security problems that occur and how to avoid
them. By understanding the how, where and why of database
security, future problems can be prevented [20].

•   Regular Maintenance: Database audit logs require daily review
    to make certain that there has been no data misuse. This
    requires overseeing database privileges and then consistently
    updating user access accounts. A database security manager
    also provides different types of access control for different
    users and assesses new programs that work with the database.
    If these tasks are performed on a daily basis, you can avoid a
    lot of problems with users that may pose a threat to the
    security of the database.

•   Varied Security Methods for Applications: More often than not,
    application developers will vary the methods of security for
    different applications that are being utilized within the
    database. This can create difficulty in creating policies for
    accessing the applications. The database must also possess the
    proper access controls for regulating the varying methods of
    security, otherwise sensitive data is at risk.

•   Post-Upgrade Evaluation: When a database is upgraded, it is
    necessary for the administrator to perform a post-upgrade
    evaluation to ensure that security is consistent across all
    programs. Failure to perform this operation opens up the
    database to attack.

•   Split the Position: Sometimes organizations fail to split the
    duties between the IT administrator and the database security
    manager. Instead the company tries to cut costs by having the
    IT administrator do everything. This action can significantly
    compromise the security of the data due to the
    responsibilities involved with both positions. The IT
    administrator should manage the database while the security
    manager performs all of the daily security processes.

•   Application Spoofing: Hackers are capable of creating
    applications that resemble the existing applications connected
    to the database. These unauthorized applications are often
    difficult to identify and allow hackers access to the database
    via the application in disguise.

•   Manage User Passwords: Sometimes IT database security managers
    forget to remove IDs and access privileges of former users,
    which leads to password vulnerabilities in the database.
    Password rules and maintenance need to be strictly enforced to
    avoid opening up the database to unauthorized users.

•   Windows OS Flaws: Windows operating systems are not always
    effective when it comes to database security. Often theft of
    passwords is prevalent, as well as denial-of-service issues.
    The database security manager can take precautions through
    routine daily maintenance checks.

    As organizations increase their reliance on, possibly
distributed, information systems for daily business, they become
more vulnerable to security breaches even as they gain
productivity and efficiency advantages. Though a number of
techniques, such as encryption and electronic signatures, are
currently available to protect data when transmitted across sites,
a truly comprehensive approach for



data protection must also include mechanisms for enforcing access
control policies based on data contents, subject qualifications
and characteristics, and other relevant contextual information,
such as time. It is well understood today that the semantics of
data must be taken into account in order to specify effective
access control policies. Also, techniques for data integrity and
availability specifically tailored to database systems must be
adopted. In this respect, over the years the database security
community has developed a number of different techniques and
approaches to assure data confidentiality, integrity and
availability. However, despite such advances, the database
security area faces several new challenges. Factors such as the
evolution of security concerns, the "disintermediation" of access
to data, and new computing paradigms and applications, such as
grid-based computing and on-demand business, have introduced both
new security requirements and new contexts in which to apply and
possibly extend current approaches. In this review, we first
survey the most relevant concepts underlying the notion of
database security and summarize the most well-known techniques. We
focus on access control systems, to which a large body of research
has been devoted, and describe the key access control models,
namely the discretionary and mandatory access control models and
the role-based access control model. We also discuss security for
advanced data management systems, and cover topics such as access
control for XML. We then discuss current challenges for database
security and some preliminary approaches that address some of
these challenges [21].

              VII. MAJOR SECURITY CHALLENGES

 1.       Security Awareness and End Users
 2.       Google Exposure
 3.       Standards Compliance and Regulation Updates
 4.       Vulnerability Management
 5.       Frequent Change of Management and Lack of Co-ordination
          in Management

    We now review the four levels of transaction isolation, which
have differing degrees of impact on transaction processing
throughput. These isolation levels are defined in terms of three
phenomena that must be prevented between concurrently executing
transactions.

The three preventable phenomena are:

      •     Dirty reads: A transaction reads data that has been
            written by another transaction that has not been
            committed yet.

      •     Non-repeatable (fuzzy) reads: A transaction rereads
            data it has previously read and finds that another
            committed transaction has modified or deleted the data.

      •     Phantom reads: A transaction re-executes a query
            returning a set of rows that satisfies a search
            condition and finds that another committed transaction
            has inserted additional rows that satisfy the
            condition.

     VIII. INTRODUCTION TO DATA CONCURRENCY AND
       CONSISTENCY IN A MULTIUSER ENVIRONMENT
    In a single-user database, the user can modify data in the
database without concern for other users modifying the same data
at the same time. However, in a multiuser database, the statements
within multiple simultaneous transactions can update the same
data. Transactions executing at the same time need to produce
meaningful and consistent results. Therefore, control of data
concurrency and data consistency is vital in a multiuser database
[22].

    •    Data concurrency means that many users can access data at
         the same time.

    •    Data consistency means that each user sees a consistent
         view of the data, including visible changes made by the
         user's own transactions and the transactions of other
         users.

    To describe consistent transaction behavior when transactions
execute at the same time, database researchers have defined a
transaction isolation model called serializability. The
serializable mode of transaction behavior tries to ensure that
transactions execute in such a way that they appear to be executed
one at a time, or serially, rather than concurrently [31].
    While this degree of isolation between transactions is
generally desirable, running many applications in this mode can
seriously compromise application throughput. Complete isolation of
concurrently running transactions could mean that one transaction
cannot perform an insert into a table being queried by another
transaction. In short, real-world considerations usually require a
compromise between perfect transaction isolation and performance.
    In general, multiuser databases use some form of data locking
to solve the problems associated with data concurrency,
consistency and integrity. Locks are mechanisms that prevent
destructive interaction between transactions accessing the same
resource. Resources include two general types of objects:

    •    User objects, such as tables and rows (structures and
         data)

    •    System objects not visible to users, such as shared data
         structures in memory and data dictionary rows

    The database automatically provides read consistency to a
query, so that all the data that the query sees comes from a
single point in time (statement-level read consistency). The
database can also provide read consistency to all of the queries
in a transaction (transaction-level read consistency).
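
    The following short sketch (illustrative only; SQLite is used
here simply because it ships with Python, and the file name demo.db
is arbitrary) shows the "no dirty reads" guarantee in practice: a
second connection never sees another connection's uncommitted
change, only the last committed state.

    import os, sqlite3

    path = "demo.db"
    if os.path.exists(path):
        os.remove(path)

    writer = sqlite3.connect(path)
    reader = sqlite3.connect(path)
    writer.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
    writer.execute("INSERT INTO account VALUES (1, 100)")
    writer.commit()

    # The writer updates the row but does not commit yet (the change is "dirty").
    writer.execute("UPDATE account SET balance = 0 WHERE id = 1")

    # The reader still sees the last committed value, so no dirty read occurs.
    print(reader.execute("SELECT balance FROM account").fetchall())   # [(100,)]

    writer.commit()

    # Only after COMMIT does the new value become visible to other transactions.
    print(reader.execute("SELECT balance FROM account").fetchall())   # [(0,)]

Both the read committed and serializable isolation levels discussed
below prevent this dirty read; they differ in how they handle the
non-repeatable and phantom read phenomena.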



Database uses the information maintained in its rollback segments to provide these consistent views. The rollback segments contain the old values of data that have been changed by uncommitted or recently committed transactions. Database provides statement-level read consistency using data in rollback segments.

  1) Statement-Level Read Consistency
Database always enforces statement-level read consistency. This guarantees that all the data returned by a single query comes from a single point in time--the time that the query began. Therefore, a query never sees dirty data nor any of the changes made by transactions that commit during query execution. As query execution proceeds, only data committed before the query began is visible to the query. The query does not see changes committed after statement execution begins.

  2) Read Consistency with Real Application Clusters
Real Application Clusters use a cache-to-cache block transfer mechanism known as Cache Fusion to transfer read-consistent images of blocks from one instance to another. Real Application Clusters does this using high-speed, low-latency interconnects to satisfy remote requests for data blocks.

  3) Read Committed Isolation
The default isolation level for Database is read committed. This degree of isolation is appropriate for environments where few transactions are likely to conflict. Database causes each query to execute with respect to its own materialized view time, thereby permitting nonrepeatable reads and phantoms for multiple executions of a query, but providing higher potential throughput.

  4) Serializable Isolation
Serializable isolation is suitable for environments:

• With large databases and short transactions that update only a few rows

• Where the chance that two concurrent transactions will modify the same rows is relatively low

• Where relatively long-running transactions are primarily read-only

Serializable isolation permits concurrent transactions to make only those database changes they could have made if the transactions had been scheduled to execute one after another. Specifically, Database permits a serializable transaction to modify a data row only if it can determine that prior changes to the row were made by transactions that had committed when the serializable transaction began.

Under some circumstances, Database can have insufficient history information to determine whether a row has been updated by a "too recent" transaction. This can occur when many transactions concurrently modify the same data block, or do so in a very short period. You can avoid this situation by setting higher values of INITRANS for tables that will experience many transactions updating the same blocks. Doing so enables Database to allocate sufficient storage in each block to record the history of recent transactions that accessed the block.
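As a concrete illustration of the INITRANS remedy just described, the following hedged sketch issues the corresponding DDL through JDBC. The ORDERS table name and the value 8 are arbitrary placeholders, and Oracle-style ALTER TABLE syntax is assumed; typically only newly formatted blocks pick up the new setting.

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Hedged sketch: preallocate more transaction slots per block for a
    // hypothetical ORDERS table that many transactions update concurrently.
    // Oracle-style DDL is assumed; the table name and value are placeholders.
    public final class InitransTuning {
        public static void raiseInitrans(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.execute("ALTER TABLE orders INITRANS 8");
            }
        }
    }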
Database generates an error when a serializable transaction tries to update or delete data modified by a transaction that commits after the serializable transaction began. When a serializable transaction fails with the "Cannot serialize access" error, the application can take any of several actions:

• Commit the work executed to that point

• Execute additional (but different) statements (perhaps after rolling back to a savepoint established earlier in the transaction)

• Roll back the entire transaction

  5) Comparison of Read Committed and Serializable Isolation
Database gives the application developer a choice of two transaction isolation levels with different characteristics. Both the read committed and serializable isolation levels provide a high degree of consistency and concurrency. Both levels provide the contention-reducing benefits of Database's read consistency multiversion concurrency control model and exclusive row-level locking implementation, and are designed for real-world application deployment.

    a) Transaction Set Consistency
A useful way to view the read committed and serializable isolation levels in Database is to consider the following scenario: Assume you have a collection of database tables (or any set of data), a particular sequence of reads of rows in those tables, and the set of transactions committed at any particular time. An operation (a query or a transaction) is transaction set consistent if all its reads return data written by the same set of committed transactions. An operation is not transaction set consistent if some reads reflect the changes of one set of transactions and other reads reflect changes made by other transactions. An operation that is not transaction set consistent in effect sees the database in a state that reflects no single set of committed transactions.

Database provides transactions executing in read committed mode with transaction set consistency for each statement. Serializable mode provides transaction set consistency for each transaction.

    b) Row-Level Locking
Both read committed and serializable transactions use row-level locking, and both will wait if they try to change a row updated by an uncommitted concurrent transaction. The second transaction that tries to update a given row waits for the other transaction to commit or roll back and release its lock. If that other transaction rolls back, the waiting transaction, regardless of its isolation mode, can proceed to
change the previously locked row as if the other transaction had not existed.

However, if the other blocking transaction commits and releases its locks, a read committed transaction proceeds with its intended update. A serializable transaction, however, fails with the error "Cannot serialize access", because the other transaction has committed a change that was made since the serializable transaction began.

    c) Referential Integrity
Because Database does not use read locks in either read-consistent or serializable transactions, data read by one transaction can be overwritten by another. Transactions that perform database consistency checks at the application level cannot assume that the data they read will remain unchanged during the execution of the transaction, even though such changes are not visible to the transaction. Database inconsistencies can result unless such application-level consistency checks are coded with this in mind, even when using serializable transactions.
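The following hedged JDBC sketch shows the kind of application-level parent/child check the preceding paragraph warns about. The DEPT and EMP tables and their columns are placeholders; the point is that a plain SELECT does not lock the parent row, so the check can be invalidated before the transaction commits, whereas a SELECT ... FOR UPDATE (or, better, a declared foreign-key constraint) keeps the parent row in place until commit.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hedged sketch of an application-level integrity check. The DEPT/EMP
    // tables and columns are placeholders, not part of the paper.
    public final class ParentCheck {

        // Unsafe: the parent row can be deleted by another session between
        // the SELECT and the INSERT, even inside a serializable transaction,
        // because the read does not lock the row it sees.
        public static void insertEmployeeUnsafe(Connection conn, int deptId,
                                                String name) throws SQLException {
            try (PreparedStatement check = conn.prepareStatement(
                     "SELECT 1 FROM dept WHERE dept_id = ?")) {
                check.setInt(1, deptId);
                try (ResultSet rs = check.executeQuery()) {
                    if (!rs.next()) {
                        throw new SQLException("no such department: " + deptId);
                    }
                }
            }
            insertChild(conn, deptId, name);
        }

        // Safer: lock the parent row for the duration of the transaction so
        // that a concurrent delete must wait until this transaction commits.
        public static void insertEmployeeLocked(Connection conn, int deptId,
                                                String name) throws SQLException {
            try (PreparedStatement check = conn.prepareStatement(
                     "SELECT 1 FROM dept WHERE dept_id = ? FOR UPDATE")) {
                check.setInt(1, deptId);
                try (ResultSet rs = check.executeQuery()) {
                    if (!rs.next()) {
                        throw new SQLException("no such department: " + deptId);
                    }
                }
            }
            insertChild(conn, deptId, name);
        }

        private static void insertChild(Connection conn, int deptId, String name)
                throws SQLException {
            try (PreparedStatement insert = conn.prepareStatement(
                     "INSERT INTO emp (dept_id, name) VALUES (?, ?)")) {
                insert.setInt(1, deptId);
                insert.setString(2, name);
                insert.executeUpdate();
            }
        }
    }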
    d) Distributed Transactions
In a distributed database environment, a given transaction updates data in multiple physical databases protected by two-phase commit to ensure that all nodes or none commit. In such an environment, all servers, whether Database or non-Database, that participate in a serializable transaction are required to support serializable isolation mode.

If a serializable transaction tries to update data in a database managed by a server that does not support serializable transactions, the transaction receives an error. The transaction can roll back and retry only when the remote server does support serializable transactions.

In contrast, read committed transactions can perform distributed transactions with servers that do not support serializable transactions.

Application designers and developers should choose an isolation level based on application performance and consistency needs as well as application coding requirements.

For environments with many concurrent users rapidly submitting transactions, designers must assess transaction performance requirements in terms of the expected transaction arrival rate and response time demands. Frequently, for high-performance environments, the choice of isolation levels involves a trade-off between consistency and concurrency.

Application logic that checks database consistency must take into account the fact that reads do not block writes in either mode.

Database isolation modes provide high levels of consistency, concurrency, and performance through the combination of row-level locking and Database's multiversion concurrency control system. Readers and writers do not block one another in Database. Therefore, while queries still see consistent data, both read committed and serializable isolation provide a high level of concurrency for high performance, without the need for reading uncommitted ("dirty") data [23, 24].

    e) Read Committed Isolation
For many applications, read committed is the most appropriate isolation level. Read committed isolation can provide considerably more concurrency with a somewhat increased risk of inconsistent results due to phantoms and non-repeatable reads for some transactions.

Many high-performance environments with high transaction arrival rates require more throughput and faster response times than can be achieved with serializable isolation. Other environments that support users with a very low transaction arrival rate also face very low risk of incorrect results due to phantoms and non-repeatable reads. Read committed isolation is suitable for both of these environments.

Database read committed isolation provides transaction set consistency for every query. That is, every query sees data in a consistent state. Therefore, read committed isolation will suffice for many applications that might require a higher degree of isolation if run on other database management systems that do not use multiversion concurrency control.

Read committed isolation mode does not require application logic to trap the "Cannot serialize access" error and loop back to restart a transaction. In most applications, few transactions have a functional need to issue the same query twice, so for many applications protection against phantoms and non-repeatable reads is not important. Therefore, many developers choose read committed to avoid the need to write such error checking and retry code in each transaction.

    f) Serializable Isolation
Database's serializable isolation is suitable for environments where there is a relatively low chance that two concurrent transactions will modify the same rows and the long-running transactions are primarily read-only. It is most suitable for environments with large databases and short transactions that update only a few rows.

Serializable isolation mode provides somewhat more consistency by protecting against phantoms and nonrepeatable reads, and can be important where a read/write transaction executes a query more than once.

Unlike other implementations of serializable isolation, which lock blocks for read as well as write, Database provides nonblocking queries and the fine granularity of row-level locking, both of which reduce write/write contention. For applications that experience mostly read/write contention, Database serializable isolation can provide significantly more throughput than other systems. Therefore, some applications might be suitable for serializable isolation on Database but not on other systems.

Coding serializable transactions requires extra work by the application developer to check for the "Cannot serialize access" error and to roll back and retry the transaction.
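A bounded retry wrapper of the kind described here can be written once and reused. The sketch below is an illustration under stated assumptions rather than a prescribed interface: the Work callback, the retry limit and the failure test are choices of this example. Serialization failures are detected through the standard SQLSTATE class 40001 and, as an additional assumption, through an Oracle-style error code 8177 for "can't serialize access".

    import java.sql.Connection;
    import java.sql.SQLException;

    // Hedged sketch of the bounded retry loop suggested in the text for
    // serializable transactions. The failure detection below is an assumption:
    // SQLSTATE 40001 is the standard "serialization failure" class, and 8177
    // is an Oracle-style "can't serialize access" error code.
    public final class SerializableRetry {

        @FunctionalInterface
        public interface Work {
            void run(Connection conn) throws SQLException;
        }

        public static void runWithRetry(Connection conn, Work work, int maxRetries)
                throws SQLException {
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            for (int attempt = 0; ; attempt++) {
                try {
                    work.run(conn);
                    conn.commit();                 // success: stop retrying
                    return;
                } catch (SQLException e) {
                    conn.rollback();               // release locks before deciding
                    boolean serializationFailure =
                            "40001".equals(e.getSQLState()) || e.getErrorCode() == 8177;
                    if (!serializationFailure || attempt >= maxRetries) {
                        throw e;                   // give up after maxRetries
                    }
                    // otherwise loop and re-execute the whole unit of work
                }
            }
        }
    }

Statements most likely to conflict with other transactions would be placed as early as possible inside the callback, as recommended in the discussion that follows.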



Similar extra coding is needed in other database management systems to manage deadlocks. For adherence to corporate standards, or for applications that are run on multiple database management systems, it may be necessary to design transactions for serializable mode. Transactions that check for serializability failures and retry can be used with Database read committed mode, which does not generate serializability errors.

Serializable mode is probably not the best choice in an environment with relatively long transactions that must update the same rows accessed by a high volume of short update transactions. Because a longer-running transaction is unlikely to be the first to modify a given row, it will repeatedly need to roll back, wasting work. (Note that a conventional read-locking, pessimistic implementation of serializable mode would not be suitable for this environment either, because long-running transactions--even read transactions--would block the progress of short update transactions, and vice versa.)

Developers should take into account the cost of rolling back and retrying transactions when using serializable mode. As with read-locking systems, where deadlocks occur frequently, use of serializable mode requires rolling back the work done by terminated transactions and retrying them. In a high-contention environment, this activity can use significant resources.

For the most part, a transaction that restarts after receiving the "Cannot serialize access" error is unlikely to encounter a second conflict with another transaction. For this reason it can help to execute those statements most likely to contend with other transactions as early as possible in a serializable transaction. However, there is no guarantee that the transaction will complete successfully, so the application should be coded to limit the number of retries.

Many database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would each maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions [25].
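The classic example of such an anomaly is write skew. The hedged sketch below interleaves two transactions on two connections; the ONCALL table, its columns and the connection details are placeholders, and the application invariant is assumed to be "at least one doctor on duty". Because each transaction reads from its own snapshot and updates a row the other never touches, neither sees a conflict, both commit, and the invariant is silently violated; a fully serializable scheduler would have aborted one of them.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Hedged sketch of the "write skew" anomaly permitted by snapshot
    // isolation. The URL and the ONCALL(doctor, on_duty) table are placeholders.
    public class WriteSkewDemo {
        private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/orcl";

        public static void main(String[] args) throws SQLException {
            try (Connection t1 = DriverManager.getConnection(URL, "scott", "tiger");
                 Connection t2 = DriverManager.getConnection(URL, "scott", "tiger")) {
                begin(t1);
                begin(t2);

                // Both transactions read from the same committed snapshot, so
                // each may see (for example) two doctors on duty and conclude
                // that it is safe to take one of them off duty.
                int onDuty1 = countOnDuty(t1);
                int onDuty2 = countOnDuty(t2);

                if (onDuty1 > 1) {
                    update(t1, "UPDATE oncall SET on_duty = 0 WHERE doctor = 'alice'");
                }
                if (onDuty2 > 1) {
                    update(t2, "UPDATE oncall SET on_duty = 0 WHERE doctor = 'bob'");
                }

                // The transactions touch different rows, so neither sees an
                // update conflict: under snapshot isolation both commits
                // succeed and the invariant can be silently violated.
                t1.commit();
                t2.commit();
            }
        }

        private static void begin(Connection c) throws SQLException {
            c.setAutoCommit(false);
            c.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        }

        private static int countOnDuty(Connection c) throws SQLException {
            try (Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT COUNT(*) FROM oncall WHERE on_duty = 1")) {
                rs.next();
                return rs.getInt(1);
            }
        }

        private static void update(Connection c, String sql) throws SQLException {
            try (Statement st = c.createStatement()) {
                st.executeUpdate(sql);
            }
        }
    }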
                      IX.   CONCLUSION

Database security concerns the confidentiality, integrity, and availability of data stored in a database. An extensive body of research, from authorization, to inference control, to multilevel secure databases, and to multilevel secure transaction processing, addresses primarily how to protect the security of a database, especially its confidentiality. However, very limited research has been done on how to survive successful database attacks, which can seriously impair the integrity and availability of a database [26, 27, 28].

Several techniques have been built for maintaining the security and reliability of systems, such as the classical data consistency techniques: two-process mutual exclusion (Dekker's and Peterson's algorithms), N-process mutual exclusion using hardware, and N-reader, 1-writer mutual exclusion using head/tail flags. However, the available techniques are not sufficient for database environments where the data is huge and the transactions, including the security system, are complex. The unusual requirements of security, moreover, mean that designers must carefully consider their options when choosing database technology for deployment: commercially available products can provide outstanding performance, reliability and scalability, but unless they are expressly designed for embedded use, they may compromise the overall security system. Security is more than just good cryptography: the point is not that encryption is worthless, but that encryption by itself is not sufficient [29]. The endpoints need to be secure, passwords need to be difficult to crack, and those who do have access to the system need to be trustworthy. One might ask what the point is of an attacker being able to see plaintext versions of encrypted communication if they already have root access: they can obtain additional passwords for other systems, obtain information that passes through the system but is not stored on it (text conversations, for instance), or bypass system controls that might catch direct attempts at the data. System call traces can be used on any kind of process, such as e-mail daemons, web servers, or encrypted chat programs. In order for any security tool to be effective, it needs to be layered with other strong security tools, starting with a security policy. No one tool, by itself, can ever prevent information theft or attacks, but several layers of security provide the most solid defense against would-be hackers. Encryption needs to be accompanied by server hardening, intrusion detection, firewalls, and auditing; without them, encryption is easily compromised.
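As a reminder of what the classical techniques listed above look like, the following sketch is a two-thread Peterson's mutual-exclusion lock; the class name and the use of volatile fields are choices of this illustration, not part of the systems discussed in this paper.

    // Hedged sketch of Peterson's two-process mutual-exclusion algorithm.
    // Volatile fields are used so that the flag and turn writes are visible
    // to the other thread in program order.
    public final class PetersonLock {
        private volatile boolean flag0 = false;
        private volatile boolean flag1 = false;
        private volatile int turn = 0;

        // id must be 0 or 1; at most one of the two threads enters at a time.
        public void lock(int id) {
            int other = 1 - id;
            if (id == 0) { flag0 = true; } else { flag1 = true; }  // I want in
            turn = other;                                          // but you go first
            while (turn == other && (other == 0 ? flag0 : flag1)) {
                Thread.onSpinWait();                               // busy-wait
            }
        }

        public void unlock(int id) {
            if (id == 0) { flag0 = false; } else { flag1 = false; }
        }
    }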
                           REFERENCES

[1]  Turner, S., L. Albert, B. Gajewski, and W. Eisele, “Archived Intelligent Transportation System Data Quality: Preliminary Analyses of San Antonio TransGuide Data”, Transportation Research Record, 2000(1719), pp. 8.
[2]  Wang, R. Y., V. C. Storey and C. P. Firth, “A Framework for Analysis of Data Quality Research”, IEEE Transactions on Knowledge and Data Engineering, Vol. 7, No. 4, August 1995, pp. 623-640.
[3]  Redman, Thomas C., “Improve Data Quality for Competitive Advantage”, Sloan Management Review, 1995, pp. 99-107.
[4]  Ronald Fagin, “On an authorization mechanism”, ACM Transactions on Database Systems (TODS), v.3 n.3, pp. 310-319, Sept. 1978.
[5]  Bhattacharya, S., Brannon, K. W., Hsiao, H. and Narang, I., “Data Consistency in a Loosely Coupled Transaction Model”, IBM Research Report, RJ10232, Feb. 2002.
[6]  Elisa Bertino, Elena Ferrari, Vijay Atluri, “The specification and enforcement of authorization constraints in workflow management systems”, ACM Transactions on Information and System Security (TISSEC), v.2 n.1, pp. 65-104, Feb. 1999.




[7]  Nygard, Greg; Hammoudi, Faouzi, “Role-Based Access Control for Loosely Coupled Distributed Database Management Systems”, Storming Media, ISBN-13: 9781423511045, 132 pp.
[8]  Gyanendra Kumar Gupta, A. K. Sharma, V. Swaroop, “A permutation Gigantic Issues in Mobile Real Time Distributed Database: Consistency & Security”, IJCSE, Vol. 9, No. 3, March 2011.
[9]  Suparna Bhattacharya, Karen W. Brannon, Hui-I Hsiao, “Coordinating Backup/Recovery and Data Consistency Between Database and File Systems”, Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 2002.
[10] S. Bhattacharya, C. Mohan et al., “Coordinating backup/recovery and data consistency between database and file systems”, Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data (SIGMOD '02), ACM, New York, NY, USA, 2002.
[11] Ji-Won Byun, Yonglak Sohn, Elisa Bertino, “Systematic control and management of data integrity”, Proceedings of the Eleventh ACM Symposium on Access Control Models and Technologies, June 07-09, 2006, Lake Tahoe, California, USA.
[12] John B. Kam, Jeffrey D. Ullman, “A model of statistical databases and their security”, ACM Transactions on Database Systems (TODS), v.2 n.1, pp. 1-10, March 1977.
[13] David F. Ferraiolo, Ravi Sandhu, Serban Gavrila, Ramaswamy Chandramouli, “Proposed NIST standard for role-based access control”, ACM Transactions on Information and System Security (TISSEC), v.4 n.3, pp. 224-274, August 2000.
[14] Elisa Bertino, Ravi Sandhu, “Database security - concepts, approaches, and challenges”, IEEE Transactions on Dependable and Secure Computing, Vol. 2, Issue 1, IEEE Computer Society, 2005, pp. 2-19.
[15] Elena Ferrari, “Database as a Service: Challenges and Solutions for Privacy and Security”, Services Computing Conference, 2009, pp. 46-51, 232-241.
[16] B. Schneier, “Cryptography, Security, and the Future”, Communications of the ACM, v. 40, No. 1, January 1997, p. 138.
[17] B. Schneier, “Why Cryptography is Harder than it Looks”, Information Security Bulletin, v. 2, n. 2, March 1997, pp. 31-36.
[18] Gail-Joon Ahn, Ravi Sandhu, “Role-based authorization constraints specification”, ACM Transactions on Information and System Security (TISSEC), v.3 n.4, pp. 207-226, Nov. 2000.
[19] S. Srinivasan, Anup Kumar, “Database security curriculum in InfoSec program”, Proceedings of the 2nd Annual Conference on Information Security Curriculum Development, September 23-24, 2005, Kennesaw, Georgia.
[20] Bertino, E., Sandhu, R., “Database security - concepts, approaches, and challenges”, IEEE Transactions on Dependable and Secure Computing, Vol. 2, Issue 1, pp. 2-19, April 2005.
[21] H. T. Kung and J. T. Robinson, “On Optimistic Methods for Concurrency Control”, ACM Transactions on Database Systems, 6(2), 1981, pp. 213-226.
[22] P. M. Bober and M. J. Carey, “Multiversion query locking”, in Proceedings of the 18th Conference on Very Large Data Bases, Vancouver, Morgan Kaufmann (Los Altos, CA), August 1992.
[23] Ravishankar K. Iyer, “IEEE Transactions on Dependable and Secure Computing”, IEEE Computer Society Press, Los Alamitos, CA, USA, Volume 2, Issue 1, January 2005, p. 1.
     SIGART-SIGMOD Symposium on Principles of Database Systems, pages 135-141, March 1988.
[28] S. Jajodia, P. Samarati, V. S. Subrahmanian, and E. Bertino, “A unified framework for enforcing multiple access control policies”, in Proceedings of the ACM SIGMOD International Conference on Management of Data, May 1997, pp. 474-485.
[29] Michael J. Cahill, Uwe Rohm, Alan D. Fekete, “Serializable isolation for snapshot databases”, ACM Transactions on Database Systems (TODS), Vol. 34, Issue 4, December 2009.
[30] R. Sandhu and F. Chen, “The multilevel relational (MLR) data model”, ACM Transactions on Information and System Security, 1(1), 1998.

                         AUTHORS PROFILE

Gyanendra Kumar Gupta received his Master's degree in Computer Application in 2001 and M.Tech in Information Technology in 2004. He has worked as faculty in different reputed organizations. Presently he is working as Assistant Professor in the Computer Science and Engineering Department, KIT, Kanpur. He has more than 10 years of teaching experience. His areas of interest include DBMS, Networks and Graph Theory. His research papers related to Real Time Distributed Databases and Computer Networks have been published in several national and international conferences and journals. He is pursuing his PhD in Computer Science.

Dr. A. K. Sharma received his Master's degree in Computer Science in 1991 and his PhD degree in 2005 from IIT Kharagpur. Presently he is working as Associate Professor in the Computer Science and Engineering Department, Madan Mohan Malaviya Engineering College, Gorakhpur. He has more than 23 years of teaching experience. His areas of interest include Database Systems, Computer Graphics, and Object Oriented Systems. He has published several papers in national and international conferences and journals.

Vishnu Swaroop received his Master's degree in Computer Application in 2002. Presently he is working as Computer Programmer in the Computer Science and Engineering Department, Madan Mohan Malaviya Engineering College, Gorakhpur. He has more than 20 years of teaching and professional experience. His areas of interest include DBMS and Networks; his research papers relate to Mobile Real Time Distributed Databases and Computer Networks. He has published several papers in national and international conferences and journals. He is pursuing his PhD in Computer Science.

[24] D. Agrawal and S. Sengupta, “Modular synchronization in multiversion databases: Version control and concurrency control”, ACM SIGM