					            Liverpool John Moores University
                 School of Computing and Mathematical Sciences




      Annual Postgraduate
      Research Conference

Wednesday 17th & Thursday 18th March 2004
        to be held in Room 705,
              Byrom Street
          POSTGRADUATE RESEARCH CONFERENCE
                               Wednesday 17th March 2004

TIME     NAME                     STUDY PERIOD    TITLE

            Opening and Welcome by Professor Taleb-Bendiab (9.25 – 9.30am)
Session Chair: Professor Taleb-Bendiab
9.30   Fausto Sainz-Salces      40 mths      Earcons on a multimodal user interface for the
                                             elderly (Presentation)
9.55    Nikolaos Alexiou         5 mths      A Bluetooth Overview (Presentation)
10.20   Mark Allen               25 mths     Argumentation Techniques for Document
                                             Filtering (Presentation)
10.45   Muhammad Arshad          5 mths      A Gateway for the Utilisation of Networked
                                             Devices in Ubiquitous Computing (Presentation)
11.10   Sareer Badshah           17 mths     Impact of maternal age and other social factors on
                                             the hospital newborn health in Peshawar N.W.F.P
                                             Pakistan (Presentation)
11.35   Hulya Francis            25 mths     Augmenting Navigation for the Visually Impaired
                                             (Presentation)
               Lunch (12.00pm – 1.00pm – Raised Area of Student Dining Room)

Session Chair: Dr David England
1.00    Richard Cooksey         18 mths      A Programming Model for Self-Organising Object
                                             Systems (Presentation)
1.25    Victoria Craig           18 mths     Developing a Generic Web-based CASE tool for
                                             Planning GIS using Soft Systems Methodology to
                                             underpin the Planning Process (Presentation)
1.50    Teresa Ferran            53 mths     The evaluation of an intelligent learning system as
                                             a tool to support the learning of algebraically
                                             numerical manipulations (Presentation)
2.15    Paul Fergus              18 mths     On-Demand Service Composition in Ubiquitous
                                             Computing Environments (Presentation)
2.40    Peter Kinloch            5 mths      Police Mobile Communication Systems
                                             (Presentation)
                             Tea/Coffee Break (3.05pm- 3.20pm)
Session Chair: Dr Qi Shi
3.20     Thang Hoang             36 mths     Source separation and localisation of single trial
                                             analysis of event-related potentials
                                             (electroencephalogram) for investigating brain
                                             dynamics (Presentation)
3.45     Ian Jarman              17 mths     Prognostic grouping and profiling of breast cancer
                                             patients following surgery (Presentation)
4.05     Gurleen Arora           29 mths     Cascading Payment Content Exchange
                                             (CASPACE) Framework for P2P Networks
                                             (Presentation)

                                    Close (4.30pm approx)
         POSTGRADUATE RESEARCH CONFERENCE
                                Thursday 18th March 2004

TIME     NAME                     STUDY PERIOD    TITLE

Session Chair: Dr Martin Hannegan
 9.30    Philip Miseldine     2 mths        Grid Computing (Presentation)
 9.55    David Gresty         62 mths       Automated Response to Producer-Based
                                            Denial of Service: The Inevitability of Failure
                                            – Part II (Presentation)
10.20    John Haggerty            42 mths   An Informal Chat about the Experiences of a
                                            Researcher (Presentation)
10.45    Martin Randles           1 mth     Modelling Adjustable Autonomic Software
                                            through the Stochastic Situation Calculus
                                            (Presentation)
11.10    Alexander Soulahakis     24 mths   A Scenario of Mobile Communications in
                                            Hazardous situations and Physical disasters
                                            (Presentation)
11.35    Mengjie Yu               30 mths   Self-Regenerative Middleware Service for
                                            Cross-Standard and Ubiquitous Services
                                            Activation (Presentation)

         Lunch (12.00pm – 1.00pm – Raised Area of Student Dining Room)

Session Chair: Professor Paulo Lisboa
1.00     Jennifer Tang          38 mths     Information Technology Supported Risk
                                            Assessment Framework For Scenario
                                            Analysis In Drug Discovery and Development
                                            (Presentation)
1.25     Patrick Naylor           58 mths   Generic Risk and Protection Inspection Model
                                            in the Unified Modelling Language
                                            (Presentation)
1.50     Karen Murphy             16 mths   Development of Evidence and knowledge
                                            based decision support system for the
                                            selection of post-operative adjuvant treatment
                                            for breast cancer patients (Presentation)
2.15     Henry Forsyth            38 mths   The Development of Adaptive Dependable
                                            Software Systems (Presentation)
2.40     Bill Janvier             42 mths   Human-Human-Interaction replicated in
                                            Human-Computer-Interaction (Presentation)

                                   Close (3.30pm approx)
                 Earcons on a multimodal user interface for the elderly
                                        Fausto Sainz-Salces
                                  Email: CMSFSAIN@livjm.ac.uk


                                               Abstract
This research intends to investigate the use of audio in a multi-modal interface for elderly users.

Much research has been done on the use of sound to display data, monitor systems and provide
enhanced user interfaces for computers, but this research has so far not been aimed at household
applications for elderly users. Computers now offer multiple media output facilities that can
provide users with information through auditory, tactile and visual channels. Multimodal interfaces
will benefit all users and promote 'universal access'.

The design is targeted at the general population, although it is tested using elderly people. This does
not mean that the design is intended for the elderly and no other user group, but rather that this
user group, being the primary beneficiary (as well as the visually impaired), will determine its
design characteristics.

Audio interfaces seem well suited to control systems. Auditory displays have been shown to bring an
important improvement in the performance of elderly users when multimodal displays have been
used in a driving skills test. We designed an interface that uses earcons to convey the status of
household appliances, and we were confronted with questions such as:

Do the musical tones sound good, or do users want to turn the sound off? What were the users'
opinions about it? Did it deliver the information expected? Is the system efficient, convenient and
natural, allowing users to interact using their everyday skills? Can sound change erroneous mental
models?

The use of interviews, questionnaires and other methods of gathering users' opinions on the
interface facilitates the discovery of hidden issues and user concerns. Participatory design helps in
getting the requirements right from the intended users from the beginning of the process, increasing
the chances of a successful design. User involvement brought more accurate information about the
tasks and the users' perception of the system.

The design of the interface can help us to improve the quality of life of elderly and visually
impaired people by not socially stigmatising them or highlighting their disabilities.

The interface design avoids, as far as possible, making the user feel ashamed of the artefact's looks
or of their possible connections with the user's disability.

We used subjective measures to gather information on the users' opinions of the device, and
objective measures to test the effectiveness of the interface. The results of the experiment showed
certain areas for improvement and the difficulties of dealing with elderly and disabled users.

Different sound parameters will be used to increase information delivery.
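
As an illustration only (the appliance names, pitches and status mapping below are hypothetical, not
the design used in this study), status information for household appliances can be encoded in earcon
parameters such as pitch, repetition rate and timbre:

    # Hypothetical mapping of appliance status to earcon parameters.
    APPLIANCES = {"heating": 220.0, "cooker": 330.0, "doorbell": 440.0}   # base pitch (Hz)
    STATUS_TEMPO = {"ok": 1.0, "attention": 2.0, "alarm": 4.0}            # repetitions per second

    def earcon_for(appliance: str, status: str) -> dict:
        """Return the sound parameters for one appliance/status pair."""
        return {
            "pitch_hz": APPLIANCES[appliance],
            "repeats_per_second": STATUS_TEMPO[status],
            "timbre": "sine" if status == "ok" else "square",
        }

    print(earcon_for("cooker", "alarm"))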

The research can benefit other research in the area of elderly care, domotics and auditory design.
                                    A Bluetooth Overview

                                        Nikolaos Alexiou
                                Email: CMPNALEX@livjm.ac.uk


                                             Abstract
In the modern era of computers, where networks play a significant role, a new and promising set of
technologies is arising to replace cables with radio waves. Wireless technologies aim mostly at
removing cables from many applications of our daily lives. One major issue wireless technologies
must address is matching cable standards on security and speed. Bluetooth technology was created
out of the need to connect small handheld devices to their peripherals, and it has succeeded in
replacing cables without compromising security, unlike many other technologies in the wireless
family.
      ARGUMENTATION TECHNIQUES FOR DOCUMENT FILTERING

                                          Mark Allen
                                  Email: M.Allen@livjm.ac.uk

                                             Abstract

This project is investigating the possibilities of using argumentation techniques, derived from the
legal argument domain, to supply documents to users in a timely manner, supported by rhetorical and
persuasive arguments as to why a particular document should be read.

Early work concentrated on analysis of argument and the development of argument generating
programs. The current phase is concentrating on the development of ontologies, using the various
roles of a University as an example, from which we intend to derive the source material on which the
arguments are based.
  A Gateway for the Utilisation of Networked Devices in Ubiquitous Computing

                                      Muhammad Arshad
                               Email: CMPMARSH@livjm.ac.uk


                                             Abstract
There is a range of digital devices and the current trend is moving us closer to an increasingly
interconnected world. Currently, device usage is location dependent: for example, when you are in
your home environment and your PDA, which contains MP3s, resides in your work environment,
there is no mechanism to enable you to use the functionality it provides. You should be able to use
these devices irrespective of where you or your device resides. The challenge is to create a gateway
that will enable users to seamlessly integrate their devices. Before such a gateway can be developed,
a number of issues need to be addressed. For example, the gateway needs to provide Quality of
Service (QoS), device presence management and secure connectivity in ad hoc and structured
networks.

In this paper we discuss these issues in detail and describe our theoretical implementation of this
gateway.
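
As a rough sketch of the device presence management issue raised above (the registry class, timeout
and device name are assumptions for illustration, not part of the gateway design), devices could
register by heartbeat and be treated as absent once their last announcement becomes stale:

    import time

    class PresenceRegistry:
        """Track which devices are currently reachable via periodic heartbeats."""

        def __init__(self, timeout_s: float = 30.0):
            self.timeout_s = timeout_s
            self._last_seen: dict[str, float] = {}

        def heartbeat(self, device_id: str) -> None:
            # A device announces itself (first registration or refresh).
            self._last_seen[device_id] = time.time()

        def present_devices(self) -> list[str]:
            # Devices whose last heartbeat falls within the timeout window.
            now = time.time()
            return [d for d, t in self._last_seen.items() if now - t <= self.timeout_s]

    registry = PresenceRegistry()
    registry.heartbeat("pda-mp3-player")
    print(registry.present_devices())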
      Impact of maternal age and other biosocial factors on the hospital newborn
                       health in Peshawar N.W.F.P Pakistan
                                                         Sareer Badshah
                                                 Email: CMPSBADS@livjm.ac.uk
                                                            Abstract

Aim of the study: to investigate the influence/impact of maternal age and other biosocial factors on
the hospital newborn's health, controlling for a range of other variables. Objectives of the study: The
main objectives of this study are: (i) to assess birth health i.e., weight, length, head circumference,
and Apgar-scores (heart rate, respiratory effort, muscle tone, reflex irritability, and colour) at the two
hospitals in Peshawar, NWFP-Pakistan; (ii) to examine the effect of maternal age and other biosocial
factors on birth health (weight, height, head circumference and Apgar-score); (iii) to generate
yardsticks/parameters and establish base statistics, for further comparison in future with national &
international studies; (iv) to identify adverse factors affecting birth health, which will inform parents,
husbands, doctors, health department and funding agencies in their decisions. Design: Cross-
sectional prospective study. Settings: Khyber Postgraduate Teaching Hospital, Hayatabad Medical
Complex, Lady Reading Postgraduate Hospital and Maternity Hospital Peshawar, NWFP-Pakistan.
Participants: One thousand and thirty nine childbearing women. Main factors considered:
Maternal age & other biosocial factors, birth weight, height, head circumference and Apgar.
Findings: The preliminary findings (using linear regression analysis with a step-wise method) show that
gestational age is the only parameter that affects overall newborn health, i.e. birth weight,
length, head circumference and Apgar scores, which was further confirmed by univariate analysis.
The factors that influence newborn weight are gestational age, gravida, diabetes, maternal
registration, maternal weight, father's age, nationality, maternal height and maternal education
(F=18.9, P<0.01). Gestational age, father's age and preterm delivery (F=48.02, P<0.01) affect birth
height. Gestational age, preterm delivery and hypertension (F=20.23, P<0.01) affect birth head
circumference, and gestational age, marriage duration, anaemia, father's education, preterm delivery
and other risk factors (F=17.219, P<0.01) are correlated with the Apgar score at birth. Future
work: The study needs further investigation using other statistical tools employed by other researchers
in their studies, e.g. odds ratios, univariate analysis, t-tests, ANOVA, logistic regression and
multiple linear regression analysis.
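
As a purely illustrative sketch of the kind of regression analysis described above (the data are
synthetic and the variable set reduced; this is not the study's model or dataset), an ordinary least
squares fit and its overall F statistic can be obtained as follows:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    gestational_age = rng.normal(38, 2, n)     # weeks (synthetic)
    maternal_weight = rng.normal(60, 8, n)     # kg (synthetic)
    birth_weight = 0.12 * gestational_age + 0.01 * maternal_weight + rng.normal(0, 0.3, n)

    # Fit birth weight on the candidate factors and report the overall F test.
    X = sm.add_constant(np.column_stack([gestational_age, maternal_weight]))
    model = sm.OLS(birth_weight, X).fit()
    print(f"F = {model.fvalue:.1f}, p = {model.f_pvalue:.3g}")
    print(model.params)   # intercept and coefficients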


[Figure: Maternal age group and incidence of low birth weight. Low birth weight (%) plotted against
maternal age (years, 10 to 50); R² = 0.6989.]
                   Augmenting Navigation for the Visually Impaired
                                          Hülya Francis
                                       cmshfran@livjm.ac.uk

                                              Abstract
Innovation in assistive technology is generally focused on technological improvement. However,
despite immense technological innovation in information technology in recent years, three
fundamental dilemmas to successful navigation in sight-impaired situations remain unsolved. The
three dilemmas are related to (a) direction, (b) obstacles and (c) bandwidth. There is a need to solve
each of the three dilemmas to augment navigation for the visually impaired.

This research study addresses the three dilemmas by:

      Designing and developing an information system that incorporates:

      Direction: The starting point of any navigation is a point on the network. To derive the
       shortest usable path, the system must first determine the direction faced by the user. To
       monitor and take control action wherever necessary it will be necessary to monitor direction
       in real time. To accomplish this a digital compass and derived algorithms to determine the
       direction faced by a user will be designed and incorporated into an information system.

      Environmental Sensing Capability: The identification of risk on a network is a complex
       process. Navigation takes place in a dynamic environment. Multifarious objects interact with
       the environment randomly. Some objects may impose risk to the traveller in the environment.
       There will be a need to make sense of the navigable environment.

      Bandwidth Management: Not all data contained in a map designed for a sighted person may
       be relevant to a visually impaired person. To decrease bandwidth overload at the interface of
       human-machine, it will be necessary to determine which, of the many data objects are useful
       for transfer into information to augment navigation.

      Testing the designed information system by:

Designing a portable, lightweight mobile navigation system incorporating artificial intelligence, a
geographical information system, and global positioning system technology and mobile wearable
computers communicating via a wireless network.

Incorporating the designed algorithms into the information system.

Carrying out a survey of visually impaired users of the information system in Liverpool city centre.

Although designed to augment navigation for the visually impaired, the information system has
applications in defence, civil emergency response, and industry. The current study is partially
supported by the Royal National Institute for the Blind (UK) and Intergraph GeoMedia Inc (USA).
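
As a minimal illustrative sketch of the 'Direction' requirement described above (the heading
convention, path bearings and function names are assumptions, not the project's algorithms), a
heading can be derived from two-axis compass readings and matched against the bearings of the paths
leaving the user's current network node:

    import math

    def heading_degrees(mag_x: float, mag_y: float) -> float:
        """Heading clockwise from north, assuming a level, calibrated sensor."""
        return math.degrees(math.atan2(mag_y, mag_x)) % 360.0

    def best_matching_path(heading: float, paths: dict) -> str:
        """Choose the path (name -> bearing in degrees) closest to the heading."""
        def angular_diff(a: float, b: float) -> float:
            d = abs(a - b) % 360.0
            return min(d, 360.0 - d)
        return min(paths, key=lambda name: angular_diff(heading, paths[name]))

    h = heading_degrees(0.2, 0.7)
    print(h, best_matching_path(h, {"north path": 0, "east path": 90, "south path": 180}))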
              A Programming Model for Self-Organising Object Systems

                                         Richard Cooksey
                                 Email: R.D.Cooksey@livjm.ac.uk



                                              Abstract
Service Composition simplifies and makes available to ordinary users the capability of bringing
together and connecting a number of services to better meet their requirements. Most existing
implementations of service composition systems involve connections that are simple and high-level
in nature, since they require knowledge by the user of all the components involved to allow them to
make intelligent architectural decisions.

A recent attempt at a solution to this problem was made by a team at the Palo Alto Research Center
(PARC), who devised the 'SpeakEasy' approach. The basic concept is that high-level services
are provided in a standardised way that allows human users to connect components in any way they
see fit. An example connection described by PARC is a PowerPoint presentation on a file-system
SpeakEasy service linked to a SpeakEasy PowerPoint displaying service, which then links to a
SpeakEasy Projector service, all across the local area network. The input and output types of each
service are defined in each, and the source service in a connection determines how the data will be
sent and, if necessary, provides 'driver' code to the destination to decode the data sent; for example,
a video player could choose to stream data in DivX format and provide a codec to the recipient that
will allow the destination to process the data into a format that it 'understands'.
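
A minimal sketch of this style of typed connection (the service names and data types below are
illustrative, not PARC's actual interfaces) might simply refuse a link unless the source's output type
matches the destination's input type:

    from dataclasses import dataclass

    @dataclass
    class Service:
        name: str
        input_type: str | None    # None means the service is a pure source
        output_type: str | None   # None means the service is a pure sink

    def connect(source: Service, destination: Service) -> str:
        """Link two services if their advertised types are compatible."""
        if source.output_type is None or destination.input_type is None:
            raise ValueError("source must produce output and destination must accept input")
        if source.output_type != destination.input_type:
            raise ValueError(f"type mismatch: {source.output_type} -> {destination.input_type}")
        return f"{source.name} --[{source.output_type}]--> {destination.name}"

    file_store = Service("file-system service", None, "ppt-document")
    displayer = Service("PowerPoint displaying service", "ppt-document", "video-frames")
    projector = Service("projector service", "video-frames", None)
    print(connect(file_store, displayer))
    print(connect(displayer, projector))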

This investigation proposes a new design model and approach to application development based on a
lower-level form of automated service composition that better takes into consideration users'
ever-increasing desire for mobility, fluid application extension, optimal networked resource usage and
adaptability to circumstances potentially unforeseen at component design-time. Prototype
implementations of our design principles have demonstrated how applications can be dynamically
extended at runtime through connecting objects to themselves, and how a single application can be
distributed across multiple terminals and use their combined resources to its advantage (yielding
similar benefits to the GRID computing approach). Our research focus is on the underlying
mechanisms, or baseline set of functionality that components need to possess to support this kind of
architecture, and the support required in these components' environment(s) that will allow them to
communicate, self-organise and interact effectively.
Developing a Generic Web-based CASE tool for Planning GIS using Soft Systems
                Methodology to underpin the Planning Process

                                        Victoria Craig
                                Email: CMSVCRAI@livjm.ac.uk


                                            Abstract
The aim of this PhD research programme is to investigate and develop a generic Computer-Aided
Software Engineering (CASE) tool for deriving a Geographical Information System (GIS) using the
Soft Systems Methodology (SSM) to underpin the planning process. The research programme is
focused on the family of integrated Intergraph GIS-related products. The CASE tool will provide a
planning platform aimed at productivity increases in developing new GIS applications for a
wide spectrum of industries, by providing an automated CASE tool that simplifies and speeds up the
development process. The CASE tool will provide a development process that includes options for
developing applications using state-of-the-art wireless networking and web-based systems, as well as
standalone workstation applications; all of which should provide an easier development process and
should be completed within the constraints of time and availability of resources.
 The evaluation of an intelligent learning system as a tool to support the learning
                of algebraically applied numerical manipulations

                                          Teresa Farran
                                   Email: T.Farran@livjm.ac.uk


                                               Abstract
The aim of the investigation is to test the hypothesis that there are significant differences in learning
derived from the use of an intelligent learning system and a 'drill and practice' computer
environment.

Post-Dearing, there is an increasing emphasis on all learners having key skills at an appropriate level
and being given opportunities to improve all their key skills. There are many undergraduates who
have either been unsuccessful at GCSE Mathematics or, having 'just' passed, lack confidence in their
own ability at this level, and yet will require numerical skills and an adequate background for success
within their undergraduate programme, for employability and as a life skill. Despite the extensive
research on common numerical and algebraic errors and misconceptions amongst secondary school
pupils, there has been limited investigation of undergraduates' skills and understanding.

With the ever-increasing demands on resources and the need for individualised study opportunities
there is a compulsion to utilise technologies to fulfil this need. The aim of this research is to
investigate how computer technology can be efficiently and effectively adopted. The effectiveness of
the system will be dependent on the relevance of the curriculum content. Hence this study will
encompass an exploratory study to identify common methods of solution and common errors and
misconceptions.

The design of an intelligent tutoring system effective in supporting the learning and understanding of
solving algebraic problems will be based on the outcomes of an evaluation of an existing 'trial and
error' computer system by a hundred undergraduates during induction. By means of a questionnaire
derived from the work of Squires and Preece focusing on learning, users' opinions of the usability of
the software, as well as of pedagogical strategies and issues, have been collated. An automatic progress
log of users' actions has provided further evidence of common errors and methods of solution.
     On-Demand Service Composition in Ubiquitous Computing Environments

                                             Paul Fergus

                                  Email: CMPPFERG@livjm.ac.uk


                                               Abstract
User demands and technological advances are driving the complex integration between
heterogeneous devices, and moving us closer to pervasive computing environments. The home of the
future will include embedded 'intelligence' and accommodate a flexible integration of services to
perform functions that are only limited by our imagination. The family car and working environment
will be closely integrated, allowing the children to watch a movie on digital screens embedded in the
back of the driver and passenger seats by directly streaming the digital content from the DVD player
located in the living room of your home. Furthermore, you will be able to access and listen to music
stored on your MP3 player, which you left at the office on Friday night, via your Bluetooth-enabled
wireless headphones, without interrupting the children's viewing experience.

This vision will require us to perceive and interact with our devices in ways we have never
experienced before. The constraints placed on us by manufacturers and service providers will
become a thing of the past as the flexible and seamless integration of distributed devices becomes
commonplace. One of the challenges is to expose the functionalities offered by devices as
independent services and capitalise on advances made in global communications to revolutionise the
way we interact with the plethora of devices that surround us.

A framework needs to be developed that can effectively exploit the services offered by devices and
provide mechanisms that allow them to be combined to perform specific functions. This will allow
us to make three novel contributions:

         - utilise the functions offered by complex devices
         - effectively form dynamic service compositions
         - combine functions from multiple devices to create virtual appliances.
The challenge is to re-engineer devices to work in this way and implement services that provide
interfaces to the controls used by devices. This is not an easy task, because the plethora of devices
and the services they offer bring with them an effectively unbounded number of interfaces which are
not known a priori. Devices need to process service interfaces on demand and determine how a
particular service can be used. Typically, service bindings are achieved using pre-determined
interfaces or implementation-specific proxies. In ad hoc environments this is not possible because we
have no control over how and when devices join the network. Mechanisms need to be devised that can
dynamically discover services and 'intelligently' process the signatures they provide, allowing
devices to understand how the service operates and how it can be integrated with the existing services
the querying peer has. If we can achieve this, we will be able to manage the functionalities provided
by devices and dynamically discover and integrate the services and the controls they offer to form
absolute configurations.
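
As a rough sketch of on-demand signature processing (plain Python introspection standing in for the
authors' mechanisms; the service and operation names are hypothetical), a querying peer could inspect
an unknown service at runtime and bind to an operation only if its signature is compatible:

    import inspect

    class MediaPlayerService:          # an 'unknown' peer service joining the network
        def play(self, track: str, volume: int) -> str:
            return f"playing {track} at volume {volume}"

    def bind_operation(service: object, op_name: str, required_params: set):
        """Return a callable bound to the operation if its signature is compatible."""
        op = getattr(service, op_name, None)
        if op is None or not callable(op):
            raise LookupError(f"service offers no operation named {op_name!r}")
        params = set(inspect.signature(op).parameters)
        if not required_params <= params:
            raise LookupError(f"operation {op_name!r} lacks parameters {required_params - params}")
        return op

    play = bind_operation(MediaPlayerService(), "play", {"track", "volume"})
    print(play(track="song.mp3", volume=7))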
                          Police Mobile Communication Systems

                                           Peter Kinloch

                    Email: Peter.A.Kinloch/MIA@Merseyside.pnn.police.uk

                                              Abstract

One of the most important parts of a police officer's job is to gather as much intelligence as possible
when preparing to deal with a crime or an incident. There have been occasions in the past when
officers have attended a crime scene unprepared and this has caused problems along with
compromising their safety.

I propose to lead an investigation into whether this potential safety risk could be reduced by the
introduction of a police mobile communication system. It would be of great benefit to officers if
they could carry around with them a Personal Digital Assistant (PDA) or a mobile phone that can
access a large diversity of information that may be required at any given time. Officers would be
able to interrogate the databases on the crime recording system, incident recording system and other
such intelligence data stores to get the intelligence they need.

In addition to developing a system that can interrogate the Force's systems, there is also scope for
GIS mapping or creating links to the Police National Computer (PNC) or the DVLA.

The security of a wireless network such as this is of paramount importance. In addition to this the
bandwidth will need to be able to cope with the demands placed on it with a number of officers
gathering data at the same time from different areas of the constabulary. The suitability of TETRA
or other such wireless communication systems will need to be investigated.

The officer‟s requirements will need to be assessed in order to gain an appreciation of what should
be available on the mobile communication system. It is proposed that Peter Checkland's Lancaster
Model from Soft Systems Methodology (SSM) be used for this purpose.

I wish to explore how police mobile communication systems can be utilised by officers and their
ability to provide the intelligence that is a prerequisite in the modern policing environment.
Source separation and localisation of single trial analysis of event-related potentials
           (electroencephalogram) for investigating brain dynamics

                                         Thang Hoang
                                Email: CMSTHOAN@livjm.ac.uk


                                              Abstract
Electroencephalogram (EEG) has long been a major technology to investigate the functional
behaviour of the human and animal brain. A particularly important analysis tool consists of recording
continuous EEG responses from human subjects in about one second intervals following repeated
application of a stimulus, resulting in event-related potentials (ERPs).

It is assumed that the underlying sources of ERPs are spatially stationary across time. These sources
are stimulated by the experiment stimulus independently from the ongoing background
electroencephalogram (EEG). Therefore, in order to remove the noise due to the background EEG
activity, the ERPs signals are averaged over the repeated trials. The averaged ERP is then used as
noise free signal to localise the sources in the brain. On the other hand, some researchers have
suggested that ERP features arise from alterations in the dynamics of ongoing neural synchrony
possibly involving distributed sources in the brain, and emphasising the importance of analysis of
single trial ERPs.

In the first part of the thesis, we review and classify current methods of EEG/ERP analysis, and how
they relate to the different hypotheses about the sources and nature of brain activity. From this
review, we show that the assumption about the spatial stationarity of ERPs sources is questionable.

In the second part we compare our analysis of single trial ERPs to the results from averaged
ERPs. We show that single trial ERPs exhibit a delta band response latency distribution that is
strongly correlated with the experimental condition of monetary 'reward' or 'penalty', which is not
apparent in the averaged ERPs. The inter-trial synchronisation of the delta response in the monetary
'reward' condition was first discovered in this research. This result casts doubt on the reliability of
using averaged ERPs to carry out source localisation, as is common practice.

In the third part of the thesis, we provide evidence to support a new approach to EEG dipole source
localisation in which independent component analysis (ICA) is used as a source separation filter to
extract the independent source components of the ERP signal. Each of these source components is
the activation from a separate dipole. These dipoles can then be spatially localised by applying
inverse solution methods on the corresponding source components. Using CO2 laser evoked pain
potentials (LEP) we demonstrate that, in addition to detecting a well-documented caudal cingulate
dipole, this approach also estimates bilateral dipoles at inferior parietal cortex, secondary
somatosensory cortex (SII), premotor cortex, primary somatosensory cortex (SI) and insula. These
regions of dipoles are consistent with findings in positron emission tomography (PET) and functional
Magnetic Resonance Imaging (fMRI), but previously undiscovered by standard LEP dipole source
localisation methods. In addition to the high accuracy of source location, illustrated in repeated
studies, this approach also provides temporal activation of the dipoles across the entire epoch at
single trial level which may prove instrumental in future analysis of the dynamics of pain processing
in single trial LEP. Our findings open a new avenue to investigate pain related brain activities not
time locked to the experimental stimulus and the development of methodologies to monitor ongoing
pain and its treatment.
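
A minimal sketch of the ICA separation step (synthetic signals rather than LEP recordings;
scikit-learn's FastICA is used here purely for illustration) is shown below; the columns of the
estimated mixing matrix are what an inverse method would then localise:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000)
    sources = np.c_[np.sin(2 * np.pi * 7 * t),           # synthetic 'dipole' activations
                    np.sign(np.sin(2 * np.pi * 3 * t))]
    mixing = rng.normal(size=(8, 2))                      # 8 'electrodes', 2 sources
    recordings = sources @ mixing.T                       # shape (samples, channels)

    ica = FastICA(n_components=2, random_state=0)
    estimated_sources = ica.fit_transform(recordings)     # independent components
    print(estimated_sources.shape, ica.mixing_.shape)     # (1000, 2) and (8, 2)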
  Prognostic grouping and profiling of breast cancer patients following surgery
                                            Ian Jarman
                                  Email: I.H.Jarman@livjm.ac.uk


                                               Abstract
After surgery, assigning breast cancer patients into prognostic risk groups is of particular importance
in the management of their treatment. The main tool of the clinician over the last 20 years has been
the proportional hazards model, also known as Cox Regression, usually summarised into a clinical
algorithm such as the Nottingham Prognostic Index (NPI).
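
For reference (this is the standard published form of the index rather than anything specific to this
project), the NPI combines tumour size, lymph node stage and histological grade as

    NPI = 0.2 × S + N + G

where S is the tumour size in centimetres, N is the lymph node stage (1 to 3) and G is the histological
grade (1 to 3); lower values indicate a better prognosis.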

This model has been in use, without amendment, since it was first published over twenty years ago.
With the introduction of breast cancer screening in the early 90's, the availability of more patient data
and the increasing research into Artificial Neural Networks (ANN) for censored data, there is now a
need to reassess the clinically used models to discover whether they can be refined to take account of
the new data and analytical tools. The overall aim of the project is to add to the toolkit of the oncologist
in support of their decisions by integrating different methodologies, using the traditional models of
Cox regression alongside alternative models using Artificial Neural Networks.

Initially, we developed a method of cross-tabulation with NPI and PLANN-ARD (an ANN
algorithm), which has given us a new prognostic model with similar survival but with differing
patient allocation into comparable groups. To gain more insight into this we are in the process of
looking at treatment prediction and have used a rule extraction algorithm developed by Dr Terence
Etchells and Prof. Paulo Lisboa, to which we have added a filtering method specifically for this
problem; we have been able to refine a set of 11 rules describing the data down to just 3 without
much loss of accuracy and convergence of the data.
CASPACE: CASCADING PAYMENT CONTENT EXCHANGE FRAMEWORK
                    FOR P2P NETWORKS

                                            Gurleen Arora

                                 Email: CMPGAROR@livjm.ac.uk

                                               Abstract
The increased popularity of Peer-to-Peer (P2P) file-sharing networks has demonstrated that P2P is a
viable model for digital content distribution. People like the idea of easy access to digital content via
their desktop PCs, Personal Digital Assistants (PDA) or mobile phones. As people become more and
more connected their need for easy access to information at their fingertips also increases. P2P
technology provides a vehicle for the distribution of digital content at low cost to the content
producer and distributor obviating the need for publishing middlemen.

Content producers require modest resources by today's standards to act as distributors; hence small
producers may be able to distribute their content without creating relationships with large publishing
firms. Instead they can use the power of P2P technology to push content out and will do so as long as
they can ensure they will get paid. Presently producers rely on their relationship with big publishing
concerns to ensure they get compensated for their work and their copyrights do not get violated.
Ensuring the protection of copyright in the P2P domain brings with it many challenges. One of the
main challenges involves ensuring the copyright owner gets compensated for his effort every time
his content is 'shared' in the P2P network.

To overcome this challenge we propose a Cascading Payments Model (CPM) which compensates the
content owner by ensuring that the royalty flows back to him every time his content is shared. It also
compensates the middleman involved by paying him a commission, thus maintaining the
requirements of traditional economics. The CPM is realised by the creation of our Cascading
Payment Content Exchange (CasPaCE) framework which incorporates the P2P Services architecture.
Our framework's functionality has been encapsulated into various services such as Security,
Payment, Bank and Content Exchange Services, which may be used in various combinations to
perform different tasks. These tasks together enable the secure exchange of payments and digital
content in our case study where the following requirements are fulfilled:
   a. Ensuring content producers are compensated every time their content is propagated.
   b. Intermediaries who ensure the persistence and propagation of content are also compensated.
   c. The payment and content are exchanged in a fair manner so that no one party gains an
      advantage over the other.
   d. The transactions are atomic and may not be repudiated.
   e. Payments can only be redeemed by the person they are intended for.
In the current climate of high-speed Internet connectivity via various digital devices, these facts
potentially open up the field of paid content delivery via P2P systems as a viable commerce
opportunity, where the provision of content-related information can empower producers and
users by providing them with different business models.
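
As an illustration of the cascading payments idea (the rates, peer names and settlement rule below are
hypothetical, not the actual CasPaCE payment services), each onward sale can route a royalty back to
the content owner and a commission to every intermediary on the distribution path:

    ROYALTY_RATE = 0.50      # hypothetical owner's share of each sale
    COMMISSION_RATE = 0.10   # hypothetical share for each forwarding peer

    def settle_sale(price: float, owner: str, path: list) -> dict:
        """Split one sale between the owner, the intermediaries on `path` and the seller."""
        payments = {owner: price * ROYALTY_RATE}
        for peer in path:
            payments[peer] = payments.get(peer, 0.0) + price * COMMISSION_RATE
        seller = path[-1] if path else owner
        payments[seller] = payments.get(seller, 0.0) + price - sum(payments.values())
        return payments

    # Content produced by 'alice' is re-shared first by 'bob' and then by 'carol'.
    print(settle_sale(10.0, "alice", ["bob", "carol"]))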
                                        Grid Computing
                                          Philip Miseldine

                                 Email: CMPPMISE@livjm.ac.uk

                                              Abstract

Many commentators have speculated that grid computing will provide a basis for the second generation
of Internet (WWW) technology. Many grid computing research applications are now underway,
including Oxford University's Centre for Computational Drug Discovery project, which uses more
than one million PCs to find a cure for cancer. Influenced by the service-oriented architecture
including the web services standards, the Open Grid Services Architecture and Infrastructure (OGSA
and OGSI) have been proposed following a recent period of applied research into the application of
grid computing to a range of data intensive and high-throughput domains. OGSA and OGSI are
under development primarily through a re-engineering effort of the Globus toolkit.

OGSA itself comprises two parts: the Core Grid Components and the Base Grid Infrastructure. The
latter is also known as the Open Grid Services Infrastructure (OGSI) and provides grid services using
common protocols including WSDL and SOAP. The Core Grid Components sit upon these services
to manage membership of and access to the grid, as well as providing resource management services
amongst other duties. As an analogy, in the same way that OLE/COM provided a common framework
for interfacing and exchanging services on the Windows platform, OGSI provides the framework for
grid services that have common interfaces and that can process requests from clients. Core Grid
Components can then consume and manage these services, much in the way that end-user Windows
applications, like office suites, could interchange services through OLE.

With the expected widespread use of both grid services and web services, it can be foreseen that
competition will arise between rival services offering the same functionality for clients. Research has
been conducted into employing economic models to match the requirements of a user with the cost
and abilities of the competing services. For example, one economic model is that of auction-based
selection. A service is available, and several clients can offer bids for work: one client could offer
£50 to process their request in less than 2 hours, whereas another client could offer £20 to process
their request in 3 hours. Depending on the cost constraints and requirements of the available services,
one bid will be accepted, and another declined.
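
A minimal sketch of this auction-based selection (the feasibility rule and service time are
assumptions; the figures mirror the example above) might accept the most valuable bid whose deadline
the service can meet:

    from dataclasses import dataclass

    @dataclass
    class Bid:
        client: str
        offer_gbp: float
        deadline_hours: float

    def select_bid(service_time_hours: float, bids: list):
        """Accept the highest-value bid the service can complete within the deadline."""
        feasible = [b for b in bids if b.deadline_hours >= service_time_hours]
        return max(feasible, key=lambda b: b.offer_gbp, default=None)

    bids = [Bid("client A", 50.0, 2.0), Bid("client B", 20.0, 3.0)]
    print(select_bid(service_time_hours=1.5, bids=bids))   # client A wins
    print(select_bid(service_time_hours=2.5, bids=bids))   # only client B is feasible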

So far research has concentrated on interactions between single users and singly formed services. In
reality, one service may have dependency on another service to achieve its function. Indeed, these
service dependencies may be many levels deep and interdependent on previous services which
themselves have their own set of dependencies. This leaves research open to explore how economic
negotiations in complex service systems should take place to match a user and their varying
conditions of service (QoS, contract and SLA) with a suitable service, as current economic models
do not satisfy the requirements of negotiating contracts and pricing for these complex systems.

Thus, research is required to investigate scalable mechanisms to support service discovery and
composition, taking into account not only functional characteristics but also qualitative parameters
including service performance, risk and economic attributes. In addition, resources need to be
pre-allocated across the Grid to enable interactions at runtime between services to ensure QoS and
SLA contracts. This in itself introduces further complexity to resource management and differential
allocation schemas.
 Automated Response to Producer-Based Denial of Service: The Inevitability of
                             Failure – Part II
                                         David W. Gresty
                                   Email: D.Gresty@livjm.ac.uk


                                               Abstract
This presentation discusses the two elementary underlying problems that make Producer-Based
Denial of Service intractable within computer systems: finite resources and trust.

If the level of risk can be accurately assessed for a problem that cannot be solved, then the 'solution'
is to manage the level of exposure. After highlighting the elementary problems, a mechanism is
proposed to control the exposure to risk from these problems.

This presentation readdresses some of the issues that were originally proposed early on within this
research project. The crucial concept of 'self-denial', which will be used to rapidly reduce load on
an attacked system, and the architecture required to implement such a system, are shown along with
several weaknesses.

Self-denial is a process where a node or co-operating cluster of nodes request that they not receive
inbound traffic from an identified source. This method in effect manages the finite resource issue as
Consumer-based Denial of Service, in an attempt to limit the unnecessary load on the network.
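
As a rough sketch of the self-denial idea (plain Python with illustrative node names; this is not the
proposed architecture itself), a node that identifies an attacking source can ask co-operating upstream
peers to drop that source's traffic before it arrives:

    class CooperatingNode:
        def __init__(self, name: str):
            self.name = name
            self.blocked = set()

        def request_self_denial(self, source: str, peers: list) -> None:
            """Ask peers to stop forwarding inbound traffic from `source` to us."""
            for peer in peers:
                peer.blocked.add(source)

        def accept(self, source: str) -> bool:
            # Peers consult their block list before forwarding traffic onwards.
            return source not in self.blocked

    edge_a, edge_b, victim = CooperatingNode("edge-a"), CooperatingNode("edge-b"), CooperatingNode("victim")
    victim.request_self_denial("10.0.0.99", peers=[edge_a, edge_b])
    print(edge_a.accept("10.0.0.99"), edge_a.accept("10.0.0.7"))   # False True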

Trust issues are much more difficult to resolve at a purely protocol level, as although cryptography
provides assurance of authenticity, behaviour and managing authorised activities are important
control issues. The presentation will address some of the issues associated with controlling trust, and
decentralising Certification Authorities to limit single-point-of-failure issues.

This presentation concludes with a look to the future of the project and highlights a possible route for
successful implementation of this concept.
          A Dynamic Instrumentation Framework For Distributed Systems
                                           Denis Reilly
                                   Email: cmsdreil@livjm.ac.uk

                                               Abstract

Distributed systems prove difficult to develop and manage due to the possibility of different
component technologies and dynamic runtime behaviour. Middleware technologies, such as
CORBA, DCOM and Jini have been used for some time to simplify the development of distributed
systems by abstracting the complexities of the underlying network transport and operating systems.
However, by doing so, middleware obscures the architectural and behavioural aspects that are
needed to understand and manage distributed systems. From a software engineering perspective, the
middleware abstraction is a highly desirable property, but the downside is that, by obscuring the
network platform, the detection of changes in platform architecture, behaviour and/or performance
is hindered. In order to address this dilemma, a solution is proposed, based on well-founded
engineering principles, namely instrumentation. The instrumentation is applied in the form of
distributed services that extend core middleware services and can be dynamically attached
to/removed from application-level components at runtime to monitor behaviour and measure
performance.

The main knowledge contribution of the research is that of a dynamic instrumentation framework
that specifies a series of instrumentation models that together provide a reference framework. In
particular, the framework describes: requirements, classification, usage, formal analysis and
programming models that facilitate the incorporation of instrumentation within a distributed
application. It is intended that the framework may be used directly and/or extended by application
developers or those concerned with the actual development of middleware technologies. In the
longer term, it is hoped that the research will reinforce the case for instrumentation as a core
middleware service. A novel aspect of the framework is the programming model, which facilitates
the integration of instrumentation services to applications with minimum programming effort.
Essentially, through the programming model, instrumentation services may be incorporated within a
distributed application without having to add large amounts of additional instrumentation code to the
application itself – i.e. unobtrusively.
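
As a minimal sketch of this unobtrusive style of instrumentation (plain Python rather than Jini
services; the timing metric is an illustrative choice), a component's method can be wrapped at runtime
to record call durations and later restored, leaving the component's own code untouched:

    import functools
    import time

    def attach_timing(component, method_name: str) -> None:
        """Dynamically wrap a method on `component` to log how long each call takes."""
        original = getattr(component, method_name)

        @functools.wraps(original)
        def instrumented(*args, **kwargs):
            start = time.perf_counter()
            try:
                return original(*args, **kwargs)
            finally:
                print(f"{method_name} took {time.perf_counter() - start:.6f}s")

        setattr(component, method_name, instrumented)

    def detach(component, method_name: str) -> None:
        """Remove the instrumentation by restoring the wrapped method."""
        setattr(component, method_name, getattr(component, method_name).__wrapped__)

    class Worker:
        def run(self, n: int) -> int:
            return sum(range(n))

    worker = Worker()
    attach_timing(worker, "run")
    worker.run(100_000)
    detach(worker, "run")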

To date, an architecture has been developed to implement the framework and demonstrate the use of
instrumentation for monitoring the runtime behaviour and performance of a distributed application.
The architecture, which has been implemented in Jini middleware technology, has been evaluated
through several case studies involving existing Jini applications. In particular, case studies have been
conducted to demonstrate the use of instrumentation for: detecting interaction patterns, detecting
failure and for logging/performance analysis for both wired and wireless LANs.
   A Scenario of Mobile Communications in Hazardous Situations and Physical Disasters

                                      Alexander Soulahakis
                                 Email: CMSASOUL@livjm.ac.uk

                                              Abstract.

Throughout the last few years, researchers working on behalf of major phone companies have tried
alternative ways to reduce congestion on the GSM network at peak times. Previous research
has indicated that a mobile network is designed to support only a fraction of its subscribers. It is
clear that advanced techniques have been developed to keep the GSM network operating normally
at peak times in terms of stability and reliability. Therefore call traffic is a critical issue for GSM. In
this paper, we propose an alternative approach for mobile communication in case of partial or
complete loss of GSM service. In hazardous conditions such as floods, earthquakes, fires, tornadoes
or any other physical disasters, as well as extreme situations such as terrorist attacks, the GSM
network can be either partially or completely disabled. This paper intends to set out the basis for
alternative ways of handling GSM communications by proposing ad hoc networking and 802.11 as a
suitable communication medium.
 Information Technology Supported Risk Assessment Framework For Scenario
                Analysis In Drug Discovery and Development
                                           Jennifer Tang

                                Email: CMPZTANG@livjm.ac.uk

                                              Abstract

Risk assessment involves identifying and evaluating potential risks, through a well-designed
programme that prevents, controls and minimises risk exposure. The research focuses on risk
assessment of drug discovery and development in the pharmaceutical industry.

In the pharmaceutical industry, the process of discovering and developing a new drug is long, costly and
risky. Deciding which new product to develop is a major challenge for many growth companies
faced with a plethora of opportunities but limited resources.

The purpose of the project is to construct a model with an algorithm that can be used to objectively
assess the probability of success of a compound being developed as a pharmaceutical agent. To be
useful, the algorithm must be able to generate an output, such as risk/benefit ratio that allows
comparison of one compound against another, thereby informing a judgement about which
compounds to invest in and their priority. A systematic methodology combining knowledge of
Risk Assessment, Knowledge Elicitation, Mathematical Modelling and Financial Management is
proposed to fulfil this aim.

The simulation model is developed using the risk analysis software Crystal Ball to estimate the total
portfolio value and risk of a candidate drug, as well as the probability distributions of potential R&D
spending and sales. Sample results for a candidate drug include the forecast of the total cost, the time
to market launch and the probability of success.
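
As a purely illustrative sketch of such a simulation (the distributions and figures below are
assumptions, not Crystal Ball output or real project data), a Monte Carlo run can summarise total
cost, time to launch and probability of success for a candidate drug:

    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000
    cost_musd = rng.lognormal(mean=np.log(400), sigma=0.3, size=N)   # total R&D cost ($M)
    years_to_launch = rng.triangular(6, 9, 14, size=N)               # development time (years)
    success = rng.random(N) < 0.12                                   # assumed overall success rate

    print(f"mean cost        : ${cost_musd.mean():.0f}M")
    print(f"P(launch <= 10y) : {(years_to_launch <= 10).mean():.2f}")
    print(f"P(success)       : {success.mean():.3f}")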

Crystal Ball is used to compare the predicted probabilities and uncertainties of more than one
compound, to support decision-making. Easy-to-understand graphical summaries ranking more than
one compound under different scenarios are presented according to different user requirements,
based on the most important outputs such as the probability of success of candidate drugs under
different timings and costs.
       SELF-REGENERATIVE MIDDLEWARE SERVICE FOR CROSS-
         STANDARD AND UBIQUITOUS SERVICES ACTIVATION

                                            Mengjie Yu

                                 Email: CMSSMYU@livjm.ac.uk

                                             Abstract

For decades, the service-oriented architecture has been widely advocated as a software design model
which will provide low-cost, fast and flexible application development from published software
components. This has led to a flurry of research, in particular in web services, yet many technical
issues remain to be addressed, including support for runtime service activation and interoperation of
cross-standard services such as those deployed using middleware including DCOM, CORBA, J2EE,
Web services, JXTA and Jini. Several ongoing related works, such as the Web Services Invocation
Framework (WSIF), Openwings and the Open Grid Services Architecture (OGSA), have recognised
such issues and provided APIs which enable cross-standard services. However, most of these
approaches focus primarily on static (design-time) support for cross-standard service invocation. For
example, the currently tested release of the WSIF framework mainly focuses on design-time support
for interoperation of cross-standard web services deployed on multiple SOAP packages. Such support
is sufficient only as long as already deployed services are to be re-engineered or new SOAP interfaces
have been developed for them.

In this work, however, we consider a biologically inspired approach concerned with the general
requirement that distributed systems be capable of runtime self-regeneration of service invocation
channels. This emphasises the coexistence of, and seamless interoperation with, a variety of
polyarchical software applications which have been deployed on legacy or emerging service
standards and mechanisms. To this end, a self-regenerative mechanism for software service
connector/adapter generation has been developed to enable and support the invocation and adaptation
of runtime service application activities.

The current research work is based on a generative programming model provided as a polyarchical
middleware service for dynamic code generation and mobility for required service invocation. So far,
the proposed approach presents some attractive features, including ease of extensibility through the
use of templates to adopt legacy and/or new and emerging services' activation protocols. In addition,
our approach presents a dynamic service invocation adaptation mechanism which fully utilises the
template class design model. The concrete service adaptation code is generated from the template
class and the service invocation/binding information, which is retrieved and parsed at runtime from
the service description documents or the service interface. Such self-generated code constitutes the
connector code for establishing communication and/or the service-to-service activation code. A
proposed 'Polyarchical Middleware' architecture is also introduced to support such self-regenerative
service adaptation by discovering the legacy services, monitoring the adaptation performance and
constructing the adaptation behaviours; it also hides all the underlying technical details of
service adaptation from end-users and distributed systems. In addition, a code mobility utility service
has been developed to support the deployment of the self-generated code to clients as required, for
instance for scalability, flexibility and performance reasons.
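
As a rough sketch of template-driven connector generation (plain Python string templates standing in
for the polyarchical middleware; the binding details are hypothetical), adapter code can be rendered
from a template using binding information obtained at runtime and then compiled on the spot:

    from string import Template

    CONNECTOR_TEMPLATE = Template('''
    class ${class_name}:
        """Auto-generated connector for the '${service_name}' service."""
        def invoke(self, payload):
            # In a real system this would open a ${protocol} channel to ${endpoint}.
            return f"${protocol} call to ${endpoint} with {payload!r}"
    ''')

    def generate_connector(binding: dict) -> type:
        """Render the template with runtime binding details and return the new class."""
        source = CONNECTOR_TEMPLATE.substitute(binding)
        namespace = {}
        exec(source, namespace)         # compile the generated adapter code
        return namespace[binding["class_name"]]

    binding = {"class_name": "WeatherConnector", "service_name": "weather",
               "protocol": "SOAP", "endpoint": "http://example.org/weather"}
    Connector = generate_connector(binding)
    print(Connector().invoke({"city": "Liverpool"}))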

During the talk, a detailed presentation of the theoretical and implementation details will be given.
       Generic Risk and Protection Inspection Model in the Unified Modelling Language


                                             Patrick Naylor
                                  Email: CMSPNAYL@livjm.ac.uk


                                                 Abstract

The Generic Risk and Protection Inspection Model (GRAPIM) is based on a methodology developed
by the author, and designed for the analysis of safety risk and the effectiveness of risk reduction
(protection) measures. GRAPIM is principally intended for use in activities and industries where the
key risk is that of physical harm to the individual, and where protective measures are designed to
reduce this risk. It is targeted at entities that are termed installations: engineered systems comprising
equipment and processes, with failure modes and escalation paths that can be analysed and modelled.

GRAPIM is proposed as an eclectic approach that combines previously disparate techniques from the
areas of risk analysis, safety and reliability engineering into a structured metamodel/framework that
allows the user to focus on a particular problem in relation to an installation, or a given protection
system.

The model is developed using the principles of object-oriented software engineering and the
Unified Modelling Language (UML2), which is fast becoming the de facto standard language for object-
orientation. Its use is justified by the nature of the entities being modelled, their continual state of
change, and the need for adaptation of the model to new scenarios. The application – and
consequently extension – of the UML to model this architecture, particularly the risk analysis
package, is contended to be the novel contribution to knowledge in the sphere of object-oriented
software engineering, and hence the UML.

The research involved the development of GRAPIM, arising from a survey of the current body of
knowledge in both the domain and modelling areas, through the design process, to the model and its use.
Case studies were undertaken to enable evaluation of the developed model.
  Development of evidence and knowledge based decision support system for the
    selection of post-operative adjuvant treatment for breast cancer patients

                                         Karen Murphy
                                 Email: K.Murphy@livjm.ac.uk


                                              Abstract
The aim of this project is to produce a Knowledge-Based System (KBS) that augments clinicians'
decision validation by triangulating evidence from statistical modelling, knowledge-based reasoning
and data visualisation, and that supports good clinical practice in decision making concerning
post-operative adjuvant treatment for patients diagnosed with breast cancer.

The system will be based on British government approved guidelines, augmented by specialist
medical knowledge and supported by research evidence, providing assistance and guidance to both
clinician and patient in the treatment decision-making process so that the treatment plan best reflects
the individual patient's prognosis and preferences. The traceability of each piece of evidence and
each decision will be presented to both clinician and patient in the most appropriate form.

The novel aspect of this work is to understand and characterise patients' and clinicians' cross-cutting
concerns and their impact on decisions and information provision in specialist breast cancer referral
units.

Rule-based models, decision trees and an initial prototype of the decision system have been
developed for testing and validation purposes. Further work in progress includes a detailed
knowledge base and a suitable user interface.
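
Purely as an illustration of the rule-based style such a prototype might use, the sketch below encodes two invented, non-clinical placeholder rules; the patient attributes, conditions and recommendation texts carry no medical meaning and are not drawn from the project's knowledge base or any guideline.

// Purely illustrative rule structure; conditions and recommendations are placeholders.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class GuidelineRules {

    // Hypothetical patient attributes used only to drive the example rules.
    record PatientRecord(int age, boolean nodePositive, boolean receptorPositive) {}

    // A guideline rule: a condition over the record plus the advice it supports.
    record Rule(String name, Predicate<PatientRecord> condition, String recommendation) {}

    static List<String> evaluate(List<Rule> rules, PatientRecord p) {
        List<String> recommendations = new ArrayList<>();
        for (Rule r : rules) {
            if (r.condition().test(p)) {
                // Each fired rule is recorded by name so the advice stays traceable.
                recommendations.add(r.name() + ": " + r.recommendation());
            }
        }
        return recommendations;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("R1", p -> p.nodePositive(), "flag case for multidisciplinary review"),
            new Rule("R2", p -> p.receptorPositive(), "consult guideline section on endocrine options")
        );
        evaluate(rules, new PatientRecord(55, true, true)).forEach(System.out::println);
    }
}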
             The Development Of Adaptive Dependable Software Systems

                                           Henry Forsyth

                                 Email: H.L.Forsyth@livjm.ac.uk

                                              Abstract
The software development industry is facing a seemingly insurmountable problem. The functionality
and requirements of software are likely to increase exponentially in the next few decades. As software
systems grow in complexity, it becomes infeasible for humans to monitor, manage and maintain
every detail of their operations.

The software engineering industry is already suffering from a growing software maintenance backlog
due to increasingly frequent changes in “real world” requirements. It has been suggested that the
likely result will be that software development teams increasingly become maintenance teams,
reducing our capacity to develop new applications.

The need for adaptive software is made more urgent by the fact that software development is
becoming more complex and unpredictable. External pressures, such as increasing competition and
more sophisticated requirements, mean that traditional software development techniques will find it
increasingly difficult to satisfy the demand for quality software in the future.

Future software development will increasingly require software to adapt to its operating
environment in order to operate robustly and dependably. The key features of this software,
illustrated in the sketch below, will be its ability to:
•  Be aware of its environment
•  Monitor its relationship with the environment
•  Adapt its behaviour as necessary in order to operate robustly
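
A minimal sketch of that sense/monitor/adapt loop, assuming a single simulated load reading stands in for the environment and an invented threshold drives the adaptation; none of this is the project's tool.

// Minimal sketch of the sense/monitor/adapt loop described above. The "environment"
// here is a simulated load reading; thresholds and actions are invented for illustration.
import java.util.Random;

public class AdaptiveLoop {

    enum Mode { NORMAL, DEGRADED }

    public static void main(String[] args) throws InterruptedException {
        Random environment = new Random(42); // stands in for real environmental sensing
        Mode mode = Mode.NORMAL;

        for (int step = 0; step < 10; step++) {
            // 1. Be aware of the environment: take a reading.
            double load = environment.nextDouble();

            // 2. Monitor the relationship with the environment: compare against a threshold.
            boolean overloaded = load > 0.8;

            // 3. Adapt behaviour as necessary in order to operate robustly.
            if (overloaded && mode == Mode.NORMAL) {
                mode = Mode.DEGRADED;   // e.g. shed non-essential work
            } else if (!overloaded && mode == Mode.DEGRADED) {
                mode = Mode.NORMAL;     // recover full service
            }

            System.out.printf("step=%d load=%.2f mode=%s%n", step, load, mode);
            Thread.sleep(100);
        }
    }
}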

There are a number of research issues within the emerging field of adaptive software, but they can be
divided into two main areas: internal (within the software) and external (environmental).

For adaptive software to be developed successfully, it will be vital to be able to model the external
environment in which the software operates. Developing robust, dependable software systems will
require the ability to adapt to an unknown, complex environment.

This project will seek to develop a software tool that enables these environments to be modelled
rapidly, so that the most robust and dependable systems can be developed more easily.
   Modelling Adjustable Autonomic Software through the Stochastic Situation
                                             Calculus
                                          Martin Randles
                                Email:CMSMRAND@livjm.ac.uk
                                             Abstract

The increasing complexity of computer systems, spread over a wide heterogeneous network, has
brought about a new paradigm of distributed software engineering and lifetime management. This
model advocates that many of the computer systems management functions, including tuning and
maintenance, will be delegated to the software itself. This means that the software is required to
possess awareness capabilities to continuously monitor its own operating conditions, security and
environment. In this way it adjusts its behaviour for safe and optimal operation, ensuring its own
configuration, optimisation, healing and protection.

Much research work exists regarding self-adaptive software, proactive computing, reflective
middleware and autonomic computing. However, this new work aims to study and develop models
for adjustable autonomic software design. The study applies results emerging from the field of
Distributed Artificial Intelligence (DAI). Related work, developed by IBM amongst others, sought
to adopt a rule-based approach to implement the “sensor-actuator” mechanism for autonomic
behaviour. These methods, however, often lead to the creation of huge production systems, which
become difficult to manage and make the system's behaviour hard to understand. The proposed
solution is to provide a formal semantics for the event-situation-condition-action sequence via the
situation calculus of McCarthy and Hayes, as extended by Reiter and Levesque. This allows the
complete specification of agents and modularises the system behaviour into situations. The further
extension to the stochastic situation calculus can also be used where necessary. In addition, the
developed 'autonomic models' can be integrated within deliberative software agent architectures
based on the Belief-Desire-Intention (BDI) model. The investigation proposes extensions to BDI,
which may include an Epistemic-Deontic-Axiologic (EDA) or Beliefs-Obligations-Intentions-Desires
(BOID) model, to support software self-adaptation in a formal situation calculus representation,
leading to the safe, dependable, desirable and predictable software management that facilitates
autonomic computing.
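
As a minimal, illustrative rendering of the situation-calculus idea (a situation as the history of actions so far, with a fluent's value computed in the style of a successor state axiom), the sketch below uses an invented "serviceHealthy" fluent and action names; it is not the formal model developed in this work.

// Minimal sketch: a situation is the sequence of actions applied to the initial
// situation S0, and a fluent's truth value is derived from that history.
// The fluent and the action names are invented for illustration.
import java.util.List;

public class SituationSketch {

    enum Action { FAULT_DETECTED, SELF_HEAL }

    // A situation is represented here simply as the list of actions applied to S0.
    static boolean serviceHealthy(List<Action> situation) {
        boolean healthy = true; // value of the fluent in the initial situation S0
        for (Action a : situation) {
            // Successor-state style update: healthy after do(a, s) iff
            // a repairs the service, or it was healthy and a did not break it.
            healthy = (a == Action.SELF_HEAL) || (healthy && a != Action.FAULT_DETECTED);
        }
        return healthy;
    }

    public static void main(String[] args) {
        System.out.println(serviceHealthy(List.of()));                                        // true (S0)
        System.out.println(serviceHealthy(List.of(Action.FAULT_DETECTED)));                   // false
        System.out.println(serviceHealthy(List.of(Action.FAULT_DETECTED, Action.SELF_HEAL))); // true
    }
}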
      Human-Human-Interaction replicated in Human-Computer-Interaction

                                         William A Janvier
                                  Email: CMSWJANV@livjm.ac.uk

                                                 Abstract

Keywords: Communication Preference, Human-Computer Interaction, Learning Styles, Neuro-
linguistic Programming Language Patterns, Personality Types, Subliminal Text Messaging.


Following the successful sale of a pension to clients, the salesman, John, said to Eddy, his manager,
“I don't understand what happened.”

“Easy,” says Eddy. “What happened was effective communication. Both Bill and Cathy understood
what pension planning was about and, with this knowledge, decided to increase their pension.”

Eddy reminded John what happened in the meeting explaining that Bill pictures his thoughts, is an
introvert, needs time to think and makes decisions slowly, whereas his wife, Cathy, tends to
concentrate on sounds to stimulate her thoughts, is an extrovert, makes up her mind quickly, and, as the
driving force in the marriage, continually gives Bill permission to go ahead by using body-language
so that he is seen to be the main force.

Eddy says, “The art of selling is not to sell. It is vital to communicate with the clients at their level of
understanding, using the correct words to help them to understand. Once they really do understand
then they can make a well-informed decision. In other words, a good salesman helps a buyer to buy
and makes sure that they don't over-spend. You might recall that after about fifteen minutes I used
visual language when I talked to Bill and took time to explain everything clearly. I brought Cathy in
by using quick comments and auditory language. When they showed signs of emotion I changed to
emotional comments and back to visual and auditory as necessary. When they asked if they could
spend £2,000 per month I advised them to reduce this to an easily manageable level.”

In practice this requires deep concentration and the constant observation of body language, including
eye movements, certain gestures, breathing patterns, voice tone changes, and very subtle cues such as
pupil dilation, skin colour changes and facial movements, to enable Human-Human-Interaction to be
adjusted effectively as relevant.

This presentation discusses how part of this Human-Human-Interaction has been successfully
replicated in Human-Computer-Interaction in WISDeM.
         IMPROMPTU: ON-DEMAND SELF-SERVICING SOFTWARE
                         FRAMEWORK


                                    ELLA GRISHIKASHVILI

                                    Email: CMSEGRIS@livjm.ac.uk


                                             Abstract

The expectations of “new” users are increasing: they will require computers to support them in their
everyday tasks, no matter where they are. To this end, software should enable them to develop
applications on demand, assembled from published software components. It is essential that users
have facilities enabling the rapid assembly of a number of services to allow them to undertake their
work. Moreover, the assembled services are expected to self-manage and reconfigure in the event of
malfunction, inconsistency and/or failure.

Software self-healing behaviour is recognised as a key desirable capability to incorporate in next-
generation software. A number of new international initiatives are now underway to study aspects of
adaptive software engineering, including the DARPA-funded Self-Adaptive Software initiative and
the new IBM Autonomic Computing model. Overall, these initiatives apply advanced software
engineering methods and intelligent systems principles to give software objects the self-governance
needed to monitor and change their behaviour and structure, adapting to unpredictable environments
and the need for change.

Considering these new trends in computing systems, and combining distributed middleware with
service-oriented programming, this research introduces new methods and tools for component-based,
on-demand service assembly. The project is studying models and requirements for meta-system
reasoning in a self-healing software architecture, Impromptu, which provides an on-demand
self-servicing software framework offering runtime discovery, execution, assembly and monitoring
of distributed components.

The framework provides a utility based on an XML description language for assembling new
applications from discovered services; the description is then executed to activate the specified
assembly. In addition, using both reflection and the service description language, new services are
invoked at runtime over the RMI protocol.
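
As a rough illustration of the description-driven assembly idea, the sketch below parses a toy XML descriptor and lists the services it would activate; the element and attribute names are invented for this example and do not reproduce the framework's actual description language.

// Illustrative only: a toy assembly description and the code that reads it.
// "assembly", "service", "endpoint" and "operation" are assumed names.
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class AssemblyReader {
    public static void main(String[] args) throws Exception {
        String descriptor =
            "<assembly name='travel-planner'>" +
            "  <service endpoint='rmi://host/WeatherService' operation='getForecast'/>" +
            "  <service endpoint='rmi://host/MapService' operation='getRoute'/>" +
            "</assembly>";

        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(descriptor.getBytes(StandardCharsets.UTF_8)));

        // Walk the declared services; a real framework would look each endpoint up
        // and invoke the operation via reflection or RMI.
        NodeList services = doc.getElementsByTagName("service");
        for (int i = 0; i < services.getLength(); i++) {
            Element s = (Element) services.item(i);
            System.out.println("would invoke " + s.getAttribute("operation")
                    + " on " + s.getAttribute("endpoint"));
        }
    }
}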

In addition, this work contributes to the development of a design pattern language for on-demand
service delivery and assembly software.
     Knowledge Extraction and Automation within the Recruitment Industry
                                        Guy Pilkington
                                     Email: gdp@orange.net

                                             Abstract

Assessing the suitability of an individual for a particular job can present a number of different
problems within the recruitment industry. These problems are exacerbated by the fact that the
primary mechanism to convey information about an individual, the CV, has no set style or format.

This makes the comparison of a group of individuals a difficult and complex task, which has
previously relied on human decision making to infer the information necessary for the comparison of
two dissimilar documents. CV matching carried out by computer relies almost entirely on keyword
searches to identify individuals who may possess the desirable attributes required for a job.

The aim of this research is to produce a more robust system that can account for all of the different
factors that recruitment consultants use when selecting individuals, and then return an intelligent
collection of candidates ranked according to their suitability for the job.
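
To illustrate the kind of ranking this implies, the sketch below scores candidates against weighted job criteria rather than raw keyword counts; the criteria, weights and candidate data are invented for the example.

// Illustrative sketch of suitability ranking: each candidate is scored against a set
// of weighted job criteria. The criteria, weights and candidate data are invented.
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class CandidateRanking {

    record Candidate(String name, Map<String, Double> attributes) {}

    // Weighted sum of how well each required attribute is satisfied (0..1 per attribute).
    static double score(Candidate c, Map<String, Double> weights) {
        return weights.entrySet().stream()
                .mapToDouble(e -> e.getValue() * c.attributes().getOrDefault(e.getKey(), 0.0))
                .sum();
    }

    public static void main(String[] args) {
        Map<String, Double> jobWeights = Map.of("java", 0.5, "leadership", 0.3, "finance-domain", 0.2);
        List<Candidate> candidates = List.of(
                new Candidate("A", Map.of("java", 0.9, "leadership", 0.2)),
                new Candidate("B", Map.of("java", 0.6, "leadership", 0.8, "finance-domain", 0.7)));

        candidates.stream()
                .sorted(Comparator.comparingDouble((Candidate c) -> score(c, jobWeights)).reversed())
                .forEach(c -> System.out.printf("%s -> %.2f%n", c.name(), score(c, jobWeights)));
    }
}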

By immediately disregarding candidates who are obviously unsuitable because they do not accurately
match the job requirements, the system removes a process that takes a consultant several hours. The
net result is a superior set of candidates identified in a shorter space of time.
    Machine Learning: Middleware Services for Self-Management Open Grid
                          Services Infrastructure

                                         Wail Omar
                               Email: CMPWOMAR@livjm.ac.uk

                                              Abstract

Within the grid computing domain, web services are now adopted as the standard for the
development of the Open Grid Services Architecture (OGSA), described by Foster [1] as “… the
alignment and augmentation of Grid and Web services technologies …”.

Although a full specification of the OGSA has been proposed, leading to an abundance of research
interest addressing various grid computing concerns including load balancing, performance, Quality
of Service (QoS), security, distributed resource management, network management and fault
tolerance, there is still a need for research into the dynamic management of decentralised software
services, which will enable more intelligent software monitoring, tracking and control in response to
unpredictable changes in software, hardware and/or network services.

The proposed research is concerned with the development of a middleware and associated software
framework for a decentralised Open Grid Services Architecture (OGSA) with system monitoring,
including control, sensing and instrumentation. Based on an anonymous publish/subscribe protocol,
instrumentation data will be used to provide semantic utilities, for instance for intelligent control,
fault detection, anticipation and recovery, and/or service tracking. Machine learning and filtering
techniques will be used to improve the identification of system event signatures and conditional
action triggering, for example for load balancing, resource management and/or Quality of Service
(QoS) management.
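
As a minimal, in-process illustration of the publish/subscribe instrumentation idea, the sketch below has a sensor publish readings on a topic while an anonymous subscriber triggers a management action above an invented threshold; a real deployment would use a networked broker and richer event signatures.

// Minimal in-process sketch of publish/subscribe instrumentation. Topic names,
// thresholds and the triggered "action" are invented for illustration.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class InstrumentationBus {

    private final Map<String, List<Consumer<Double>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<Double> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, double reading) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(reading));
    }

    public static void main(String[] args) {
        InstrumentationBus bus = new InstrumentationBus();

        // Anonymous subscriber: it only knows the topic, not which sensor published.
        bus.subscribe("node1/cpu-load", load -> {
            if (load > 0.9) {
                System.out.println("high load " + load + " -> trigger rebalancing action");
            }
        });

        // A sensor publishing a few instrumentation readings.
        bus.publish("node1/cpu-load", 0.45);
        bus.publish("node1/cpu-load", 0.95);
    }
}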

The .Net framework is here used to develop an experimental self-organising and managing
middleware services for WAN-based grid systems. To this end, an Assembly Service/Infrastructure
Description Language (ASIDL) has been proposed and developed to describe required software
applications assembled (composed) from network available software services (components) and/or
OGSA compliant infrastructure (Grid). Also, a first prototype of Sensing and Actuation Description
Language (SADL) has been developed which enables deployment and discovery of different types of
sensors. In particular, a .Net-based sensor software factory and environment has been developed to
support remote monitoring, logging and analysis of a range of web services properties including;
structural, functional and operational aspects.

The remaining work focuses primarily on the application and development of machine learning
middleware services for proactive adaptation and situation anticipation. To this end, a range of
classical machine learning techniques is being studied, including Case-Based Reasoning (CBR),
induction learning (decision trees) and other statistically grounded techniques such as Support
Vector Machines (SVM).

				