UbiCC Journal – Volume 3

F. Zulkernine, W. Powley, W. Tian, P. Martin, T. Xu and J. Zebedee School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada {farhana, wendy, tian, martin, ziqiang, zebedee}

ABSTRACT

The growing complexity of Web service platforms and their dynamically varying workloads make manually managing their performance a difficult and time-consuming task. Autonomic computing systems that are self-configuring and self-managing have emerged as a promising approach to dealing with this increasing complexity. In this paper we propose an Autonomic Web Services Environment (AWSE), built as a collaborative framework of Autonomic Managers that endow each component in AWSE with self-managing capabilities. We present a unique approach to designing the Autonomic Managers for the AWSE framework by combining reflective programming techniques with extended database functionality. The validity of our approach is illustrated by a number of experiments performed using prototype implementations of our autonomic managers. The experimental results demonstrate the self-tuning capabilities of the autonomic managers in achieving preset performance goals geared towards satisfying a predefined Service Level Agreement for an Autonomic Web Services environment.

Keywords: Autonomic, Reflective, DBMS, Web Service, Management, AWSE



1 INTRODUCTION

Web services are autonomous software components that are remotely accessible through standard interfaces and communication protocols and that provide specific services to consumers. Because complex software systems crossing organizational boundaries can be built by orchestrating multiple Web services, they are now well accepted in Enterprise Application Integration (EAI) and Business-to-Business Integration (B2Bi) [8]. Performance plays a crucial role in promoting the acceptance and widespread use of Web services. Poor performance (e.g. long response times) leads to loss of customers and revenue [1]. In the presence of a Service Level Agreement (SLA), failing to meet performance objectives can result in serious financial penalties for the service provider. As a result, Quality of Service (QoS) is of utmost importance and has recently gained a considerable amount of attention [4], [25]. The accessibility and functionality of Web services are described in WSDL (Web Service Description Language) [32] documents, which are published and discovered via a UDDI (Universal Description, Discovery and Integration) [29] registry. SOAP (Simple Object Access Protocol) [24] is the most common message-passing protocol used to communicate with Web services.

A Web service hosting site typically consists of many individual components such as HTTP servers, application servers, Web service applications, and supporting software such as database management systems. If any component is not properly configured or tuned, the overall performance of the Web service suffers. For example, if the application server is not configured with enough worker threads, the system can perform poorly when the workload surges. Components such as HTTP servers, application servers and database servers are typically configured and tuned manually. To dynamically adjust resources in an ever-changing environment, these tasks must be automated.

Unacceptable Web service performance results from both networking and server-side issues. Most often the cause is congested application and data servers at the service provider's site because these servers are poorly configured and tuned. Expert administrators, knowledgeable in areas such as workload identification, system modeling, capacity
planning, and system tuning, are required to ensure high performance in a Web service environment. However, these administrators face increasingly difficult challenges brought on by the growing functionality and complexity of Web service systems, which stem from several sources:

Increased emphasis on Quality of Service - Web services are beginning to provide Quality of Service features. They must guarantee their service level so that the overall business process goals can be successfully achieved.

Advances in functionality, connectivity, availability and heterogeneity - Advanced functions such as logging, security, compression, caching, and so on are an integral part of Web service systems. Efficient management and use of these functions require a high level of expertise. Additionally, Web services incorporate many existing heterogeneous applications such as JavaBeans, database systems, CORBA-based applications, or message queuing software, which further complicates performance tuning.

Workload diversity and variability - Dynamic business environments that incorporate Web services bring a broad diversity of workloads in terms of type and intensity. Web service systems must be capable of handling these varying workloads.

Multi-tier architecture - A typical Web service architecture is multi-tiered. Each tier is a sub-system that requires different tuning expertise. The dependencies among the tiers are also factors to consider when tuning individual sub-systems.

Service dependency - A Web service that integrates with external services becomes dependent upon them. Poor performance of an external service can have a negative impact on the Web service.

Autonomic Computing [13] has emerged as a solution for dealing with the increasing complexity of managing and tuning computing environments.
Computing systems that exhibit the following four characteristics are referred to as Autonomic Systems:

Self-configuring - Define themselves on the fly to adapt to a dynamically changing environment.

Self-healing - Identify and fix failed components without introducing apparent disruption.

Self-optimizing - Achieve optimal performance by self-monitoring and self-tuning resources.

Self-protecting - Protect themselves from attacks by managing user access, detecting intrusions and providing recovery capabilities.

Autonomic Computing will shift the responsibility for software management from the human administrator to the software system itself. It is expected that Autonomic Computing will result in

significant improvements in terms of system management, and many initiatives have begun to incorporate autonomic capabilities into software components. One of these initiatives, proposed by Powley et al. [20], provides autonomic computing capabilities through the use of database functionality and reflective programming techniques. A reflective system maintains a model of self-representation, and changes to the self-representation are automatically reflected in the underlying system. Reflection enables inspection and adaptation of systems at runtime [18], making it a viable approach to embedding autonomic features in computing systems. The Database Management System (DBMS) is used for data storage, for the creation of a knowledge base, and for controlled execution of the logic flow in the system.

The Autonomic Web Services Environment (AWSE) is a general framework for developing autonomic Web services [26]. It can be extended for use in any Service Oriented Architecture (SOA). Autonomic Web services can also enable efficient use of Web services for Enterprise Resource Management (ERM) [19], thus extending the use of Web services to areas other than e-business. In this paper, we use the approach of Powley et al. to develop autonomic managers that are in turn used as the building blocks for an autonomic Web service based on the AWSE framework. The viability of our autonomic approach to Web service management is shown by illustrating the self-tuning capabilities of a sample Web service in meeting specified performance goals under changing workloads.

The remainder of the paper is structured as follows. Section 2 discusses related work. The AWSE framework, including the main concepts and technical details for building Autonomic Managers, is described in Section 3. In Section 4 we validate our approach using a sample Web service implemented with the AWSE framework. Section 5 summarizes and concludes the paper.

2 RELATED WORK

SLA-based service management has been proposed by several researchers, addressing different aspects of service management. Dan et al. [9] propose a comprehensive framework for SLA-based automated management of Web services with a resource provisioning scheme to provide different levels of service to different customers in terms of responsiveness, availability, and throughput. Customers are billed differentially according to their agreed service levels. The framework comprises the Web Service Level Agreement (WSLA) language, a system to provision resources based on Service Level Objectives (SLOs), a workload management system that prioritizes
requests according to the associated SLAs, and a system to monitor compliance with the SLA. Translation of WSLA specifications into system-level configuration information is performed by the service providers. Our research focuses on making each component in the Web service environment self-managing by augmenting each with an autonomic manager, thereby providing overall QoS through the collaboration of the autonomic managers.

Levy et al. at IBM Research [17] propose an architecture and prototype implementation of a performance management system that provides resource provisioning and load balancing with server overload protection for cluster-based Web services. The system uses an inner management level for queuing and scheduling of request messages, and an outer management level implementing a feedback control loop that periodically adjusts the scheduling weights and server allocations of the inner level. It supports multiple classes of Web services traffic but requires users to first subscribe to services. While we address similar problems here, we use SLA-based service provisioning rather than classification of service customers. Moreover, we use reflection and the facilities provided by a DBMS to implement the feedback control loop for autonomic management.

Sahai et al. [23] propose a Management Service Provider (MSP) model for remote or outsourced monitoring and control of E-services on the Internet. The model requires E-services to be instrumented with specific APIs to enable transaction monitoring using agent technology. An E-Service Manager is then deployed that manages the E-services remotely with the help of several other components. Sahai et al. [22] also propose an automated and distributed SLA monitoring engine for Web services using the Web Service Management Network (WSMN) Agent. Their approach uses proxy components attached to SOAP toolkits at each Web service site of a composite process to enable message tracking.
WSMN Agents monitor the process flow, defined using WSFL (Web Service Flow Language), to ensure SLA compliance. Our approach concentrates on the management of individual Web service environments rather than using an agent framework for composite service management.

Specific service management issues have been addressed by different researchers. Fuente et al. [12] propose Reflective and Adaptable Web Service (RAWS), a Web service design model based on the concept of reflective programming. Source code maintenance can be done dynamically using RAWS without hindering the operation of the Web services. The Web Service Offering Infrastructure (WSOI) [27] demonstrates the usability of the Web Service Offering Language (WSOL) in the management and composition of Web services. Birman et al. [5] propose a framework that focuses

on reliable messaging, speedy recovery, and failure protection in order to provide high availability, fault tolerance and autonomous behavior for Web service systems. We propose a framework for managing all the components required to host a Web service and maintain an agreed-upon level of service performance. Our research mainly addresses the self-healing, self-configuring and self-optimizing aspects of autonomic management.

The Web Service Distributed Management (WSDM) [19] standard specification consists of two parts: Management Using Web Services (MUWS) defines how an IT resource is connected to a network and provides manageability interfaces to support local and remote control; Management of Web Services (MOWS) specifically addresses the management of Web service endpoints using standard Web service protocols. By implementing Web service interfaces for the autonomic managers, the AWSE framework can be extended to demonstrate both MOWS and MUWS, and thereby support the WSDM standard specifications.

The IBM Architectural Blueprint for Autonomic Computing [14] defines a general architecture for building autonomic systems. Diao et al. [10] identify a strong correspondence between many of the autonomic computing elements described in the blueprint and those in control systems, and show how control theory techniques can be used in the implementation of autonomic systems. Tzialas and Theodoulidis [28] also use control theory techniques for system control. However, their approach to building autonomic systems is ontological, based on an extension of the Bunge ontology and the Bunge-Wand-Weber models. The ontological component models are captured using software engineering diagrams and the system is modeled as an organized whole, exhibiting a coherent overall behavior by controlling and managing the interactions between the components.
The feedback control loop in our model uses the concept of reflection and is controlled using the features and functionality of a database management system.

Chung and Hollingsworth [7] propose an automated cluster-based Web service performance tuning infrastructure in which one or more Active Harmony servers perform adaptive tuning, using parameter replication and partitioning to speed up the tuning process. The paper also presents and evaluates a technique for resource sharing and distribution that allows Active Harmony to reconfigure the roles of specific nodes in the cluster. The self-tuning feature is implemented using a controller that runs optimization algorithms to determine the proper values of the tuning parameters. Bennani and Menascé [4] present self-managing computer systems that incorporate mechanisms for self-adjusting the configuration parameters so that the
QoS requirements of the system are constantly met. Their approach uses analytic performance models with combinatorial search techniques to design controllers that run periodically to determine the best possible configuration for the system given its workload. We use policy-guided automatic tuning to achieve specified performance goals.

Farrell and Kreger [11] propose a number of principles for the management of Web services, including the separation of the management interface from the business interface and pushing core metric collection down to the Web services infrastructure. They use intermediate Web services that act as event collectors and managers. Our approach uses common tools provided by most database management systems, as well as reflective programming techniques, to incorporate self-awareness into the autonomic system.

3 AWSE FRAMEWORK

[Figure 1: Web services hosting site - service consumers interact through an SLA Negotiator and Site Manager with the HTTP server, application server, SOAP engine, Web services (WS1, WS2) and legacy/backend applications]

A Web services environment consists of multiple components including an HTTP server, application servers, database servers, and Web service applications. In our proposed architecture, we refer to a Site as the collection of components and resources necessary for hosting a Web service system provided by an organization (as shown in Fig. 1). Since these components may reside on multiple servers connected by a network, a site can span multiple servers. AWSE is an Autonomic Web Services Environment, that is, a system that is capable of self-management to ensure SLA compliance. SLAs are negotiated between a service consumer and the site's SLA Negotiator. The system automatically monitors performance and configures itself to ensure that the SLAs are met. This self-management is a continuous process whereby the system dynamically adapts to workload shifts and other changes in the environment.

In AWSE, we consider each component of the Web services environment to be autonomic, that is, self-aware and capable of self-configuration to maintain a specified level of performance. The core management software in the framework is the Autonomic Manager (as shown in Fig. 2) that manages an associated component such as the DBMS or the HTTP server. The component, thus augmented with an autonomic manager, becomes an Autonomic Element that is capable of self-management. These autonomic elements form the basis of the AWSE framework, and collaboration among the managers provides overall system management.

3.1 Key Concepts

The autonomic manager, the key building block of the AWSE framework, implements the management capabilities of a resource, or a set of resources, called managed elements. A managed element may be any type of resource, hardware (e.g. storage units, servers) or software (e.g. DBMS, custom application), that is observable and controllable. Each managed element is augmented with an autonomic manager that performs the self-management tasks. The autonomic manager, as described in the IBM Architectural Blueprint for Autonomic Computing (shown in Fig. 2), typically consists of the following components: a Monitor (including sensors) that collects the performance data; an Analyzer that uses the performance data in light of system policies and goals to determine whether or not the system is performing properly; a Planner that determines, when necessary, appropriate actions to take; and an Executor (including effectors) that executes the suggested action(s) to control the managed element. The autonomic manager requires a storage facility to manage and provide access to the system knowledge. Finally, it requires communication mechanisms and standard interfaces for the internal components to share information, as well as for autonomic managers to communicate amongst themselves.

The autonomic manager is typically implemented as a feedback control loop, sometimes referred to as the MAPE loop, encompassing the Monitor, Analyze, Plan and Execute components. Central to all of the MAPE functions is knowledge about the system, such as performance data reflecting past, present and expected performance, system topology, negotiated Service Level Agreements (SLAs), and policies and/or rules governing system behavior. Data management is therefore an important aspect of the autonomic manager: a potentially large amount of data is collected, processed, stored, and queried by the different components of the autonomic manager.

[Figure 2: Autonomic Manager]

These capabilities are best provided by a database management system. Our autonomic managers are built using a novel methodology which we call a reflective and database-oriented approach. Our approach relies heavily on common database management system (DBMS) concepts and tools such as triggers, user-defined functions and stored procedures. Not only does the DBMS store the system knowledge, performance data, policies and other information, it also serves as the central control of the autonomic manager, implementing the control flow logic for the manager. Database tables augmented with triggers that invoke stored procedures orchestrate the components of the MAPE loop. This reliance on the DBMS for storage and control is what makes our approach "database-oriented".

A reflective system is one that exposes a representation of its functional status, allowing this representation to be inspected and modified while the modifications are "reflected" (effected) in the internal workings of the system [18]. In our approach, the self-representation of a component is stored in database tables; we consider the autonomic element's controllable features, along with their current status, to be the component's self-representation. When a configuration change is warranted, the self-representation table is updated. The update in turn fires a DBMS trigger that invokes a stored procedure to effect the configuration change within the component.

3.2 Autonomic Manager

The general reflective and database-oriented framework for building autonomic managers is applied to all autonomic managers in AWSE. Each component exposes a self-representation that can be modified either by the component itself or by other autonomic managers.
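The reflective mechanism just described — configuration changes driven by updates to a stored self-representation, with a trigger invoking the effector — can be sketched in plain Python. This is a toy stand-in for the DBMS machinery, not the paper's implementation; all class and function names are illustrative.

```python
# Toy sketch of the reflective approach: the self-representation (here a
# dict) holds the controllable features, and an update hook plays the role
# of the DBMS trigger + stored procedure that effect the change.

class SelfRepresentation:
    """Stores controllable features; every update fires an effector callback."""

    def __init__(self, initial, effector):
        self._state = dict(initial)
        self._effector = effector  # stands in for the update trigger

    def __getitem__(self, key):
        return self._state[key]

    def update(self, key, value):
        old = self._state[key]
        self._state[key] = value
        # "Trigger": reflect the change into the managed element
        self._effector(key, old, value)


applied = []

def buffer_pool_effector(param, old, new):
    # In AWSE this would call the DBMS API (e.g., resize the buffer pool);
    # here we just record the reconfiguration that would be applied.
    applied.append((param, old, new))

rep = SelfRepresentation({"pool_size": 1000}, buffer_pool_effector)
rep.update("pool_size", 2000)   # a planner writes to the self-representation
```

The point of the pattern is that components never reconfigure the managed element directly; they only write to the self-representation, and the "trigger" guarantees the real system stays in sync with it.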
Each autonomic manager implements the MAPE loop via DBMS triggers, stored procedures and user-defined functions. The managers can share a single DBMS, or each can use its own DBMS, in which case knowledge such as policies and system topology is replicated across the DBMSs as necessary.

In the following subsections we describe our framework for the construction of autonomic managers and outline how the various components are implemented using our reflective and database-oriented approach. For illustrative purposes, we detail two examples of the autonomic managers employed in the AWSE environment, namely an autonomic manager for the buffer pool of a database management system, and an autonomic manager that controls the pool of database connections for a Web service.

The buffer pool of a database management system (DBMS) is a key resource for performance in a database system. The DBMS buffer pool acts as a cache for all data requested from the database. Given the high cost of I/O accesses in a database system, it is important for the buffer pool to function as efficiently as possible, which means dynamically adapting to changing workloads to minimize physical I/O accesses. A simple autonomic manager for the DBMS buffer pool involves sizing the buffer pool to minimize physical I/Os.

For Web services that make use of a database system, establishing a connection with the DBMS can be very slow. Most commercial products, such as WebLogic [31] and WebSphere [30], offer connection pools as an efficient solution to this problem. A connection pool is a named group of identical connections to a database that are created when the application server starts up. Web services borrow a connection from the pool, use it, and then return it to the pool. Besides addressing the aforementioned performance issue, a connection pool also allows us to limit the number of connections to the DBMS to ensure overload protection, and to allocate the connections among different workloads to offer differentiated QoS.
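The borrow/return discipline and the per-workload allocation described above can be illustrated with a minimal sketch. This is not the paper's code: connections are abstract ids rather than real DBMS handles, and all names are our own.

```python
# Illustrative bounded connection pool: a fixed number of connections is
# partitioned among workload classes, giving both overload protection
# (requests beyond the cap are rejected, not queued) and differentiated QoS.

from collections import deque

class ConnectionPool:
    def __init__(self, allocations):
        # allocations: workload class -> number of dedicated connections
        self._free = {cls: deque(f"{cls}-conn{i}" for i in range(n))
                      for cls, n in allocations.items()}
        self.rejected = 0  # the "rejection rate" metric tracked by the monitor

    def borrow(self, cls):
        if not self._free[cls]:
            self.rejected += 1        # overload protection: reject, don't grow
            return None
        return self._free[cls].popleft()

    def release(self, cls, conn):
        self._free[cls].append(conn)


pool = ConnectionPool({"gold": 2, "bronze": 1})
c1 = pool.borrow("gold")
c2 = pool.borrow("gold")
c3 = pool.borrow("gold")   # class allocation exhausted -> rejected
```

In AWSE it is the planner (Section 3.2.3) that would periodically recompute the `allocations` map; here the allocation is fixed for simplicity.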
We describe an autonomic manager that dynamically adjusts the number of connections dedicated to each type of workload to meet response time goals. We describe the various components of the autonomic manager using the DBMS buffer pool and connection pool managers as examples. The examples are simplified for the purpose of illustrating and validating the framework. In a realistic system, the amount of data collected and stored would typically be far greater than illustrated in these simple examples, and more complicated algorithms than the ones used in our experiments may be necessary for the self-tuning logic.

3.2.1 Knowledge

The MAPE loop requires knowledge about the system topology, performance metrics, component-based and system-wide policies, and the expectations, or system goals. Knowledge used by
the MAPE loop is stored in a set of database tables that can be accessed internally by the autonomic element, or externally by other autonomic managers via standard interfaces. For our example buffer pool autonomic manager, the basic system knowledge is represented by the set of tables as shown in Fig. 3. These tables include BP_Perf_Data (the performance data for the buffer pools), Self-Representation (the current configuration settings for parameters affecting the buffer pools), Analyzer_Result (the results from the analyzer), Goals (performance expectations for the buffer pools) and Policy (policies governing the buffer pools). Similar tables are defined to store the data relevant to the connection pool autonomic manager.
BP_Perf_Data: timestamp (PK), numlogicalreads, numphysicalreads, datawrites, indexwrites, asynch_datawrites, async_indexwrites, asyncreads, physicalreadtime, physicalwritetime, hit_rate
Self-Representation: manager_id (PK), pool_size, change_pg_thresh, dlock_chk_time, lock_list, max_locks, io_cleaners, sort_heap, goal
Analyzer_Result: manager_id (PK), timestamp (PK), result
Policy: manager_id (PK), policy_name (PK), policy_spec
Goal: manager_id (PK), goal_type (PK), goal_value

Figure 3: Database tables for DBMS buffer pool autonomic manager

The performance data for a managed element is collected by sensors and is stored in database tables specifically defined to accommodate the data required by the managed element. The autonomic manager is most efficiently implemented by extracting a subset of relevant data from the set of collected data. This requires the identification of the problem indicators for the managed element, that is, identifying which pieces of data are most indicative of the root cause of a problem, and which data will be most affected by changes to the system. This data is typically identified by experts, or is discovered by way of experimentation. For the buffer pool autonomic manager, the data that most reliably predicts and most accurately depicts potential problems related to buffer pool performance is stored in the table BP_Perf_Data (see Fig. 3). It includes information about the types of I/O required by the database (data versus index reads, physical versus logical reads, asynchronous versus synchronous I/O) and, most notably, the buffer pool hit rate (the likelihood of finding a requested page in

the buffer pool), which is, in most cases, a good indicator of buffer pool performance.

The self-representation of the autonomic manager reflects the status of the controllable features of the autonomic element. This information is stored in the Self_Representation table. A number of DBMS configuration parameters are related to buffer pool usage, including the buffer pool size, the size of the sort area, and the number of asynchronous processes that can be spawned for pre-fetching data or for writing dirty pages back to disk. These parameters are the controllable features of the buffer pool, and modifying their values affects the efficiency of the buffer pool. In our approach, these parameters and their current values form the self-representation of the DBMS buffer pool. The Self-Representation table is initially populated using an SQL query to the DBMS catalog tables that store the DBMS configuration information.

Goals and policies for the components are stored in the Goal and Policy tables respectively. Buffer pool goals are typically specified in terms of hit rate and/or response time goals. Policies describe the rules used to adjust buffer pool sizes. For example, in our simple DBMS buffer pool autonomic manager, a hit rate of less than 80% triggers an increase in the buffer pool size. For the connection pool autonomic manager, the two metrics tracked and stored are the rejection rate and the response time. These metrics are indicative of the efficiency of the connection pool. The controllable feature of this manager, and hence its self-representation, is the current number of connections to the database. The goal of the connection pool is expressed in terms of response time.

3.2.2 Monitor/Sensor

The implementation of the monitor for a managed element depends upon the type of interface and/or the instrumentation provided by the element.
Commercial DBMSs typically provide several APIs and various monitors that can be used for monitoring the performance of the system [6]. The sensors are responsible for extracting relevant performance data for the managed element. Depending upon the autonomic element, the sensor may extract the data from log files, run monitoring tools associated with the element, or it may use an API to collect the data. The rate of data collection and the detail of collection may vary over time, depending upon the needs of the manager. The monitor component of the MAPE loop is invoked by the sensors. It is used to filter and correlate sensor data and insert the data into one or more relational database tables specific to the element. Therefore, the monitor component must be customized to the needs of the autonomic manager. We combined the monitor and the sensor as a single component to reduce the overhead. An application program represents the monitor/sensor
component in our implementation and uses the monitoring APIs of the DBMS to retrieve the data relevant to buffer pool performance. The program collects the data periodically and, since we use raw data for analysis, it simply inserts that raw data into the BP_Perf_Data table. The frequency of data collection is a parameter passed to the monitoring utility. The monitor/sensor for the connection pool autonomic manager retrieves the rejection rate and the response time by monitoring the requests to the Web service.

3.2.3 Analyzer/Planner

The analyzer and planner modules of the MAPE loop are custom-built components specific to the autonomic element. The analyzer is responsible for examining the current state of the system and flagging potential problems. The planner module determines what action(s) should be taken to correct or circumvent a problem. These components may be policy-driven, or they may require complex logic and/or specialized algorithms. In our example autonomic managers, the analyzer defines the logic to examine performance data in light of the defined policies and goals for the buffer pools and the connection pool, and determines whether or not the component is meeting the expectations defined by the component goals. The planner implements the algorithm(s) that define the adjustment(s) required to achieve the expectations for the managed element. For the buffer pool example, the planner simply increases the buffer pool size by 1000 4K pages to achieve better performance. For the connection pool example, the planner uses a Queuing Network Model (QNM) that incorporates a feed-forward approach to predict the number of connections that will satisfy the response time goals. In our approach, a multi-class Mean Value Analysis algorithm [16] is employed to estimate the workload response time for multiple classes of workloads. Both the number of concurrent requests and the think time for each workload class are acquired from the workload monitor.
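The response-time estimation step can be sketched as follows. The paper cites an exact multi-class MVA algorithm [16]; the sketch below instead uses the simpler Schweitzer-style approximate MVA for a single queueing station, which conveys the same idea — estimating per-class response times from the population (concurrent requests), think time, and service demand of each class. The function name, interface and parameter values are our own, not the paper's.

```python
# Approximate (Schweitzer) multi-class MVA for a closed model with one
# queueing station. Inputs are per-class lists: population N_c, think time
# Z_c, and service demand D_c; output is the per-class response time R_c.

def approx_mva(populations, think_times, demands, iters=2000):
    k = len(populations)
    # Initial guess: each class's customers split evenly between queue/think.
    q = [populations[c] / 2.0 for c in range(k)]
    r = [0.0] * k
    for _ in range(iters):
        for c in range(k):
            # Arrival-theorem approximation: an arriving class-c customer
            # sees the whole queue minus 1/N_c of its own class's share.
            seen = sum(q) - q[c] / populations[c]
            r[c] = demands[c] * (1.0 + seen)
        # Throughput from the response-time law, then updated queue lengths.
        x = [populations[c] / (r[c] + think_times[c]) for c in range(k)]
        q = [x[c] * r[c] for c in range(k)]
    return r

# One customer, no think time: no queueing, so R equals the service demand.
r = approx_mva([1], [0.0], [0.5])
```

A planner could evaluate such a model for each candidate connection allocation and keep the cheapest allocation whose predicted response times meet the goals.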
When the workload changes, the planner performs an exhaustive search to generate the best allocation plan that meets the response time goals, and the connections are re-allocated among the different classes of workloads accordingly. This is feasible in our case because of the small number of workload classes and connection pools; if search efficiency is required, a heuristic-based search [21] can be considered. The analyzer and the planner for both the connection pool and DBMS buffer pool autonomic managers are implemented as user-defined functions and are thus stored in, and accessible from within, the DBMS.

3.2.4 Execute Module/Effectors

The concept of reflection is used in our approach to implement the effectors for an autonomic manager. The self-representation of the system

embodies the current configuration settings for the managed element, representing the features of the managed element that can be controlled. The self-representation for the DBMS buffer pools includes the current settings of tunable parameters such as the size of the buffer pool, the number of I/O prefetchers, and the number of I/O cleaners (threads that asynchronously write pages back to disk). The value of each parameter is adjustable and, when changed, affects the system performance. For the connection pool, the self-representation is simply the number of connections in the connection pool.

The self-representation information for a managed element is stored in the Self_Representation table. An update trigger on the value attribute in this table is used to implement the effectors, that is, the mechanisms that effect change to the managed element. When a change is made to the self-representation, the trigger initiates the execute module, which invokes the effector to make the actual configuration change(s). In the DBMS buffer pool example, the trigger calls external code that uses the DBMS API to change the buffer pool size.
Figure 4: Autonomic Manager for Buffer Pools

3.2.5 Logic Flow

The MAPE loop is implemented as a feedback control loop that repeatedly monitors the component, analyzes its status, and makes the adjustments necessary to maintain a predefined level of performance. The control of the feedback loop in our approach is implemented largely by database triggers defined on the database tables that store the system knowledge. The logic flow is shown in Fig. 4. We describe the flow of control using the DBMS buffer pool autonomic manager, beginning with the sensors. The sensors periodically collect data and pass it to the monitor, which inserts the data into the performance data tables. An insert trigger defined on the performance data table invokes the analyzer module whenever new data arrives. Depending on the frequency of data collection, the trigger may be modified to fire only periodically rather than every time new data is inserted. The trigger on the BP_Perf_Data table is defined (using an IBM DB2 database) as follows:
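A minimal sketch of such an insert trigger in DB2 SQL, assuming the analyzer is exposed as the user-defined function AnalyzeBPData; the bp_name column is an illustrative assumption about the BP_Perf_Data schema:

```sql
-- invoke the analyzer UDF each time the monitor inserts new
-- buffer pool performance data
CREATE TRIGGER BP_ANALYZE
  AFTER INSERT ON BP_Perf_Data
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  VALUES AnalyzeBPData(N.bp_name)
```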

UbiCC Journal – Volume 3




The analyzer code may be implemented as a stored procedure, a user-defined function, or an external program that can be called by the database trigger. In our example, the analyzer code is defined as a user-defined function called "AnalyzeBPData". If the analyzer detects a problem, it places a result or a notification in the Analyzer_Result table in the database. An insert trigger on the Analyzer_Result table calls the planner module whenever the result indicates that some action may be required. The planner determines the appropriate action to take and updates the appropriate data in the Self_Representation table to initiate the reconfiguration. The update trigger on this table signals the execute module to take the suggested action. In the IBM blueprint, the execute module effects change to the monitored element by way of the effectors; that is, it makes changes to the configuration of the managed element. In our reflective approach, the planner makes a change to the managed element's self-representation, and an update trigger defined on this table acts as the effector, the mechanism that actually makes the change to the managed element. The implementation of the routine that makes the change depends upon the nature of the managed element. Changes may be implemented via the element's API, or they may involve updating configuration files and possibly restarting the component.
public interface Performance {
  // retrieves a list of data available for the component
  public Vector getMetaData();
  // retrieves the most recent performance data
  public Vector getCurrentData();
  // returns a specified part of the most recent performance data
  public Vector getData(Vector params);
}

public interface Goal {
  // retrieves a list of goals that can be set for the component
  public Vector getMetaData();
  // retrieves the current goal for the component
  public Double getGoal(String goalType);
  // sets a goal for the component
  public Boolean setGoal(String goalType, Double value);
}

Figure 5: Management Interface Specifications

3.2.6 Communication Interfaces

Two management interfaces, shown in Fig. 5, are defined for each autonomic element: the Performance Interface and the Goal Interface. The Performance Interface exposes methods to retrieve, query and update performance data. Each element exposes the same set of methods, but the actual data each provides varies. Meta-data methods allow the discovery of the type of data that is stored for each element. The Goal Interface provides methods to query and establish the goals for an autonomic element, thus allowing external management of a component. Meta-data methods promote the discovery of associated goals, and additional methods allow the retrieval of current goals.

3.3 AWSE Management Hierarchy

As mentioned previously, AWSE is composed of components that are autonomic. However, although a component may appear to be performing well in isolation, it may not be functioning efficiently in an integrated environment with respect to the overall system objectives. An autonomic environment requires some control at the system level to achieve system-wide goals. Each component (such as the HTTP server or the database server) in the AWSE framework may require one or more autonomic managers for monitoring service provisioning and controlling different parts of the component. AWSE is thus composed of a hierarchy of communicating and cooperating autonomic managers. The higher-level managers in the hierarchy query the managers at the lower levels to acquire current and past performance statistics, consolidate the data from various sources, and use predefined policies and goals to ensure satisfactory management of the lower-level managers. Our proposed architecture for AWSE, with its communication interfaces, is shown in Fig. 6.

Figure 6: AWSE Architecture

At the top of the component manager hierarchy is the AWSE Site Manager (the highest-level autonomic manager in AWSE), which manages the overall performance of the site and ensures that SLAs are satisfied. It also provides external communication to allow remote and system-wide management of multiple sites within an organization. Autonomic managers, therefore, must be able to communicate to share information. In our approach, communication is achieved via standard Web service interfaces exposed by the autonomic managers.

4 VALIDATION

We demonstrate two of the autonomic managers to illustrate the validity of our AWSE architecture;



one for the buffer pool of a database management system, and the other controlling the connection pool of database connections for a Web service. Our prototype AWSE implementation consists of a single site containing an HTTP server, an application server, a Web service application, and a DBMS that is accessed by the Web service. As described in Section 3, the negotiated SLAs are translated into system-level goals that are monitored and managed by the Site Manager. The Site Manager primarily sets the goals for the autonomic managers of the components of the site, such as the HTTP server, application server, Web services, and DBMS. The autonomic managers of these components, in turn, set individual goals for the autonomic managers of their internal components. For example, the DBMS autonomic manager sets a goal for the buffer pool autonomic manager, and the Web service autonomic manager sets a goal for the connection pool autonomic manager. The strategy for mapping the SLAs to system-level goals and distributing the goals to the lower-level autonomic managers is beyond the scope of this paper; these issues are currently being researched as future extensions to the AWSE framework. Our current implementation assumes predefined response time goals for the buffer pool and connection pool autonomic managers. The Web service in our prototype AWSE provides two methods: one represents an Online Transaction Processing (OLTP)-like workload (short transactions that retrieve small amounts of data), and the other represents an Online Analytical Processing (OLAP)-like workload (longer-running queries accessing large amounts of data). We call these two Web service methods OLTP and OLAP, respectively. Each Web service method invocation queries data from a specific database table. A client application (running on another machine) simulates users to generate interactive workloads for the Web service according to a given ratio of OLTP/OLAP calls.
“Workloads” referred to in our experiments are differentiated by the type of method invocation. Therefore, there are two workloads; an OLTP workload which is comprised of all calls to the OLTP Web service method, and an OLAP workload which corresponds to all calls to the OLAP method. Response time goals are defined individually for each workload. We assume that queries to the OLAP Web service method are more important than those to the OLTP method to resolve priorities in meeting the response time goals. In this section we show how the response time goals are achieved autonomically for the Web service connection pool and the database management system. We use Apache HTTP server [2], Tomcat application server [3], an Axis-based Web service, and a DB2 Universal Database (UDB) [15] database server in our experimental setup. The HTTP server,

application server, and Web service reside on a single machine (IBM ThinkCentre, 2.8GHz, 1GB RAM), while the DBMS resides on a separate, identical machine.

4.1 Connection Pool

Fifteen connections are created and allocated to the Web service upon initialization of our experimental environment. When a call is made to the Web service, a connection is allocated to the call if one is available; otherwise, the call is queued until a connection becomes available. As such, the number of connections allocated to a workload can have a significant impact on the service response time, and an appropriate allocation of the connections among the workloads is important for achieving the response time goals. In our experiments, we define a 5000ms response time goal for the OLAP Web service method. The goal for the OLTP method is considered "best effort", so no specific goal is defined for this service. The load on the Web service is varied dynamically by modifying the number of concurrent users making calls to each method. Fig. 7 shows the dynamically changing workload. Initially, we test the case where all fifteen connections are shared, that is, no autonomic control is exerted. The results are shown in Fig. 8. In this case, we see that the defined response time goal for the OLAP Web service call is violated at some points due to the competition for connections between the two methods. We also observe that the response time of the OLAP service is more sensitive to the OLTP workload variations than to its own (comparing with the workload variation in Fig. 7), while the OLTP service call's response time is sensitive to increases in both types of workload. The results show that 42% of the completed requests have response times greater than the 5000ms goal; on average, the requests violating the goal exceed it by approximately 20%. To meet the response time goals, the autonomic manager partitions the fixed number of connections among the different workloads to reduce the competition.
Separate queues are used in a First Come First Served (FCFS) manner to buffer database requests of different workload classes.
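The per-class FCFS queuing scheme can be sketched as follows. This is an illustrative in-memory model of the partitioned pool; the class name, connection stand-ins and method names are ours, not the paper's code.

```python
import queue

class ClassedConnectionPool:
    """Connections partitioned among workload classes, each class buffering
    its requests in its own FCFS queue."""

    def __init__(self, allocation):
        # allocation: class name -> number of connections, e.g. {"OLAP": 10}
        self.free = {cls: queue.Queue() for cls in allocation}
        for cls, n in allocation.items():
            for i in range(n):
                # stand-in for a real JDBC connection object
                self.free[cls].put(f"conn-{cls}-{i}")

    def acquire(self, cls):
        # blocks in FCFS order until a connection of this class is free
        return self.free[cls].get()

    def release(self, cls, conn):
        self.free[cls].put(conn)
```

Re-allocation between classes would amount to draining connections from one class's queue and adding them to another's when the planner emits a new plan.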


Figure 7: A dynamic workload



Figure 8: Shared connections

Using the workload shown in Fig. 7, we ran the experiment under the control of the autonomic manager. The connection allocation schemes used by the autonomic manager are shown in Fig. 9; as mentioned earlier in this section, they give greater importance to the OLAP workload. We see that the number of connections allocated to OLAP Web service calls increases gradually in response to the increases in the OLAP workload.

Figure 9: Connection allocations

The resulting response times are shown in Fig. 10. The surge in the OLTP response time between times 60 and 72 is caused by the heavily biased allocation of 14 connections to the OLAP workload and only 1 connection to the OLTP workload during a period of heavier load (20 OLAP users and 15 OLTP users). This allocation results from the priority policy used by the autonomic manager to mitigate violation of the response time goal of the OLAP service. We observe that the response time for the OLAP service hovers close to the response time goal of 5000ms up to time 120. Up to this time, 13% of the completed requests violated the goal, but by less than 4% of the goal. After time 120, the system adopts an extreme allocation scheme in which all the connections are allocated to the OLAP service. Even so, the response time of the OLAP service remains far above the 5000ms goal. This is caused by the heavy OLAP workload of 30 users shown in Fig. 7. We therefore conclude that 15 connections are not sufficient to satisfy such a high (more than 85% OLAP) workload.

Figure 10: Adaptive response times

4.2 Buffer Pool

A single buffer pool is used to cache the database data for both the OLTP and the OLAP services. The goal for the buffer pool is to maximize the hit rate, thus minimizing physical I/O. Given the nature of the OLAP Web service workload, which involves mainly sequential scans of the table, the hit rate increases only when the entire table fits in the buffer pool. In this experiment, we begin with a pure OLAP workload and observe how the system adjusts the buffer pool size to achieve the response time goal. Once this goal has been reached, we introduce a second, OLTP, workload, which interferes with the OLAP performance. We expect the system to adjust the buffer pool size further to accommodate both workloads. For the purpose of this experiment, we assume that there is sufficient memory in the system to satisfy the required buffer pool changes.

The OLAP service call scans a table that is close to 30MB in size, thus requiring approximately 7700 4K pages to accommodate the table in the buffer pool. We start with a buffer pool size of 1000 4K pages. The system monitors the hit rate every 20 seconds and increments the buffer pool size by 1000 pages whenever the hit rate is found to be below 80%. Fig. 11 shows the autonomic adjustment of the buffer pool size, and Fig. 12 shows the corresponding changes in the buffer pool hit rate for the different buffer pool sizes. In Fig. 11, we observe that the buffer pool reaches 8000 4K pages, enough to hold the entire table, at around time 140. Fig. 12 shows that from around the same time (140s) the hit rate remains at 100% and no further adjustment is necessary. At time 180, we introduce the OLTP workload, which competes with the existing OLAP workload for buffer pages. As a result, we observe that the hit rate drops dramatically to 0, and the tuning process is invoked again. The tuning process completes when the buffer pool reaches 10000 pages, which is large enough to accommodate both tables. Fig. 13 shows the resulting reductions in response times for the OLAP and OLTP service calls. The spikes are caused by the overhead of reconfiguring the buffer pool size.

Figure 11: Buffer pool size adjustments

Figure 12: Buffer pool hit rate adjustments



Figure 13: Web service response time with autonomic buffer pool adjustment
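The buffer pool tuning loop used in this experiment can be sketched as follows. Here get_hit_rate and set_size are hypothetical hooks around the DBMS monitoring and configuration APIs, and each loop iteration corresponds to one 20-second sampling interval in the running system.

```python
def tune_buffer_pool(get_hit_rate, set_size, samples,
                     start_pages=1000, step_pages=1000, goal=0.80):
    """Grow the buffer pool by a fixed step whenever the sampled hit rate
    falls below the goal (one sample per monitoring interval)."""
    size = start_pages
    set_size(size)
    for _ in range(samples):
        if get_hit_rate(size) < goal:  # hit rate observed at the current size
            size += step_pages
            set_size(size)
    return size
```

With the roughly 7700-page OLAP table, the loop stabilizes once the pool is large enough for the whole table to be cached, matching the plateau in Fig. 11.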

Autonomic systems are deemed indispensable for managing the growing complexity of information technology. Web services require an adaptive, autonomic management system to satisfy widely varying workloads and maintain agreed-upon service levels, enabling their use in e-commerce. We propose AWSE, which builds on the IBM architectural blueprint and provides a novel framework for autonomic Web services. The core component of AWSE is the autonomic element, which is essentially any component within the Web service hosting site augmented with an autonomic manager, thus making the component self-managing. We present a design and implementation model for the autonomic manager, as well as a coordination framework and the interfaces of autonomic managers, for the management of a Web service hosting site. Our approach exploits the powerful capabilities of a database management system for system control as well as for managing and storing the vast array of knowledge required by an autonomic system. The experimental results presented in this paper show that goal-oriented, reflection-based managers are effective in achieving autonomic resource management. The manager automatically controls an element's resources to meet its predefined individual goals and, thereby, the system's overall performance goals. The self-managing components thus render a complete autonomic environment through the organizational hierarchy and communication of the autonomic managers. AWSE can extend the implementation of Web services to areas such as Enterprise Resource Management (ERM) [19], Enterprise Project Management (EPM) [33], and network and systems management [34]. The general framework for the implementation of autonomic managers can be adapted to design more complex heterogeneous systems by identifying the system components and elevating them to autonomic elements. Our current research focuses on implementing the WSDM [19] standards in AWSE to enhance standards-based communication among the autonomic managers. We are also working on the development of a site manager to control the hierarchy of managers and to perform goal distribution and assignment, which involves breaking down a high-level system goal into appropriate component-level goals for hierarchical management in AWSE. We also plan to extend the framework by incorporating SLA negotiation, automating Web service discovery, deriving appropriate models for different types of autonomic managers, and generating QoS statistics from the performance data to facilitate QoS-based service discovery.

REFERENCES

[1] Akamai Technologies: The Impact of Web Performance on E-Retail Success. White Paper (2004).
[2] Apache HTTP Server. Retrieved from: (2007).
[3] Apache Tomcat. Retrieved from: (2007).
[4] M. Bennani and D. Menascé: Assessing the Robustness of Self-Managing Computer Systems under Highly Variable Workloads. In Proc. of the Int. Conf. on Autonomic Computing (ICAC), NY, USA (2004).
[5] K. Birman, R. Renesse, and W. Vogels: Adding High Availability and Autonomic Behavior to Web Services. In Proc. of the Int. Conf. on Software Engineering (ICSE), Scotland, UK (2004).
[6] P. Bruni, N. Harlock, M. Hong, and J. Webber: DB2 for z/OS and OS/390 Tools for Performance Management. IBM Redbooks (2001).
[7] I. Chung and J. Hollingsworth: Automated Cluster-based Web Service Performance Tuning. In IEEE Conf. on High Performance Distributed Computing (HPDC), Hawaii, USA (2004).
[8] M. Clark, P. Fletcher, J. Hanson, R. Irani, M. Waterhouse, and J. Thelin: Web Services Business Strategies and Architectures. Wrox Press (2002). Retrieved: (2007).
[9] A. Dan, D. Davis, R. Kearney, A. Keller, R. King, D. Kuebler, H. Ludwig, M. Polan, M. Spreitzer, and A. Youssef: Web Services on Demand: WSLA-driven automated management. IBM Systems Journal, Vol. 43(1), pp. 136-158 (2004).
[10] Y. Diao, J.L. Hellerstein, G. Kaiser, S. Parekh, and D. Phung: Self-Managing Systems: A Control Theory Foundation. IBM Research Report RC23374 (W0410-080) (2004).
[11] J.A. Farrell and H. Kreger: Web Services Management Approaches. IBM Systems Journal, Vol. 41(2), pp. 212-227 (2002).
[12] J. Fuente, S. Alonso, O. Martínez, and L. Aguilar: RAWS: Reflective Engineering for Web Services. In Proc. of the IEEE Int. Conf. on Web Services (ICWS), San Diego, CA, USA (2004).
[13] A.G. Ganek and T.A. Corbi: The Dawning of the Autonomic Computing Era. IBM Systems Journal, Vol. 42(1), pp. 5-18 (2003).
[14] IBM: An Architectural Blueprint for Autonomic Computing. White Paper (2005).
[15] IBM DB2 Universal Database. At: http:// (2006).
[16] E.D. Lazowska, J. Zahorjan, G.S. Graham, and K.C. Sevcik: Quantitative System Performance: Computer System Analysis Using Queuing Network Models. Prentice-Hall, Englewood Cliffs, NJ (1984).
[17] R. Levy, J. Nagarajarao, G. Pacifici, M. Spreitzer, A. Tantawi, and A. Youssef: Performance Management for Cluster-based Web Services. In Proc. of the IFIP/IEEE Int. Symposium on Integrated Network Management (IM'03), Colorado Springs, USA, pp. 247-261 (2003).
[18] P. Maes: Computational Reflection. The Knowledge Engineering Review, Vol. 3(1), pp. 1-19 (1988).
[19] OASIS: An Introduction to WSDM. Committee Draft (2006). Retrieved from: m-1.0-intro-primer-cd-01.doc (2006).
[20] W. Powley and P. Martin: A Reflective Database-Oriented Framework for Autonomic Managers. In Proc. of the Int. Conf. on Networking and Services (ICNS), San Jose, CA, USA (2006).
[21] V.J. Rayward-Smith, I.H. Osman, C.R. Reeves, and G.D. Smith (Eds.): Modern Heuristic Search Methods. John Wiley and Sons, Canada (1996).
[22] A. Sahai, V. Machiraju, M. Sayal, A. Van Moorsel, and F. Casati: Automated SLA Monitoring for Web Services. In Proc. of the IFIP/IEEE Int. Workshop on Distributed Systems: Operations and Management (DSOM'02), Montreal, Canada. LNCS, Springer, Vol. 2506, pp. 28-41 (2002).
[23] A. Sahai, V. Machiraju, and K. Wurster: Monitoring and Controlling Internet based Services. In Proc. of the IEEE Workshop on Internet Applications (WIAPP), San Jose, CA (2001).
[24] SOAP Ver. 1.2 Part 1: Messaging Framework. At: (2007).
[25] M. Tian, T. Voigt, T. Naumowicz, H. Ritter, and J. Schiller: Performance Impact of Web Services on Internet Servers. In Proc. of the Int. Conf. on Parallel and Distributed Computing and Systems (PDCS), Marina Del Rey, USA (2003).
[26] W. Tian, F. Zulkernine, J. Zebedee, W. Powley, and P. Martin: Architecture for an Autonomic Web Services Environment. In Proc. of the Joint Workshop on Web Services and Model-Driven Enterprise Information Systems (WSMDEIS), Miami, Florida, USA (2005).
[27] V. Tosic, W. Ma, B. Pagurek, and B. Esfandiari: Web Services Offerings Infrastructure (WSOI) - A Management Infrastructure for XML Web Services. In IEEE/IFIP Network Operations & Management Symposium (NOMS), Seoul, South Korea, pp. 817-830 (2004).
[28] G. Tziallas and B. Theodoulidis: Building Autonomic Computing Systems Based on Ontological Component Models and a Controller Synthesis Algorithm. In Proc. of the Int. Workshop on DEXA, Prague, Czech Republic, pp. 674-680 (2003).
[29] UDDI Version 3.0.1. UDDI Spec Technical Committee Specification (2003). At: (2007).
[30] WebSphere: pe/docs/WebSphereMon.htm (2007).
[31] WebLogic: orm/index.html (2007).
[32] Web Services Description Language (WSDL) 1.1. At: (2007).
[33] L. Zhang, H. Cai, J. Chung, and H. Chang: WSEPM: Web Services for Enterprise Project Management. In Proc. of the IEEE Int. Conf. on Services Computing (SCC'04), Shanghai, China, pp. 177-185 (2004).
[34] L. Zhang and M. Jeckle: Web Services for Integrated Management: A Case Study. In Proc. of Web Services: European Conf. (ECOWS), Erfurt, Germany (2004).



Saowanee Schou
Center for Information and Communication Technologies
Technical University of Denmark, Denmark

ABSTRACT

This paper presents a conceptual service architecture for adaptive mobile location services, designed to be used in the open service environment of the next-generation wireless network. The designed service architecture consists of a set of concepts, principles, rules and guidelines for constructing, deploying, and operating mobile location services. The service architecture identifies the components required to build mobile location services and describes how these components are combined and how they should interact. As a means of exploring and validating the designed architecture, a scenario representing a novel mobile location service utilizing the architecture has been developed, and an illustrative case study of this service has been carried out to demonstrate the interactions between the different components in the architecture and to demonstrate its applicability.

Keywords: Service architecture, Mobile location services, Context-awareness.

1 INTRODUCTION

Mobile location services refer to mobile services that provide information based on the geographical location of people or objects. Mobile location services exploit location technologies for determining where the user is geographically located, thus making possible the provision of different services based on a given location. Since 2000, countless mobile location services have been launched on 2G and 3G networks in different parts of the world. The areas with the greatest attention to providing mobile location services are Asia, Western Europe and North America, each with very different technologies, business models and outcomes. Mobile location services have been taken up more enthusiastically by mobile users in Asia, especially in Japan and South Korea, than in other parts of the world. However, the overall usage of mobile location services is still not very high compared to other entertainment and messaging services. One of the main inhibitors to the adoption of the existing mobile location services, besides the lack of location methods that can provide high-accuracy location information in closed environments (e.g. inside buildings and undergrounds) and urban areas, is the lack of adaptability and of offerings tailored to different users' requirements in particular contexts of use [1][2]. The existing mobile location services are typically available locally within the network of specific operators, or available on different networks of different operators in the same country. Universal access is not supported by the existing mobile location services. The sharing of network resources and information resources (i.e. location information and user profiles) between different stakeholders is very limited or not possible. The way mobile location services are offered today is not compatible with the open service environment of the next-generation wireless network.

The next-generation wireless network is based on the functional integration and convergence of heterogeneous wireless access networks [3][4]. It includes not only cellular networks but also emerging wireless access networks such as WMAN, WLAN, WPAN, high-speed portable Internet, digital broadcasting networks and other forthcoming wireless technologies [5][6]. These wireless access networks will coexist to provide a variety of multimedia services via a common IPv6-based network (i.e. an all-IPv6 network) [4][5][6]. The service environment of the next-generation network will be open, and users will be able to access a mobile service regardless of geographical location, terminal model, access network, network operator and service provider¹ [3][4]. Service providers and content providers will be able to provide their services and content independently of the operators. Location and charging information can be transferred among networks and applications [4]. Both seamless roaming and universal access are expected to be achieved in the next-generation wireless network. Mobile terminals and networks will be multi-mode, operating at different frequencies and using a variety of wireless access technologies.

This paper proposes a conceptual service architecture for adaptive mobile location services, designed to be used in the open service environment of the next-generation wireless network. The architecture supports the provision of new-concept mobile location services that are not possible with the existing service architecture on the current mobile networks. The service architecture proposed in this paper makes possible mobile location services that allow users to access a service and content based on the location information of other users on the all-IPv6 network. For example, a location-based information service may be provided based on the location of other users at the global level, instead of based on the current location of the user himself as in the services available today. The proposed service architecture fits in a multi-supplier/provider/operator environment and allows the coexistence of a number of stakeholders performing various roles. The service architecture supports universal service access: end-users are allowed to access a service independently of their physical location and of the type of access network and type of terminal being used. One of the important features of the designed architecture is adaptability. Based on the analyses made in [1][2], the lack of adaptability is one of the inhibitors of the take-up of mobile location services.

¹ A service provider offers different kinds of mobile location applications and services (based on the location information obtained by a mobile location technology) via a mobile network to a user requesting the content (e.g. traffic information, weather information, restaurant lists, etc.) provided by the content provider.
By adding adaptability to the designed architecture, the service's behavior and content can be adapted to best fit the user's expectations, requirements and/or preferences in a particular context of use. This improves the quality of the mobile user experience and thereby the chances of success for mobile location services. The importance of user experience to the success of mobile location services, and the influence of the context of use (i.e. the environment where the service is used) on the formation of the user experience, are presented in section 2. Section 3 presents adaptation possibilities for mobile location services. Section 4 presents the user profile and the service and content profile, which are required in the process of service adaptation. The platform for context-based service adaptation is presented in section 5, and the conceptual service architecture for adaptive mobile location services is then proposed in section 6. A case study of a novel mobile location service utilizing the designed service architecture is made in section 7 as a means of exploring the developed

service architecture as well as demonstrating its applicability. Concluding remarks are given in section 8.

2 THE INFLUENCE OF USER EXPERIENCE AND CONTEXT OF USE

The term “user experience” is used more and more in discussions and articles instead of the term “usability”, as it is believed that context, emotions, expectations, and overall service processes are becoming more important than ever with regards to mobile services [7][8]. The user experience is the experience that the user gets when using a service in particular contexts of use [8]. Good user experience is one of the important factors in providing successful mobile services, as the users’ willingness to continue paying for a service depends on whether or not they get a good experience from using it [8][9]. The users’ expectation is the fundamental concept behind the user experience formation, and how good an experience the users get varies depending on how well the service matches their expectations, requirements and/or preferences in particular contexts of use [8][10]. If the service fails to live up to the user expectations and does not meet the user’s requirements, the trust towards the service is violated, which can lead to different emotions such as disappointment, anger and annoyance, and this will form a negative user experience [10][11]. In order to provide a mobile service with a positive user experience, the service must be adaptable to match the user expectations, requirements and/or preferences in particular contexts of use. In the case of a mobile location service, different users use a service at different places and times and in different situations. They access the service through different mobile networks using different location technologies, which provide different levels of data rate and different levels of accuracy. They want to accomplish different tasks using different terminals with different user interfaces. They use the service in different roles and with different social aspects. Using the same service in different contexts of use can result in significantly different levels of user experience [7][12]. 
For example, the service may appear amusing or annoying depending on how busy the user is. The quality of the user experience is likely to improve, and the service has a high chance of success, if the content of the service can be filtered based on a particular context of use and if the service behaviors can be adapted to match the current context of use. Figure 1 illustrates the relationship between the five perspectives of the context of use (users, tasks, technologies, physical environments and social environments), where the physical environment plays an important role in the context model of mobile location services.

UbiCC Journal – Volume 3


Figure 1: Generic context model of mobile location services.

User-service interactions take place in a particular physical environment (location) with particular social and cultural patterns (social environment), which may influence the user’s behaviors, expectations, preferences and requirements. The interactions between the user and the service are made through the technology context, e.g. the mobile terminal and network available at the user’s current location. The five perspectives of the context of use presented in figure 1 are described in the following.

Users: User refers to people or groups of people who interact with the service [13]. The purpose of designing the service is to fulfill the users’ needs and help them finish their tasks and reach their goals. It is therefore very important, when designing a mobile location service, to know who the target users are, what they want, how they want it, and where and in which situation they want it.

Tasks: Tasks are the activities undertaken to achieve a goal [13]. To achieve the goal, the user might need to accomplish several tasks. Knowing the current tasks of the target users gives the service designer the opportunity to predict the next task and the fundamental goals of the users. This opens many opportunities to design a successful service: coming up with new tempting tasks, reducing the number of annoying tasks, or making the completion of a task easier can make the service a success [8].

Technology: The technology context means the technologies involved in providing mobile location services. The technological context is one of the most important factors with regard to how the service will be experienced. Knowing the technology context gives a service developer the opportunity to know how the service should be designed and how the available technology can be used to improve the quality of the user experience in particular contexts of use.
Physical environment: The physical environment is an obvious factor which directly affects how the user will experience a mobile location service. For mobile location services, the ability to pinpoint the location of the user varies depending on the physical environment. High accuracy location information might not be available when the user is deep inside a building, or the accuracy might be degraded when the user is in a rural area. The design of a mobile location service should take these context attributes into account, and the service should provide alternative ways for the users to complete their task when the quality of the service is degraded because of the physical environment (e.g. an option to determine the user’s location manually).

Social environment: Social acceptance, the way people think about other people, has a profound effect on the ways people behave and think [8]. The social environment has a significant influence on the user’s adoption of a service [14]. The social acceptance of technology and its applications and services determines how and when it is used. Therefore, it is essential to know how certain technologies and services are perceived in the culture where they are supposed to be used, and what social rules apply in connection with the service usage.

3 ADAPTATION POSSIBILITIES FOR MOBILE LOCATION SERVICES

Context-based service adaptation can take place at five different levels: the technology level, service behavior level, user interface level, presentation level and content level [15]. The five levels of service adaptation are described in the following.

Technology level: At this level, the service is adapted to the technology context. For example, information is encoded for specific mobile terminals with different characteristics (e.g. display size and resolution, memory, CPU power, etc.) or for the transmission media (e.g. network bandwidth).

Service behavior level: The service behavior is adapted to the user’s tasks and physical environment.



For example, the parents define in their profiles (task related information) that a location notification for their child should be sent to them every hour on weekdays and that no location notification should be delivered at the weekend. The service behavior will be adapted based on this information, and the parents do not need to interact with the service every hour in order to check their child’s whereabouts. An example of service behavior adaptation based on the physical environment context is the zone alert feature of a tracking service, where an alert message is sent to specific persons when the user leaves a predefined zone.

User interface level: The user interface is adapted to the user’s tasks, the system in use (e.g. terminal and network), and the user’s physical condition. For example, the user interface is changed from a graphical user interface to a voice interface when a blind user is accessing the service or when a user is driving. The user interface may also be adapted to a child-friendly version when a child is accessing the service. Service adaptation at the user interface level requires that the terminal can support different kinds of user interfaces; otherwise adaptation at this level is limited or not possible.

Presentation level: The visualization of the service is adapted to the user, tasks, system in use (e.g. terminal and network), social aspects, and physical condition of the user. For example, the visualization of the service is changed based on social aspects (e.g. a mature look for European users and a colorful version for Asian users), or text information is presented in “large text” when elderly people are accessing the service [16].

Content level: At this level the content of the service is adapted to the current location, situation and user (e.g. age group, gender). For example, the system detects that the user is a child, and the service then adapts the content to an “easy to understand and child-friendly” version.
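To make the five levels concrete, the following is a minimal sketch of how a service might dispatch adaptation actions per level. All names here (AdaptationLevel, adapt_service, the context keys) are illustrative assumptions for this paper's conceptual levels, not part of any cited platform.

```python
# Hypothetical sketch: dispatching adaptation actions per adaptation level.
# The enum members mirror the five levels named in the text; the handler
# logic and context keys are assumptions for illustration only.
from enum import Enum, auto

class AdaptationLevel(Enum):
    TECHNOLOGY = auto()        # encode for terminal/network capabilities
    SERVICE_BEHAVIOR = auto()  # adapt to tasks and physical environment
    USER_INTERFACE = auto()    # e.g. switch from graphical to voice UI
    PRESENTATION = auto()      # visual style for user/social context
    CONTENT = auto()           # filter content by location/situation/user

def adapt_service(levels, context):
    """Apply the enabled adaptation levels to the given context of use."""
    actions = []
    if AdaptationLevel.TECHNOLOGY in levels:
        actions.append(f"encode media for {context['terminal']}")
    if AdaptationLevel.USER_INTERFACE in levels and context.get("driving"):
        actions.append("switch from graphical UI to voice UI")
    if AdaptationLevel.CONTENT in levels and context.get("age", 99) < 13:
        actions.append("serve child-friendly content version")
    return actions

print(adapt_service(
    {AdaptationLevel.TECHNOLOGY, AdaptationLevel.USER_INTERFACE},
    {"terminal": "small-screen handset", "driving": True}))
# -> ['encode media for small-screen handset', 'switch from graphical UI to voice UI']
```

A real platform would of course drive these branches from the user profile and the service and content profile rather than from hard-coded checks.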

Figure 2 gives an overview of service adaptation, illustrating why the service should be adapted, how, and to which context. Service adaptation can be activated when the context changes. The change of context can be a change of one or more perspectives of the context of use (e.g. change of user, terminal, location, etc.). The context-based service adaptation platform proposed in section 5 will minimize the need for interactions between the user and the service without taking overall control of the system away from the user. To support context-based service adaptation, the user profile and the service and content profile are required.

4 USER PROFILE AND SERVICE AND CONTENT PROFILE

Figure 2: Perspective of the context-based service adaptation for mobile location services.

The user profile is a conceptual entity representing a user’s unique lifestyle and the current context and situation surrounding the user. A user profile contains details of the user and his personal requirements in a form that can be used by the system to deliver the required behaviors [16]. In some proposals, e.g. in the MAGNET project [17], the user related information (i.e. the user profile in the MAGNET project) is maintained separately from the context information. This allows service adaptation based on the user related information alone, which results in an adaptive personalization service. If the context information is also applied for the adaptation, the result is a personalized and context-aware service. In other proposals the user profile contains information related to both the user and context attributes, e.g. the user profile management specified by the European Telecommunications Standards Institute [16], the user profile proposed in the ePerSpace project [18] and the user profile specified in the TINA service architecture [19].

The user profile in this paper is compatible with the user profiles proposed in ETSI, ePerSpace and TINA, where the user profile contains the user’s related information, the user’s personal requirements, and attributes of the current context of use. The information maintained in the user profile should be real-time and up to date. The user profile can be viewed and modified by the user and/or an agent of the user with the user’s permission. The information maintained in the user profile and used by the context-based service adaptation platform proposed in section 5 is the information related to the user, task, terminal, physical environment and social environment. These five groups of information are the fundamental part of the user profile, as they are unique for each individual user.
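As an illustration only, the five groups of information that make up the user profile could be represented as a simple data structure. The field names and the update method below are assumptions sketched for this paper's conceptual profile, not a specification from ETSI, ePerSpace or TINA.

```python
# Illustrative sketch: a user profile holding the five groups of context
# information named in the text. All field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user: dict = field(default_factory=dict)          # identity, preferences
    task: dict = field(default_factory=dict)          # current tasks and goals
    terminal: dict = field(default_factory=dict)      # technology context
    physical_env: dict = field(default_factory=dict)  # location, surroundings
    social_env: dict = field(default_factory=dict)    # role, culture, company

    def update_location(self, location):
        """Location data arrives from the location server, subject to the
        user's privacy rules on usage and sharing of location information."""
        self.physical_env["location"] = location

profile = UserProfile(user={"name": "Mai"}, task={"goal": "shopping"})
profile.update_location("Chatuchak market")
print(profile.physical_env["location"])  # -> Chatuchak market
```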
The groups are related to the current context of use, where the user is a part of the context as presented in section 2. The users create some parts of the user profile (e.g. user related information and task related information). The terminal related information is provided by the terminal itself, and the location related information is derived from the location information delivered by the location server upon request, subject to the user’s privacy rules regarding the usage and sharing of location information.

The service and content profile is a set of information related to a service and the content provided by the service. This profile is established and managed by the service and content providers and is maintained in the application and content server. It contains information about what the service can and cannot provide to the user, including information about adaptation possibilities.

5 CONTEXT-BASED SERVICE ADAPTATION PLATFORM

An adaptive mobile location service, in this paper, is a service that is able to adapt itself according to changes in the context of use. Not all services can be adapted or are able to adapt themselves; an adaptive mobile location service refers to one for which adaptation is in principle possible. In some cases, service adaptation requires that the mobile terminal has capabilities for supporting the adaptation (e.g. it supports different kinds of user interfaces and different input and output channels).

Two adaptation processes can be applied for real-time context-based service adaptation of mobile location services: self-adaptation and user-controlled self-adaptation [20]. In the self-adaptation process, the service adaptation is driven by a computer based on information collected by the system (e.g. the user’s behaviors, preferences and roles). Since the self-adaptation process happens without the user’s control, the outcome of the adaptation cannot be guaranteed to match the user’s expectations and requirements in a particular context of use. Another obvious problem of the self-adaptation approach is the lack of data protection or privacy, as the information related to the user is collected by the system without the user’s control. In the user-controlled self-adaptation process, the adaptation conditions (e.g. how the service should be adapted, when, where and under which conditions) are defined by the user. From the user experience point of view, the user should always have control over the behaviors of the service. As the self-adaptation process is controlled by the system without definitions from the user, it is not an appropriate approach for a service focusing on the user experience.

The service adaptation proposed in this paper relies on user-controlled self-adaptation, where the service adapts itself based on conditions defined by the user only. A service based on user-controlled self-adaptation only works if it is easy to understand and if the user knows how to define and control the service adaptation [21]. Mobile location services can be adapted to the contexts of use at five different levels, as presented in section 3. This section describes the process of context-based service adaptation based on the context attributes stored in the user profile together with the service and content profile. The context-based service adaptation for mobile location services proposed in this paper is based on three principles:

1. The user should always have full control of the service adaptation (the user can decide whether he wants adaptation at all or only at certain levels).

2. The service adaptation has to be transparent, meaning that the user should know that adaptation is taking place, and it should be in the form of suggestions whenever possible. For example, if the service is going to adapt the user interface from text to voice in order to fit the current context (e.g. when the user is in a driving car), the user should know that the adaptation is going to take place. The adaptation notification may be sent to the user in the form of an interaction message, and the user can choose whether he would like the adaptation to be performed at all. The user might turn out to be a passenger sitting in a taxi rather than the driver of the car.

3. The user should always be able to manually adapt the service behaviors. Manual adaptation should be allowed at the same levels as the adaptive behavior of the system.

Figure 3 shows the conceptual platform for service adaptation based on the context of use. This platform is placed at the application and content server, which is owned by the service provider. The components included in this platform are a change measurement unit, a trigger and an adapter, and the information required for the adaptation process is in the user profile and in the service and content profile. The change measurement unit detects changes in the attributes of the current context of use stored in the user profile, the trigger activates the service adaptation, and the adapter adapts the service to best fit the current context of use.



Figure 3: Context-based service adaptation platform for mobile location services, inspired by [22].

The service adaptation is triggered by any change or difference in the context attributes (stored in the user profile) between states S1 and S2 exceeding a specific threshold level. The threshold value (e.g. a pre-defined location, a pre-defined time, etc.) may be found empirically or set by the user beforehand. These threshold values indicate states where one or all of the technology, service behavior, user interface, presentation (or service visualization) and content levels no longer fit the user’s requirements or preferences in his current context of use. An adaptation can also be triggered manually by the user while he is using a service. Differences in the context attributes stored in the user profile between states S1 and S2 can occur, for example, when the attributes of the current context of use surrounding the same user change (e.g. a change of location or a change of social environment).

When an adaptation is triggered, the context parameters are sent to the decision engine. The decision engine checks whether the adaptation is necessary. If an adaptation is necessary, the decision engine selects an appropriate adaptation strategy. The rules for the service adaptation are then selected from the adaptation model, and the adaptation engine selects the appropriate methods and parameters. Furthermore, the adaptation engine chooses the adaptation levels of the service (i.e. technology, service behavior, user interface, presentation, or content levels) that will be adapted. The last step executes the adaptation, i.e. it activates the adapter. The adapter adapts the mobile location service by applying the chosen methods, parameter values and rules. The context-based service adaptation platform presented in this section is an important part of the service architecture for adaptive mobile location services proposed in the next section.
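The trigger and decision chain described for figure 3 can be sketched as follows. This is a minimal sketch under simplifying assumptions: context attributes are numbers or strings, numeric attributes fire the trigger when the S1-to-S2 difference exceeds a threshold, and the decision and adaptation engines are stand-in callables. None of these names comes from the cited platform.

```python
# Hypothetical sketch of the change measurement unit, trigger, and the
# decision/adaptation engine pipeline described in the text.
def context_changed(s1, s2, thresholds):
    """Change measurement unit: compare context states S1 and S2.
    Numeric attributes use a per-attribute threshold; any other change
    (e.g. a categorical attribute such as social environment) fires."""
    for key, limit in thresholds.items():
        a, b = s1.get(key), s2.get(key)
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            if abs(a - b) > limit:
                return True
        elif a != b:
            return True
    return False

def adaptation_pipeline(s1, s2, thresholds, decide, adapt):
    """Trigger -> decision engine -> adaptation engine -> adapter."""
    if not context_changed(s1, s2, thresholds):
        return None              # no trigger: nothing to adapt
    strategy = decide(s2)        # decision engine selects a strategy
    return adapt(strategy, s2)   # adapter applies methods and rules

result = adaptation_pipeline(
    {"speed_kmh": 0, "place": "home"},
    {"speed_kmh": 60, "place": "car"},
    {"speed_kmh": 30},
    decide=lambda ctx: "voice_ui" if ctx["speed_kmh"] > 30 else "none",
    adapt=lambda strategy, ctx: f"applied {strategy} at {ctx['place']}")
print(result)  # -> applied voice_ui at car
```

Per principle 2 above, a real platform would present the chosen strategy to the user as a suggestion before the adapter applies it.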
The main task of the conceptual platform for context-based service adaptation is to adapt a mobile location service to best fit the user’s expectations, requirements and/or preferences in a particular context of use. This is the way to improve the quality of the mobile user experience and thereby increase the chances of making a mobile location service successful.

6 SERVICE ARCHITECTURE

This section presents the conceptual service architecture for adaptive mobile location services, which is designed to be used in the open service environment of the next generation wireless network. An adaptive mobile location service refers to a mobile location service that is able to adapt itself according to changes in the context of use. The architecture supports a wide range of services and allows the provision of new-concept mobile location services that have not been possible on the current network. The architecture supports universal service access: end-users are able to access services independently of physical location, type of access network and type of terminal being used.



Figure 4: New conceptual service architecture for adaptive mobile location services on the next generation wireless network.

Figure 4 illustrates the components that form the service architecture; the descriptions and roles of the individual components are given in the following.

The service portal handles session management, request handling and authentication of subscribers, and manages the billing system. The service portal contains the “user billing profile” and the “service provider charging profile”. When the user accesses the service, the service usage is recorded and the billing and charging reports are updated in the user billing profile and the service provider charging profile, respectively. The user and the service provider can access and check their profiles at any time, regardless of geographical location, access network, terminal model and network operator. However, editing and deleting the profiles is not allowed. The user billing profile contains the actual information of the user that is required for billing management, such as the real name, real address, telephone number, credit card number and a list of subscribed services.

The service portal allows users (or subscribers) of different network operators to access the services of different service providers from anywhere on the all-IPv6 network, with the feasibility of managing the billing for the users and the revenue sharing between different stakeholders. The service portal is placed in every domain and is owned and managed by the network operator who administrates the domain. The service portal makes things easy for the users and the service providers, as the user can use different services without having to pay different bills to different service providers, and the service providers do not have to handle the billing management of different users but instead let the network operator handle this task.
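The bookkeeping the service portal performs on each access could be sketched as below. This is a hedged illustration only: the class, the flat-fee model and the record format are assumptions invented for this sketch, not part of the proposed architecture's specification.

```python
# Hypothetical sketch: the service portal recording a service usage and
# updating the user billing profile and the service provider charging
# profile. Names and the fee model are illustrative assumptions.
class ServicePortal:
    def __init__(self):
        self.user_billing = {}       # user id -> list of usage records
        self.provider_charging = {}  # provider id -> accumulated revenue

    def record_usage(self, user_id, provider_id, service, fee):
        """Called on each authenticated service access; appends to the
        user billing profile and credits the provider charging profile."""
        self.user_billing.setdefault(user_id, []).append(
            {"service": service, "fee": fee})
        self.provider_charging[provider_id] = (
            self.provider_charging.get(provider_id, 0.0) + fee)

portal = ServicePortal()
portal.record_usage("mai", "provider-a", "come-along-with-me", 0.50)
portal.record_usage("claus", "provider-a", "come-along-with-me", 0.50)
print(portal.provider_charging["provider-a"])  # -> 1.0
```

In the architecture, one such portal instance would exist per domain, operated by the network operator administrating that domain.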
The application and content server handles tasks ranging from providing the service and content to the user, to adapting the service behaviors and content to best fit the user’s requirements in a specific context of use in order to manage the user experience of the service. This server is owned and managed by the service provider. The context-based service adaptation platform plays an important role in adapting the service to best fit the current context of use, as previously presented in section 5. Adapting the service to best fit the current context of use is the approach to managing the user experience in the usage stage of the service, as the context plays an important role in defining how the user will experience the service [8]. The service adaptation is made based on the current context of use and the adaptation conditions defined by the user and maintained in the user profile, and on the service and content profile created by the service and content providers, as previously presented in sections 4 and 5.

Another task of the application and content server is to manage the user’s expectations, and this task is handled by the user expectation management platform. The main task of this platform is to inform the user about the service (e.g. if the service is temporarily unavailable, new features, a new service update, etc.) and to inform the user of the reason and further suggestions in the case that a service adaptation requested by the user cannot be made (e.g. 3D navigation is not possible because the user’s terminal does not support 3D display). This is the platform that controls the user experience by providing an understanding of what the user can and cannot expect from the service. This approach prevents the user from forming unrealistic expectations that the service cannot live up to.
The profiling management agent has been added to the developed conceptual service architecture to support the idea of one network, many services, where different service providers provide their services on a common IPv6 network (i.e. an all-IPv6 network) [2]. The profiling management agent allows users to access different services from anywhere on the all-IPv6 network while still being recognized by the service providers. The profiling management agent can be placed anywhere on the all-IPv6 network, and it is owned by a new stakeholder, which in this paper is called the “profiling broker”. The role of the profiling management agent is to maintain, manage and update the user profiles of registered users, as well as to handle authentication and authorization. This agent acts as a broker handling the usage and sharing of the information in the user profile according to the privacy rules defined by the users. Based on the open service environment concept of the next generation wireless network, the users should be able to access, edit or delete their user profiles anywhere and any time they desire [3].

The location server maintains the location information of all registered users in the local domain (the local domain is a collection of networks that are aggregated together based on factors such as geographic proximity or administrative control) and manages authentication and privacy control. This server is owned and managed by the network operator. There should be at least one location server in every domain. In the location server, the location of each individual user is stored in a profile called the user location profile. This profile maintains the actual location of the user, and the privacy rules for using and sharing this information are defined by the users. As a mobile user may travel into different domains administrated by different network operators, it should be possible to exchange the user location profile between different network operators according to the user’s mobility. For example, if the user moves from domain A to domain B, the transfer of the user location profile from domain A to B should be possible.
This requires new mechanisms for handling the user location profile transfer process and agreements between network operators.

The ways the components in the service architecture presented in figure 4 interact with each other vary depending on the type of service, the adaptation conditions, the privacy rules, and the context. However, the typical service requests and responses are as follows. The user accesses the service via the service portal. The service portal authenticates the service request, records the service usage and updates the user billing profile and the provider charging profile. The service request is forwarded to the application and content server. The application and content server asks for the user profile from the profiling management agent. The location server may send the most up to date location information of the user to the profiling management agent, depending on the rules for using and sharing location information defined in the user location profile. In the case that the service (e.g. a navigation service) requires real-time location information, the application and content server may request the real-time location information directly from the location server. The application and content server then delivers the requested service back to the user.

7 AN ILLUSTRATIVE CASE STUDY OF A NOVEL MOBILE LOCATION SERVICE

The demonstration of services based on the service architecture proposed in this paper is made through the “come along with me” scenario presented below. The scenario represents new possibilities of utilizing mobile location services, possibilities that are not feasible with the existing service architecture in the current service environment and network. With the new-concept mobile location service presented in the “come along with me” scenario, the user will experience a new way of communicating. Instead of saying where she is, the user may share her real-time location on a map (2D or 3D) or even invite other users to travel with her virtually. This opens up new ways of experiencing mobile location services for future mobile users, as presented in the following scenario.

Scenario: Come along with me

“Claus has an online friend in Thailand, Mai, who he normally chats with every day. On Sunday morning, Mai is ready to go shopping at Chatuchak weekend market and she wants to bring Claus along. She wants Claus to get a good impression of the largest weekend market in the world. She would rather bring Claus along in the virtual world than explain what the market is like and how she is going to get there. She also wants Claus to get a clear picture of how people spend their weekend in Thailand - the country where all the shops are open every day from early morning to late at night. She shares her location information with Claus and guides him on the way. Claus can see all the places that Mai has passed by. Claus can look at the real-time route map based on Mai’s location, and he can choose either a 2D or 3D map. When Mai passes important places, an information box explaining the place pops up on Claus’ terminal so that he can get more information about the place, and he can, at the same time, chat with Mai about the places he finds interesting and plans to visit together with Mai when he goes to Thailand next summer.
Claus can also choose to disable the pop up box and only let Mai guide him on the way”.

Chatuchak market (Bangkok, Thailand) is one of the largest markets in the world. The market is only open at the weekends, Saturday and Sunday from 7 a.m. until late. It covers over 35 acres (142,000 m²) and contains upwards of 15,000 stalls. It is estimated that the market receives between 200,000 and 300,000 visitors each day. This amazing market has nearly everything you could ever wish to buy and many things that you would never want to.



Mai finally arrives at the Chatuchak market by skytrain, and now Claus is ready to virtually discover the weekend market together with Mai. Mai and Claus get the feeling of traveling together in the virtual world, and they have common experiences to talk about. Claus gets a good user experience from traveling virtually with Mai at the weekend market, and he cannot wait until next summer to visit Thailand. He decides to book a ticket and fly to Thailand next weekend, and one of his destinations is Chatuchak.

Utilizing the conceptual service architecture for adaptive mobile location services on the next generation wireless network, the mobile location service presented in the “come along with me” scenario can be realized based on the following assumptions. The network is an all-IPv6 based network, meaning that all elements on the network can carry IPv6 addresses. The conceptual IPv6-based location method developed in [23] is adopted as a location method on the unified IPv6 network. Existing location methods that can provide high accuracy location information, such as GPS-based location methods, may also be used, especially in outdoor environments. The service environment is open, allowing the users to access the services regardless of location, access technology, terminal model, network provider and service provider. The business agreements on using and sharing the user profiles and locations of the users have been made between the different stakeholders involved in realizing the service. The new business model of how to share revenue between different stakeholders has been developed by these stakeholders. The billing management is handled by the network operator who owns the network domain where the user is accessing the service. The users have already registered for the service presented in the scenario. The users have registered for the adaptive functionality provided by the service. Adaptation conditions have already been defined by the users in advance. The user profiles have been generated beforehand through the mobile or web browser. The privacy rules have been defined beforehand by the users. The user may also adopt the privacy rules defined by the service provider. The user profiles of Claus and Mai are maintained in different profiling management agents located in different domains in different countries.

Based on the scenario and the assumptions described above, the interactions between the elements in the conceptual service architecture are demonstrated in figure 5.

Figure 5: Service request and response sequences based on the “come along with me” scenario.

The users are in different domains and different countries. The actual locations and the user profiles of the users are maintained in different location servers and different profiling management agents. The exchange of the required information, stored in different places owned by different stakeholders, is assumed to be made through open standards in the open service environment, based on the business agreements made between these stakeholders.



In figure 5, the service request and response sequences of the “come along with me” scenario start from the point where Mai invites Claus to come along with her in the virtual world.

1. Mai sends a request to the service portal in her current domain via the wireless access network available at her current location.

2. The service portal authenticates Mai’s request, records the use of the service for billing management, and forwards the service request to the application and content server.

3. The application and content server requests Mai’s user profile from profiling management agent 1.

4. Profiling management agent 1 provides the information that is necessary for the requested service to the application and content server. This information will be used for adapting the service behaviors based on the adaptation conditions set by Mai.

5. The application and content server forwards Mai’s request to the service portal of the domain where Claus is currently located.

6. The service portal sends an acknowledgement to Claus to ask whether he will accept the invitation to come along with Mai in the virtual world.

7. Claus accepts the invitation, and the acknowledgement is sent back to the service portal. The service portal records the use of the service, which may be used for billing management depending on the business model.

8. The service portal forwards the acknowledgement to the application and content server.

9. The application and content server requests Claus’s user profile from profiling management agent 2.

10. The profiling management agent checks the privacy rules set by Claus and forwards the required information in Claus’s user profile to the application and content server. This information will be used to adapt the content and service behaviors based on the adaptation conditions set by Claus.

11. The application and content server asks for the actual location of Mai, maintained in the user location profile in the location server. The use of Mai’s actual location is subject to the privacy rules defined in Mai’s user location profile.

12. The real-time location information of Mai is sent to the application and content server.

13. The content based on Mai’s location is delivered from the application and content server to Claus. This content may be adapted based on Claus’s requirements and preferences pre-defined in his user profile, as Claus is the one who receives information utilizing the location information of Mai.

While Claus is enjoying the content based on Mai’s current location, Mai has a possibility of controlling

the service. She may give Claus permission to receive information based on her location for one or two hours, and this permission can be disabled by Mai at any time.

Norman has stated in the book "Emotional Design" [11] that the service that almost always guarantees success is one that provides social interaction and emotional connection between people. With the new concept of mobile location services, mobile users can travel virtually to anywhere in the world utilizing the location of friends or family members. The service based on "come along with me" is not only an information service; it is a kind of service that provides social interaction and emotional connection between users. The users can get the feeling of being together and have a common topic to talk about. This kind of service has a high potential of success if it functions as presented in the scenario. "Come along with me" is a new concept of a service utilizing the real-time location information of other users on the all-IPv6 network. The service based on the "come along with me" scenario cannot be made available with current technologies and service environments, due to the lack of open standards and open service environments, and the fact that mechanisms for sharing network resources, information resources (e.g. location information, user profiles) and revenues between the different involved parties have not been developed.

8 CONCLUSION

This paper presents a conceptual service architecture for adaptive mobile location services to be used in the open service environment of the next generation wireless network. In the designed service architecture, the service portal and the profiling management agents play important roles in realizing the concept of one network, many services. The context-based service adaptation platform, the user profile, and the service and content profile play important roles in adapting the service to best fit the user requirements in a particular context of use. In the designed architecture, the user profile is handled by the profiling management agent and not by the service provider, as is the case today. This allows the user profile to be shared between the different stakeholders involved in providing mobile location services. The service provider can only utilize the user profile when the user wants to use the service, which will minimize privacy concerns regarding the use of any mobile service. It is clear that the profiling management agent will play an important role in providing mobile services in the future. The profiling management agent must be trustworthy, and the users should feel safe and comfortable allowing the agent to protect the use of their personal information as well as their location information. The developed service architecture supports the



provision of new kinds of mobile location services that have not been possible on the current network. Services such as a global-level tracking service and location-based information services that allow the user to access information utilizing the location information of other users on the all-IPv6 network will become possible. Adaptability is one of the new and important features of the proposed service architecture. This new feature will improve the quality of the mobile user experience and increase the likelihood of success for mobile location services on the next generation wireless network.

9 REFERENCES

[1] De La Vergne et al.: Forecast: GPS in Mobile Devices Worldwide 2004-2010, Market analysis report, ID Number: G00144746, Gartner, Inc. (2006).
[2] Shen et al.: Hype Cycle for Consumer Mobile Applications, Analysis report, Gartner, Inc. (2007).
[3] ITU-T: Mobility Management Capability Requirements for NGN, Next Generation Network Global Standards Initiative, International Telecommunication Union, ITU-T NGN FG Proceedings, Part II (2005).
[4] Reynolds T. and Jin-Kyu J.: Shaping the Future Mobile Information Society: The Case of the Republic of Korea, Case study report SMIS/07, Strategy and Policy Unit, International Telecommunication Union (ITU), February, p. 24 (2004).
[5] Chen et al.: Reconfigurable Architecture and Mobility Management for Next-Generation Wireless IP Networks, IEEE Transactions on Wireless Communications, Vol. 6, Issue 8, pp. 3102-3113 (2007).
[6] Kappler et al.: Dynamic Network Composition for Beyond 3G Networks: A 3GPP Viewpoint, IEEE Network, Vol. 21, Issue 1, pp. 47-52 (2007).
[7] Arhippainen L. and Tahti M.: Empirical Evaluation of User Experience in Two Adaptive Mobile Application Prototypes, Proceedings of the 2nd International Conference on Mobile and Ubiquitous Multimedia, Norrkoping, Sweden (2003).
[8] Hiltunen et al.: Mobile User Experience, IT Press, Finland, pp. 9-31 (2002).
[9] Usability Hub: Test It Like Pros Do, Idean Research together with Forum Nokia, .php, [cited 14 September 2007; 12:45].
[10] Schrammel J. and Tscheligi M.: Experiences Evoked by Today's Technology - Results from a Qualitative Empirical Study, Proceedings of the 20th International Symposium on Human Factors in Telecommunication (HFT 06) (2006).
[11] Norman D. A.: Emotional Design: Why We Love (or Hate) Everyday Things, New York: Basic Books, p. 19 (2004).
[12] Forlizzi J. and Ford S.: The Building Blocks of Experience: An Early Framework for Interaction Designers, DIS2000: Designing Interactive Systems: Processes, Practices, Methods, and Techniques, Conference Proceedings, pp. 419-423 (2000).
[13] ISO: Guidance on Usability, International Standard ISO 9241-11, First edition (1998).
[14] Hwang et al.: An Empirical Research on Factors Affecting Continued Intention to Use Mobile Internet Services in Korea, International Conference on the Management of Mobile Business (ICMB 2007), p. 56 (2007).
[15] Reichenbacher T.: Adaptive Methods for Mobile Cartography, Proceedings of the 21st International Cartographic Conference (ICC): Cartographic Renaissance, 10-16 August 2003, Durban, South Africa, pp. 311-132 (2003).
[16] ETSI: Human Factors (HF): User Profile Management, EG 202 325 V1.1.1, European Telecommunications Standards Institute (2005).
[17] Olesen et al.: The Conceptual Structure of User Profiles, My Personal Adaptive Global NET and Beyond, MAGNET deliverable D1.2.1 (2006).
[18] Rodriguez et al.: ePerSpace Intermediate Trials, Proceedings of the International Conference on Networking and Services (ICNS'06), p. 110 (2006).
[19] Abarca et al.: Service Architecture, Version 5.0, Telecommunications Information Networking Architecture Consortium (TINA-C) (1997).
[20] Dietrich et al.: State of the Art in Adaptive User Interfaces, in M. Schneider-Hufschmidt, T. Kühme and U. Malinowski (Eds.), Adaptive User Interfaces, Amsterdam: North-Holland (1993).
[21] Nielsen J.: Personalization is Over-Rated, [cited 11 August 2007; 11:39].
[22] Reichenbacher T.: Mobile Cartography - Adaptive Visualisation of Geographic Information on Mobile Devices, Ph.D. thesis, Department of Cartography, Technische Universität München, Germany (2004).
[23] Schou S. T. and Olesen H.: Detection of Mobile User Location on Next Generation Wireless Networks, Journal of Internet Technology, Special Issue on Heterogeneous IP Networks, Vol. 6, pp. 273-279 (2005).



Shafeeq Ahmad
Azad Institute of Engineering & Technology, India

Dr. Vipin Saxena
Babasaheb Bhimrao Ambedkar University, India

ABSTRACT
The Unified Modeling Language (UML) is one of the important modeling languages used to design solutions to software problems. The main aim of this paper is to develop a complete process of software architecture for an object-oriented software system. This software architecture will ensure the non-functional requirements as well as the functional requirements of the software system. The software architecture will also consider the requirements of the problem domain. To describe the functional and non-functional requirements, a case study of an ATM machine is considered. The major finding of the paper is how to trace the outline of an architecture from the problem domain.

Keywords: UML, software architecture, functional & non-functional requirements

1 INTRODUCTION

Models play an eminent role in all scientific and engineering disciplines. In physics, mathematics, biology, chemistry and economics, many models are provided to solve complex tasks. In the present work, a procedure for the different views of software architecture is explained with functional and non-functional requirements, and these requirements are according to the domain of the software problem.

2 ELEMENTS OF SOFTWARE ARCHITECTURE

The term "software architecture" is very difficult to define, and several researchers have defined it. In [3], [8], [9], [12] and [13], software architecture is defined as the highest level of a system design: a set of elements (components) along with their externally visible properties and the relationships among them. The term can also be defined in terms of pattern-oriented software architectures, as available in [4], [2] and [5]. The unified approach to software development was designed and developed by Jacobson et al. [6], [7] and [10]. The role of a software architect is similar in nature to the role an architect plays in building construction. Building architects look at the building from various viewpoints, which is useful for civil engineers, electricians, plumbers, carpenters and so on. This allows the architects to see a complete picture before construction begins. Similarly, a software architecture is described from the different viewpoints of the system being built. These viewpoints are captured in different models, as available in [9] and [11].

System architecture is defined as a set of design decisions. These decisions may be technical and commercial in nature. The present paper suggests and describes the different elements of software architecture shown in Fig. 1. This software architecture is influenced by some important factors [1], i.e. the stakeholders' requirements and the characteristics of the domain for which the software system is being developed. These factors help to derive the software architecture.



The derived architecture avoids irrelevant things that are not concerned with the problem domain, and this makes the development procedure very simple. The influence factors, domain characteristics and stakeholders' requirements, work as input to the object-oriented software architecture as shown in Fig. 1. The influence factors help in describing the architectural elements and in taking different design decisions.

Domain Characteristics | Stakeholder Requirements  →  Design Model Views | Architectural Patterns | Design Supplements

Figure 1: Elements of the software architecture

Design model views, architectural patterns and design supplements are described below in brief.

3 DESIGN MODEL VIEWS

A model view is an abstraction that excludes details that are not relevant to that particular view of the system. Each model view can be considered a software blueprint; each can use its own notation, can reflect its own choice of architectural patterns, and can define what is meant in its case by components and relationships. Model views are not fully independent: the components of one model can relate to components in another model, and it is necessary to find the relations between them. There is no standard set of model views to consider; however, the model views shown in Fig. 2 are taken on the basis of the works of [9], [3] and [7].

Use-case Model View | Class Model View | Process Model View | Module Model View | Physical Model View

Figure 2: Design model views

3.1 Use-Case Model View
A use-case is a description of a set of sequences of actions, including variants, that a system performs to yield an observable result of value to an actor. An actor is a coherent set of roles that users play when interacting with the system; an actor might be another system. The components in this model view are the actors and the use-cases. The relationships are associations between actors and use-cases, and dependency relationships between use-cases. Use-cases can be used to describe all types of stakeholders' requirements in a context: not only the functional requirements but also the non-functional requirements.

Example 1: Fig. 3 shows a use-case diagram with an actor for the functioning of the ATM machine, with the use-cases Login System, Withdraw Money and Logout System.

Figure 3: A use-case model view of an ATM system

In this figure, User is an actor who can log in to the ATM machine and withdraw the desired amount; after using the system, the user logs out. The figure shows only the use-case model view of the ATM system. When one wants to design the software architecture, one should focus on the use-cases that exert control over the software architecture. This means that the use-cases that capture the system's critical requirements, e.g. developer's requirements such as modifiability and client's requirements such as functionality, are the most important and are used most frequently. Use-cases are used both to drive the process of defining the software architecture and to evaluate whether the stakeholders' requirements have been fulfilled. [3] describes a scenario-based Software Architecture Analysis Method (SAAM) that is close to the use-case approach. SAAM shows how all requirements can be described in scenarios that are similar to use-cases, and how they can be used to define the architecture and to assess the fulfillment of the stakeholders' requirements. An excellent software



development process is one in which testing is applied at all stages of development.

3.2 Class Model View
The class model view deals with both the structural aspects and the dynamic aspects of the system. The components in this model view are objects or classes. The relationships are generalizations, aggregations and associations.

Figure 4: A class model view of an ATM system

Classes are a way to organize the system, and the objects describe the dynamics of the running system by their behavior and interactions. The class model view encapsulates data and their related operations for reading, treating and updating the data. The class concept can be used to describe the conceptual entities of the system and its surroundings as well as the implementation of the system. Fig. 4 is an example for the ATM system. In this diagram, which is self-explanatory, the major classes are User, Bank, Bank_account, ATM and Transaction, and these classes are abstracted from the problem domain of the ATM. The Transaction class is further categorized into the Deposit, Enquiry, Transfer and Withdrawal classes. The ATM class consists of the Menus, Dispenser and Keypad classes; User is associated with the ATM class, and a record of the User is available in the Bank. Bank_account keeps the record of the accounts of the User and is associated with the Bank.

3.3 Process Model View
The process model view deals with the dynamic issues of communication and synchronization in a running system. The relationships between the processes deal with process communication, synchronization and concurrency. The components of the process model view are processes and threads. A process is a sequence of instructions with its own control and may have a number of threads. A process can be started, shut down, recovered, reconfigured, and can communicate and synchronize as necessary with other processes. Fig. 5 shows a sequence diagram under the process model view; it shows the behavior of the ATM system. There are three major objects selected from the class model view: the User, the ATM and the Bank. The vertical line shows the lifeline of each object. Initially the user inserts a credit card and the corresponding PIN, which is verified by the ATM object through the Bank. After this, options appear on the screen; the user selects the withdrawal option and enters the withdrawal amount. After the transaction and verification from the Bank, the ATM machine dispenses the cash to the user. As per the user's action, the machine prints the receipt, then ejects the card, after which the main screen is displayed on the ATM machine.
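The class model view of Fig. 4 and the interaction sequence of Fig. 5 can be sketched together in code. This is an illustrative sketch only: the class names (User, Bank, Bank_account, ATM, Transaction, Withdrawal) follow the diagrams, but the method names and the in-memory account table are assumptions made for the example, not part of the paper's design.

```python
# Sketch of the Fig. 4 class model driving the Fig. 5 withdrawal sequence.
class BankAccount:
    """Keeps the record of a user's account (Bank_account in Fig. 4)."""
    def __init__(self, balance):
        self.balance = balance

class Transaction:
    """Base class; Deposit, Enquiry, Transfer and Withdrawal specialize it."""
    def execute(self, account):
        raise NotImplementedError

class Withdrawal(Transaction):
    def __init__(self, amount):
        self.amount = amount
    def execute(self, account):
        if account.balance < self.amount:
            return False                     # insufficient funds
        account.balance -= self.amount
        return True

class Bank:
    """Holds user records and verifies PINs (Bank in Fig. 4)."""
    def __init__(self):
        self.accounts = {}                   # card_no -> (pin, BankAccount)
    def verify_pin(self, card_no, pin):
        return card_no in self.accounts and self.accounts[card_no][0] == pin
    def account_for(self, card_no):
        return self.accounts[card_no][1]

class ATM:
    """Drives the Fig. 5 sequence: insert card, verify PIN, withdraw, eject."""
    def __init__(self, bank):
        self.bank = bank
    def withdraw(self, card_no, pin, amount):
        if not self.bank.verify_pin(card_no, pin):   # "verify PIN" message
            return "card ejected: invalid PIN"
        tx = Withdrawal(amount)                      # "process transaction"
        if tx.execute(self.bank.account_for(card_no)):
            return f"dispensed {amount}"             # "dispense cash"
        return "transaction declined"

bank = Bank()
bank.accounts["1234"] = ("0000", BankAccount(500))
atm = ATM(bank)
print(atm.withdraw("1234", "0000", 200))   # dispensed 200
```

The generalization (Transaction/Withdrawal), association (ATM–Bank) and aggregation (Bank–Bank_account) relationships of the class model view map directly onto inheritance, object references and containment here.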






insert card → request PIN → enter PIN → verify PIN (valid) → request amount → enter amount → process transaction → dispense cash → collect cash → confirm continuation → terminate → print receipt → eject card → collect card → display main screen

Figure 5: A sequence diagram of an ATM system

3.4 Module Model View
The module model view deals with the structural issues of the system. The components identified in this model view are modules. The relationships between the components are aggregation and dependency. The module model view is used to modularize and organize the system into comprehensible units, for example the organization of the system into applications that are equivalent to subsystems. The module model view often reflects the conceptual elements of the system. Fig. 6 shows the module model view of an ATM system.

Customer Database

Figure 6: A module model view of an ATM

3.5 Physical Model View
The physical model view also deals with the dynamics of the running system, mainly how the processes use processor capacity. The components in this model view are processors; the relationships are "communicates-with" associations. The physical model view shows the deployment of software onto hardware. Fig. 7 is the physical model view of an ATM system: the customer database is attached to the Cash Dispenser, Display, Card Reader and Receipt Printer, and these are associated with the computer systems. If a system runs on only one machine, this model view is covered by the process model view.

Customer Database | Cash Dispenser | Display | Card Reader | Receipt Printer | Computers

Figure 7: A physical model view of an ATM system

4 ARCHITECTURAL PATTERNS

The goal of patterns within the software community is to create a body of literature that helps software developers resolve recurring problems encountered throughout the software development process. An architectural pattern describes a problem that occurs over and over again, and describes the core of the solution to that problem in such a way that the solution can be used again. Patterns can therefore be applied when the problem at hand is a recurring one that has already been described and solved. Christopher Alexander et al. [2] introduced the term pattern language, meaning that one should go through patterns in a sequence, moving from the larger patterns to the smaller. Alexander thereby stressed the importance of recognizing the various levels of patterns in designing architecture. Both patterns and frameworks aim to achieve large-scale reuse by capturing successful software strategies within a particular context. The primary difference is that frameworks focus on reuse at the level of detailed design, algorithms and



implementation. In contrast, patterns focus more on reuse of recurring architectural design themes.

4.1 Types of Patterns
The term "pattern" is often used to refer to any pattern that addresses issues of software architecture, design, or programming implementation. The following are the three types of patterns:

4.1.1 Architecture Pattern
An architecture pattern expresses a fundamental structural organization or schema for software systems. It provides a set of predefined subsystems, specifies their responsibilities, and includes rules and guidelines for organizing the relationships between them.

4.1.2 Design Pattern
A design pattern provides a scheme for refining the subsystems or components of a software system, or the relationships between them. It describes a commonly recurring structure of communicating components that solves a general design problem within a particular context.

4.1.3 Idiom
An idiom (sometimes called a coding pattern) is a low-level pattern, specific to a programming language. An idiom describes how to implement particular aspects of components, or the relationships between them, using the features of a given language. Coding patterns such as command, adapter and bridge are now well established in the programming community. Patterns have also been extended into the architectural area: client-proxy server, pipe and filter, and so on. Adoption of these and other architectural patterns helps to solve recurring architectural design problems in the same way that coding patterns have done for programmers. Some patterns and their uses are given below in Table 1.

Table 1: Patterns and their uses

For example, the pipe-and-filter pattern is used where the output from one component forms the input to the next; a typical example is the use of UNIX pipes. A layered pattern is used to focus on the different abstraction levels in a system, such as the software in a personal computer. A stack of boxes or a number of concentric circles is often used to represent a layered system, as shown in Fig. 8.
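The pipe-and-filter idea can be made concrete in a few lines. The sketch below is illustrative: the `pipeline` helper and the concrete filters (strip, lowercase, tokenize) are assumptions chosen for the example, not taken from the paper.

```python
# A minimal pipe-and-filter sketch: each filter transforms its input, and the
# pipeline feeds one filter's output into the next, like a UNIX pipe.
def pipeline(*filters):
    def run(data):
        for f in filters:          # output of one stage is input to the next
            data = f(data)
        return data
    return run

process = pipeline(str.strip, str.lower, str.split)
print(process("  Pipe AND Filter  "))   # ['pipe', 'and', 'filter']
```

Each filter stays independent and reusable; only the pipeline fixes the order of composition, which is exactly the property the pattern aims for.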

Resource Allocation and Security
File System
I/O System
Memory System
Process Management
Hardware

Figure 8: The layered pattern of a PC

5 DESIGN SUPPLEMENTS

Besides design model views and architectural patterns, there are some other issues that are considered during the design of an object-oriented software architecture. These are known as design supplements and are given below:

5.1 Skill Levels
The software development process requires several types of skilled people who play important roles in the development of a software system. The architect should suggest these people and their skill levels. The project manager initiates the development process. System requirements are then collected from Subject Matter Experts (SMEs). SMEs are the people in the process who provide the information on what the system to be built needs to do. They serve in the most important role in the development process, despite not being part of the permanent development team; in most cases they are called the client or user. Other important roles are the system analysts, the architect, developers, testers, deployment managers and trainers.

5.2 Design Tools
The architect also confirms the different design tools, such as the platform, programming languages, database and testing tools, to be used during software development.

5.3 Design Decomposition
Design decomposition is design abstraction, such as design-in-the-large, design-in-the-small and coding. Design-in-the-large considers mainly structuring, whereas design-in-the-small addresses functionality such as algorithms and data structures.

Pattern                 Use
Client-Proxy Server     Acts as a concentrator for many low-speed links to access a server
Adapter                 Isolates code from technology-specific APIs
Reactor                 Decouples an event from its processing
Replicated Servers      Replicates servers to reduce the burden on a central server
Layered Architecture    A decomposition of services such that most interactions occur only between neighboring layers
Pipe and Filter         Transforms information in a series of incremental steps or processes
Subsystem Interface     Manages the dependencies between cohesive groups of functions
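Table 1's Adapter entry, which is also listed as a coding pattern earlier in section 4.1.3, can be illustrated briefly. The names below (`LegacyPrinter`, `print_text`, `PrinterAdapter`) are hypothetical, invented only to show how an adapter isolates client code from a technology-specific API.

```python
# Adapter sketch: client code depends on Printer, not on the legacy API.
class LegacyPrinter:
    """Technology-specific API the client should not depend on directly."""
    def print_text(self, text):
        return f"[legacy] {text}"

class Printer:
    """Interface the client code is written against."""
    def write(self, text):
        raise NotImplementedError

class PrinterAdapter(Printer):
    """Translates the client-facing call into the legacy API."""
    def __init__(self, legacy):
        self.legacy = legacy
    def write(self, text):
        return self.legacy.print_text(text)

printer = PrinterAdapter(LegacyPrinter())
print(printer.write("hello"))   # [legacy] hello
```

If the legacy API changes, only the adapter is rewritten; the client code against `Printer` is untouched, which is the isolation Table 1 describes.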



One can expand these design abstractions, because the terms design-in-the-large and design-in-the-small do not cover the increasing size and complexity of systems with the increasing demand for architectural focus. Design decomposition can be done as follows:

5.3.1 Global System
The global system comprises several enterprises. The key issues addressed involve the impact of software that crosses enterprise boundaries. The global system can provide a set of standards and protocols that benefit organizations by allowing a general means of integrability and communication across different enterprises.

5.3.2 Enterprise System
The enterprise system is the highest level within an organization. It comprises multiple systems, where each system comprises several applications. The goal of the enterprise system is to provide software access through a consistent set of policies and services usable throughout the organization.

5.3.3 System
A system comprises several integrated applications. The applications provide the functionality, while the system provides an infrastructure for the applications. The issues to address at this level are integrability, communication and coordination between applications. Access to data stores and management of inter-process resources occur at the system level, where the applications use common functionality and data.

5.3.4 Application
Applications typically involve numerous object classes and one or more frameworks. At the application level, the primary goal is to implement the end-user functionality defined by the requirements.

5.3.5 Frameworks
At the framework level, the goal is to allow the reuse of both software code and the design used in writing the code. An object-oriented framework is a semi-complete application; programmers form complete applications by inheriting from and instantiating parameterized framework components. A framework thereby provides an integrated set of domain-specific functionality.
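The idea of a framework as a semi-complete application that programmers finish by inheriting and instantiating can be sketched as follows. `ReportFramework` and its hook names are assumptions made for this illustration, not part of the paper.

```python
# Framework sketch (section 5.3.5): the run() skeleton is fixed by the
# framework, while subclasses supply the varying parts through hooks.
class ReportFramework:
    """Semi-complete application: the skeleton is reused, the hooks vary."""
    def run(self, data):
        rows = self.select(data)      # hook: subclass decides what to keep
        return self.render(rows)      # hook: subclass decides presentation

    def select(self, data):
        raise NotImplementedError
    def render(self, rows):
        raise NotImplementedError

class PositiveCsvReport(ReportFramework):
    """Programmer-supplied completion of the framework."""
    def select(self, data):
        return [x for x in data if x > 0]
    def render(self, rows):
        return ",".join(str(x) for x in rows)

report = PositiveCsvReport()
print(report.run([3, -1, 5]))   # 3,5
```

Both the code (`run`) and the design (the select-then-render flow) are reused, which is exactly the double reuse the framework level aims at.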
5.3.6 Objects
This is the lowest level, the code level, where, in object-oriented programming, the classes and objects are defined and managed.

6 CONCLUSIONS & FUTURE SCOPE OF WORK

From the above work, it is concluded that the architecture of an object-oriented software system must contain domain characteristics, stakeholders' requirements, architectural model views, architectural patterns and design supplements. In such an architecture, the outline of the architecture must be traced from the problem domain. Because its context is based on the domain characteristics and the stakeholders' requirements, the architecture is called a domain-driven architecture. The above approach can be implemented by taking a case study of the domain-driven architecture of an object-oriented software system.

7 REFERENCES

[1] S. Ahmad and V. Saxena: Influence Factors for Software Architecture of Object-Oriented System, Journal of System Management (ICFAI Press, India), Vol. 5(2), pp. 35-43 (2007).
[2] C. Alexander, S. Ishikawa and M. Silverstein: A Pattern Language, Oxford University Press (1977).
[3] L. Bass, P. Clements and R. Kazman: Software Architecture in Practice, Second Edition, Pearson Education (2003).
[4] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad and M. Stal: Pattern-Oriented Software Architecture - A System of Patterns, John Wiley (1996).
[5] E. Gamma, R. Helm, R. Johnson and J. Vlissides: Design Patterns, Addison-Wesley (1994).
[6] I. Jacobson, G. Booch and J. Rumbaugh: The Unified Software Development Process, Pearson Education (1999).
[7] I. Jacobson, M. Christerson, P. Jonsson and G. Overgaard: Object-Oriented Software Engineering: A Use Case Driven Approach, Addison-Wesley (1992).
[8] P. Kruchten, H. Obbink and J. Stafford: The Past, Present, and Future for Software Architecture, IEEE Software, 23(2), pp. 22-30 (2006).
[9] P. Kruchten: The 4+1 View Model of Software Architecture, IEEE Software, 12(6), pp. 42-50 (1995).
[10] P. Kruchten: The Rational Unified Process: An Introduction, 3/e, Reading, MA: Addison-Wesley (2004).
[11] M. Broy: Architecture Driven Modeling in Software Development, Proceedings of the Ninth IEEE International Conference on Engineering Complex Computer Systems: Navigating Complexity in the e-Engineering Age (2004).
[12] M. Shaw and D. Garlan: Software Architecture, Prentice Hall (1996).
[13] M. Shaw and P. Clements: The Golden Age of Software Architecture, IEEE Software, 23(2), pp. 31-39 (2006).


