J2ME Enterprise Development
Imtiyaz Haque and Brian O'Connor


M&T Books
An imprint of Hungry Minds, Inc.
New York, NY • Cleveland, OH • Indianapolis, IN

J2ME™ Enterprise Development Published by M&T Books 909 Third Avenue New York, NY 10022 Copyright © 2002 Hungry Minds, Inc. All rights reserved. No part of this book, including interior design, cover design, and icons, may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording, or otherwise) without the prior written permission of the publisher. Library of Congress Control Number: 2001093598 ISBN: 0-7645-4900-6 Printed in the United States of America 10 9 8 7 6 5 4 3 2 1 1Q/SQ/QT/QS/IN Distributed in the United States by Hungry Minds, Inc. Distributed by CDG Books Canada Inc. for Canada; by Transworld Publishers Limited in the United Kingdom; by IDG Norge Books for Norway; by IDG Sweden Books for Sweden; by IDG Books Australia Publishing Corporation Pty. Ltd. for Australia and New Zealand; by TransQuest Publishers Pte Ltd. for Singapore, Malaysia, Thailand, Indonesia, and Hong Kong; by Gotop Information Inc. for Taiwan; by ICG Muse, Inc. for Japan; by Intersoft for South Africa; by Eyrolles for France; by International Thomson Publishing for Germany, Austria, and Switzerland; by Distribuidora Cuspide for Argentina; by LR International for Brazil; by Galileo Libros for Chile; by Ediciones ZETA S.C.R. Ltda. for Peru; by WS Computer Publishing Corporation, Inc., for the Philippines; by Contemporanea de Ediciones for Venezuela; by Express Computer Distributors for the Caribbean and West Indies; by Micronesia Media Distributor, Inc. for Micronesia; by Chips Computadoras S.A. de C.V. for Mexico; by Editorial Norma de Panama S.A. for Panama; by American Bookshops for Finland. For general information on Hungry Minds' products and services please contact our Customer Care department within the U.S. at 800-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002.
For sales inquiries and reseller information, including discounts, premium and bulk quantity sales, and foreign-language translations, please contact our Customer Care department at 800-434-3422, fax 317-572-4002 or write to Hungry Minds, Inc., Attn: Customer Care Department, 10475 Crosspoint Boulevard, Indianapolis, IN 46256. For information on licensing foreign or domestic rights, please contact our Sub-Rights Customer Care department at 212-884-5000. For information on using Hungry Minds' products and services in the classroom or for ordering examination copies, please contact our Educational Sales department at 800-434-2086 or fax 317-572-4005.

For press review copies, author interviews, or other publicity information, please contact our Public Relations department at 317-572-3168 or fax 317-572-4168. For authorization to photocopy items for corporate, personal, or educational use, please contact Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, or fax 978-750-4470.



Part II: Developing Enterprise Applications for J2ME

Chapter 6

Planning Development with J2ME
In This Chapter
♦ Identifying enterprise challenges
♦ Reviewing N-tier applications
♦ Introducing the MSales enterprise application
♦ Architecting the MSales application
♦ Programming in limited environments

The emphasis in previous chapters has been on programming issues for small devices and introducing J2ME. This chapter introduces the concept of enterprise applications and how you can develop them using J2ME. With J2ME, you can develop a wide variety of applications: games, personal applications, business applications, and so on. An enterprise application is a special case of a business application tailored for a large organization. The next natural step for small devices is the development of the same robust applications that today's enterprise users have come to expect. The same applications that solve today's enterprise problems can and will be successfully re-adapted to serve small wireless devices. The clients of the future will include a proliferation of small connected devices. In this chapter, we introduce the MSales application as an example of enterprise programming for a J2ME application. We created MSales to illustrate a dynamic and distributed sales application targeted for enterprise-wide use. Finally, this chapter presents the challenges you face when you program in a limited environment and offers tuning techniques that can help you.

Identifying Enterprise Challenges
Enterprise application is a broad term that can be broken down into several specifications that together define the whole. The following sections describe the challenges you face when you set out to solve an enterprise application problem.

Scalability

An enterprise problem requires a scalable solution. Although a prototype may work just fine with 20 simultaneous users, an enterprise solution must be capable of handling hundreds, thousands, or even tens of thousands of users. When a solution is scalable, it can be easily expanded to add support for more users or processes. The ideal system scales linearly: when the number of server components is doubled, the effective throughput of the application is doubled. Web servers, for instance, scale very well because they can be replicated across multiple machines. The end user does not necessarily need to know what machine is serving a particular request. It's important that applications developed for small mobile devices be extremely scalable on the server side. Like today's cell phones, these emerging applications for mobile devices must be capable of supporting thousands of simultaneous users. Users of these devices expect instant-on applications and should not be subjected to the lag or dropped connections that typify a dial-up connection. Applications targeting these devices need to be scalable enough on the server side to maintain an acceptable level of performance across a multitude of concurrent clients.

Reliability

Scalability and reliability are often mentioned together. A scalable solution is most often one that can be distributed across multiple units so that client requests are spread among them. The same approach can also provide very robust and reliable solutions. In a Web farm, where multiple Web servers are linked together to provide better throughput, more clients can connect because there are more machines to connect to. At the same time, this configuration can be very robust: if one machine fails, the majority of clients can still connect. Reliability is key to enterprise applications. Because they are often characterized as mission-critical, downtime and program unavailability are unacceptable and must be avoided at all costs.

Performance

The performance of an application is typically measured in terms of the response time for any action taken by a user. When you are dealing with a small device and trying to solve complex problems, you might ask how you can have a high-performance application running on such a platform. The answer lies in the fact that the performance of an enterprise application is not necessarily based on the extent of optimization of each of its various subcomponents. Code optimization is important on the client side when small devices are being used, but the overall performance of the application does not hinge on the optimization of each subcomponent. The server components of the application can benefit from scalable processing power, ample disk space, and plentiful memory. As servers are added to a Web farm, for instance, the number of clients per machine goes down and overall application performance increases. This is the hallmark of enterprise computing: the application works better as a whole than its individual parts because of the way the overall architecture is organized. This whole-system approach is necessary when developing enterprise applications. The ultimate performance of an application is often determined by bottlenecks at levels higher than the code, such as network performance issues between the various components of an enterprise application.



Network connectivity
Network connectivity is a key aspect of an enterprise application. Unlike a game or individual productivity application, enterprise applications have to store and access data from enterprise datastores, communicate with other enterprise applications, and so on. For example, a sales application deals with customer account information and price lists, an order processing application collects orders that capture inventory items for certain customers at given prices, a shipping application processes the shipping of orders, and an inventory application back-orders missing items in orders. All these enterprise applications solve a portion of the overall enterprise problem, and they do so by sharing data across local or wide area networks. The current state of wireless networks, in terms of their coverage and reliability, is not as robust as the wired networks we typically use in our day-to-day activities on corporate intranets or the Internet. Any solution to an enterprise problem must take the coverage and reliability of the network into account in the overall application architecture.

Security

Most enterprise applications are driven by the identity of the person using the system. For example, many Web sites include user authentication. User rights are determined by each user's identity, and many enterprise solutions have specific features enabled or disabled based on the user type. An enterprise solution often interfaces with directory services to provide this sort of information at runtime. Because of the mission-critical nature of enterprise applications, security is a central issue, not only in clearly identifying users and their rights in the context of the program but also in securing the transmission of information itself. One of the most common ways of adding security to an application on the Web is through Secure Sockets Layer (SSL).
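As a quick server-side illustration, the standard J2SE API exposes the platform's SSL support through the javax.net.ssl package. The sketch below simply asks the default SSL socket factory which cipher suites it enables, a cheap way to confirm that SSL support is available in the server VM. The class name is ours; note that the MIDP 1.0 profile itself does not mandate an SSL API on the device, so a mobile client would rely on its vendor's networking support or a later profile.

```java
import javax.net.ssl.SSLSocketFactory;

// Server-side sketch: confirm the VM's SSL support by listing the
// cipher suites the default SSL socket factory enables.
public class SslCheck {
    public static String[] enabledSuites() {
        SSLSocketFactory factory =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
        return factory.getDefaultCipherSuites();
    }

    public static void main(String[] args) {
        for (int i = 0; i < enabledSuites().length; i++) {
            System.out.println(enabledSuites()[i]);
        }
    }
}
```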

Cross-platform support
An enterprise application usually requires a cross-platform solution. Although cross-platform issues affect both the server and client platforms, they are more acute on client platforms because of specific issues related to human/computer interaction. The reasons for having multiple platforms in an enterprise are many: a large enterprise has many disparate groups making different platform decisions, an enterprise based on business mergers brings different platforms from the constituent entities, and so on. Java, HTML, CSS, and XML technologies have been developed over the last few years so that client-side presentation is standardized over different platforms. As you move to mobile devices as client platforms, you face the same challenges. A variety of mobile devices — PDAs, cell phones, pagers — run a variety of operating systems. The enterprise solution you choose should run on the various operating systems with little or no change.


Development simplicity
An enterprise application generally solves complex problems, evolves over a period of time, and integrates with other enterprise applications. For starters, you need a simple and standard programming language that a programmer can easily pick up — a language that supports the notion of solving a complex problem by dividing it into small subproblems as well as integrating solutions for the subproblems to solve the overall complex problem. The language must be fairly complete to implement the presentation layer, network connectivity, and storage requirements.

A Review of N-Tier Applications
In its most basic form, an enterprise application is divided into two tiers. The first tier is a presentation tier that creates the user interface and handles the display and user manipulation of data. The second tier handles all the back-end tasks, such as storing information in and retrieving it from a database. This approach was common in traditional client/server applications, where most of the business logic also resides on the client side. This type of application model is sufficient and useful when the business logic is neither very complex nor evolving with time and the data is fairly localized. Typical applications dealing with tasks such as accounting and inventory in a small organization often use a two-tier model (see Figure 6-1).

Figure 6-1: Typical two-tier architecture.

More complex enterprise applications use three or more tiers, called an N-tier architecture (see Figure 6-2), which we focus on in this section. In an N-tier architecture, the application is divided into the following tiers:
♦ Presentation: For a Web application, this could be HTML rendered in a Web browser.
♦ Business logic: This tier typically uses servlets and Enterprise JavaBeans (EJB).
♦ Data storage: A persistent store for all the information the application uses.



Figure 6-2: Typical N-tier architecture.

Introducing the MSales Application
MSales is the enterprise programming sample application we created to help you understand the concepts of J2ME enterprise development using a real-world example. Our goal with this application is to assist mobile field sales agents in managing their schedules and to give them access to the product information they need to close sales. We put the MSales application to work in subsequent chapters to illustrate useful design and implementation techniques for enterprise applications. The following sections help you become more familiar with the MSales application.

Application requirements
Our main goal in establishing the requirements for the MSales application was to enable a salesperson to be even more productive by having easy access to product information, price lists, customer purchase histories, and more. This information helps her schedule her visits, make sales, and take orders. Some of her customers are located in areas with good wireless network coverage, whereas others are out in the suburbs and countryside where wireless network coverage isn't always optimal. Here are the specific requirements we established for the MSales application. The application must do the following:
♦ Run on at least a Palm OS device and preferably on other mobile devices.
♦ Work in areas where a network connection is not available or is very spotty. It must be able to use very limited-bandwidth connections and effectively transfer all the information it needs in a limited time.
♦ Reliably support a large number of mobile sales agents.
♦ Perform responsively and be able to quickly find answers for the mobile sales agents.
♦ Authenticate each sales agent by requiring a username and password to protect access to sensitive customer information.
♦ Supply all necessary product information and price lists for agents trying to close a sale.
♦ Supply all customer information, including purchase history, to mobile sales agents.

Technologies used
The development platform of choice for the MSales application is Sun's MIDP profile, which works on top of the CLDC configuration. The MIDP profile and the CLDC configuration come bundled in the J2ME Wireless Toolkit, which you can download for free from Sun's Web site.

Architecting the MSales Application
To address the requirements we established for the MSales application, the application uses the architecture shown in Figure 6-3.

Figure 6-3: Architecture of the MSales application.



MSales client
The MSales client is the main client application running on the mobile device. Here are the important design considerations for the mobile client:
♦ Develop the MSales client using J2ME in order to support the cross-platform development requirement.
♦ Supply the necessary data on the device itself so that the application works in areas where network connections are not available or are very spotty.
♦ Make the data available on the device itself in order to help with performance.
NOTE: Mobile devices do not have an unlimited capacity to hold data. Therefore, you need to get a slice of data from the central database that serves the needs of both the agents and their customers. With the MSales application, for example, your strategy could be to get the customer information and order history for only the customers on the schedule for that day.
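The idea of slicing data down to a day's schedule can be sketched in plain Java. The Customer class and sliceForDay method below are hypothetical illustrations of the strategy, not part of the MSales code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: select only the customers scheduled for a
// given day, so the device holds a slice of the central database.
public class DataSlice {
    static class Customer {
        final String name;
        final String visitDay; // e.g. "2002-03-14"
        Customer(String name, String visitDay) {
            this.name = name;
            this.visitDay = visitDay;
        }
    }

    // Returns the subset of the customer list the device actually needs.
    static List<Customer> sliceForDay(List<Customer> all, String day) {
        List<Customer> slice = new ArrayList<>();
        for (Customer c : all) {
            if (c.visitDay.equals(day)) {
                slice.add(c);
            }
        }
        return slice;
    }

    public static void main(String[] args) {
        List<Customer> all = new ArrayList<>();
        all.add(new Customer("Acme", "2002-03-14"));
        all.add(new Customer("Globex", "2002-03-15"));
        System.out.println(sliceForDay(all, "2002-03-14").size()); // prints 1
    }
}
```

In a real deployment the slice would be computed on the server and synchronized to the device's record store, but the filtering idea is the same.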

MSales gateway
The MSales gateway is the server that services the mobile clients and provides connectivity to the MSales servers. To improve reliability and scalability, the gateway is designed as a stateless server so that you can use multiple gateway servers if necessary, and the client can communicate with any one of the gateway servers.
NOTE: The gateway can be implemented as a servlet inside a servlet container and does not need to be a stand-alone server.
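The value of statelessness is that any gateway instance can answer any request, because everything the handler needs arrives with the request itself. The following plain-Java sketch (hypothetical names, no servlet API) illustrates the idea; a servlet's doGet would follow the same pattern:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a stateless gateway handler: the handler keeps no per-client
// fields, so any instance can serve any request and instances can be
// load-balanced freely.
public class StatelessGateway {
    public String handle(Map<String, String> request) {
        // All state travels with the request.
        String agent = request.get("agent");
        String action = request.get("action");
        return "gateway-response:" + agent + ":" + action;
    }

    public static void main(String[] args) {
        Map<String, String> req = new HashMap<>();
        req.put("agent", "agent42");
        req.put("action", "syncOrders");
        // Two independent gateway instances give identical answers.
        System.out.println(new StatelessGateway().handle(req));
        System.out.println(new StatelessGateway().handle(req));
    }
}
```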

Figure 6-4 shows the subcomponents of the MSales client and gateway and the interaction between the client and the gateway.

Figure 6-4: Subcomponents of the MSales client and MSales gateway.


MSales admin
The MSales admin is the administrative client and is likely a Web-based client on a wired network. Most of the heavy processing of data and administration tasks are handled by this client.

MSales server
The MSales server has most of the server business logic running on it. The server can also communicate with other applications in the enterprise. To improve reliability and scalability, the server is designed as a stateless server so that we can use multiple servers, if needed, and the gateway and admin clients can communicate with any one of the servers.
NOTE: The server can be implemented as a stateless EJB Session Bean inside an EJB container and does not need to be a stand-alone server.

MSales central database
The MSales central database stores all data about customers, products, and orders.

Programming in Limited Environments
Given the limitations imposed by hardware restrictions, the designers of J2ME have focused their design goals and architecture to effectively use whatever resources are available on small devices. A programmer for J2ME also must deal with the limitations imposed by both the device and J2ME itself. Fortunately, there are many simple techniques for maximizing the effectiveness of a program, in terms of both improved performance and lessened runtime requirements. This section focuses on a high-level overview of this tuning process, which can be decomposed into several sequential steps. The process begins at the inception of a project and continues throughout the development cycle. Tuning works best if time is reserved toward the end of a program’s development, when a near-finished application can be tested and bottlenecks targeted. This section introduces not only the motivations and timing of application optimizations, but it also provides an overview of each step. The ideas introduced here are helpful not only on the J2ME platform but are also general enough to apply to server environments and desktop programming. The same process of tuning is used throughout Part II, when both a J2ME client and supporting server technologies are examined.

Why and when to tune
The decision to tune comes from the desire to make an application perform faster, to minimize its memory footprint, or both. The process of tuning describes an iterative approach to improving a program's performance, memory utilization, or even just the perceived responsiveness for the end user. There are many ways to judge performance; for J2ME, the most important criteria are application memory footprint and processing time. These two areas are most limited in small devices and therefore necessitate special consideration. Improvements to application memory consumption and performance are affected most by smart design decisions and the correct selection of algorithms and data structures. During both the design and implementation of a program, some fairly simple changes can end up saving considerable time and space.

An application developer is not only interested in the actual performance of a device but must be cognizant of its apparent performance as well. When users interact with a small device, their understanding of a program's performance is not necessarily tied to the particular efficiency of the underlying code. The perceived speed of applications can be affected with active user feedback; status messages, time estimates, progress bars, or similar approaches can make an application appear to run faster. This fact should not be overlooked during the optimization process.

For server environments, issues such as throughput and raw performance are the key motivating factors for tuning. Unlike a program for a small device, applications for servers need to handle hundreds, thousands, or even more simultaneous users. Luckily, in the server environment, memory and processing power are not constrained in the same ways they are with small devices. Servers have the luxury of fast network connections, powerful CPUs, plenty of memory, and a surplus of storage. Server applications can more freely use caching and other resource-intensive techniques to speed requests. At the same time, many of the techniques used in small devices can be applied to servers in order to improve applications. For example, object reuse, as described later in this chapter, reduces memory needs in small devices, but it also speeds code on servers by diminishing the time needed to create new objects. Many of the approaches documented in the next section apply equally well to both small devices and servers alike.

Tuning an application can lead to considerable performance gains in memory, throughput, performance, and responsiveness. Although the process of tuning is simple, the most difficult aspect is accurately judging when the tuning should be done and in what place.
Deciding how may not be as difficult as deciding how much. It is important to avoid overtuning an application, because this can lead to code that is difficult to understand and, subsequently, difficult to maintain. The challenge is to clearly identify the current bottlenecks in an application, whether resource- or performance-based, and then identify an acceptable performance level. After these two criteria have been established, optimizations can be applied and performance retested. This tuning process can then be applied iteratively, improving the current bottleneck until the desired level of performance is reached. By setting clear performance goals, an application can be successfully tuned without adversely affecting the long-term maintainability of the code base.
TIP: When a programmer writes J2ME applications on small devices, performance is likely to be an issue. Always leave time for tuning an application — in restricted environments, it can make a real difference.
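Object reuse, one of the techniques mentioned in this section, can be sketched with a minimal pool. The BufferPool class below is a hypothetical illustration: instead of allocating a fresh buffer per request, callers borrow one and return it, cutting both allocation time and garbage-collection pressure.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal object-reuse sketch: a pool of StringBuffer instances that
// are borrowed, reset, and handed out again instead of reallocated.
public class BufferPool {
    private final List<StringBuffer> free = new ArrayList<>();

    public StringBuffer acquire() {
        if (free.isEmpty()) {
            return new StringBuffer();
        }
        StringBuffer b = free.remove(free.size() - 1);
        b.setLength(0); // reset state before reuse
        return b;
    }

    public void release(StringBuffer b) {
        free.add(b);
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool();
        StringBuffer first = pool.acquire();
        pool.release(first);
        StringBuffer second = pool.acquire();
        System.out.println(first == second); // prints true: the instance was reused
    }
}
```

On a CLDC device the same idea would typically use Vector rather than ArrayList, but the pattern is identical.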

Tuning an application
Tuning can be broken down into a series of discrete steps that fit in with the overall application development process. These steps include an iterative process that re-evaluates tuning efficiency on each revision. The process begins with the choice of tools and continues on through the actual implementation. Some steps are easy and quick, whereas others are more difficult and require iteration. The majority of the time you spend in performance tuning goes to testing enhancements and measuring performance. It is important to keep a good record of the tuning throughout the process in order to understand the efficiencies gained and to determine when your performance goals have been met. Figure 6-5 illustrates the process of tuning.

Figure 6-5: Performance tuning begins at the earliest stages of a program's development and continues through its lifecycle.

Creating specifications
The first step in tuning an application begins with creating the specifications that drive development. This includes everything from choosing the Integrated Development Environment (IDE) to the device itself. J2ME, although quickly establishing itself as a leader in the small-device marketplace, may not be available or appropriate for every device. While new device families continue to be added to the platform, the availability and capabilities of a runtime environment for the target device must be examined. Even though J2ME is unrivaled in the number and types of devices it supports (and will support in the near future), it is always important to match the appropriate tools with the target platform early on.



Prototyping applications
Creating specifications naturally leads to a prototyping phase, during which the feasibility of both the application's goals and underlying architecture can be tested. Prototyping is important for development because it lets the developer see exactly how well a given technology lives up to its promises. It also lets a lead developer establish a model for future work and samples appropriate for the project as a whole. More important for tuning needs, these prototypes can lead to some early performance numbers and expectations for the full application. Prototyping should be done not only in emulation environments but also on actual target hardware. This ensures that any discrepancies between the emulation environment, the published specification, and the actual environment are caught early. Some areas that are particularly important to test include the user interface, networking, and thread support, as these are the areas most likely to be inconsistent.

Implementing the application design
After you make some basic choices regarding the application's specifications, its target platform, and the tools used, it is time to implement the solution. It may be tempting to begin optimizing code immediately during implementation, but a better approach is to focus on developing a clean application first. The most important concern should be developing a well-designed program that uses good object-oriented programming (OOP) practices, clean logic, and plentiful documentation. If the focus is, instead, on producing the fastest code possible, the end goal of the application may become blurred. Programmers who develop this way tend to deliver code that is difficult to read, debug, and maintain.

Identifying bottlenecks
Assuming the focus of an application's development is on clean rather than fast code, there are probably several areas that can benefit from tuning. It is much easier to estimate where the biggest performance gains can be had once the application is complete than to try to measure along the way. The bulk of code optimization therefore begins after a somewhat stable version of the program is complete. The goal is to find the locations in the program that are responsible for bottlenecks that adversely affect performance. There are many ways of identifying bottlenecks in a program, ranging from making some basic system calls and recording the results to using sophisticated profiling utilities. The most basic form of profiling is examining the running time of a particular code segment. The following method, for example, times a call to one test method. Timing such calls and averaging a set of runs can identify a mean runtime for each. Notice that the garbage collector is explicitly called before timing the runTest1 method call in the following code. This ensures that garbage collection does not occur during the timing test, which would skew the results.
public void runTimingTest() {
    System.gc();
    long time = System.currentTimeMillis();
    runTest1();
    time = System.currentTimeMillis() - time;
    System.out.println("runTest1 time: " + time);
    ...
}
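Because a single measurement is noisy, it helps to repeat the timed call and average the results, as suggested above. The harness below is an illustrative sketch; workUnderTest stands in for the method being measured:

```java
// Sketch of averaging several timed runs: one currentTimeMillis sample
// is too coarse, so we repeat the call and divide by the run count.
public class TimingHarness {
    // The work being measured; stands in for runTest1 in the text.
    static void workUnderTest() {
        long sum = 0;
        for (int i = 0; i < 100000; i++) {
            sum += i;
        }
    }

    static double averageMillis(int runs) {
        System.gc(); // keep collection out of the measured interval
        long start = System.currentTimeMillis();
        for (int i = 0; i < runs; i++) {
            workUnderTest();
        }
        return (System.currentTimeMillis() - start) / (double) runs;
    }

    public static void main(String[] args) {
        System.out.println("mean runtime: " + averageMillis(50) + " ms");
    }
}
```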

Generally, although printing the system time provides a very broad view of the timing of particular components in an application, it does not always provide an accurate view. Not only can the call to currentTimeMillis take up to half a millisecond itself, but it is really just a measure of the total elapsed time of the system. In a multithreaded environment, this total time may not indicate the time spent in the particular method being examined. The operating system may be running other processes that affect the overall CPU time allocated to the Java VM, or a process within the VM could affect the overall timing. It is important to run tests multiple times and then, after a particular area is identified as a bottleneck, investigate the situation further. Analyzing the particular components of the runTest1 method, for example, may reveal that most of its time is spent in an internal method call, which is really the bottleneck for the application. Despite these limitations, using this system method provides a handy way to estimate running time, assuming that you have accounted for its limitations. Another simple way to understand an application’s performance is to look at memory utilization using the verbose garbage collector output. The parameter for J2SE's VM looks like this:
java -verbose:gc MyProgram

The resulting output of this program might look like the following:
[Full GC 215K->113K(1984K), 0.0253965 secs]
[GC 1959K->1959K(2496K), 0.0056435 secs]

This VM option shows when garbage collection is taking place, how long it takes, and the total memory before and after collection. Each of the preceding lines represents a different type of garbage collection. The first line is the full, most expensive garbage collection, which tries to free memory from all available sources. The second line illustrates a faster garbage collection that occurs to remove temporary, short-lived objects. This example illustrates a generational approach to garbage collection: The younger objects that can be reclaimed rather quickly after being created are garbage collected often. At the same time, older and longer-lived objects are not immediately collected; they are examined for garbage collection only when memory runs low. In addition to garbage collection profiling, the amount of free memory can be a useful piece of information. In the same way that the runtime of an application can be examined using the currentTimeMillis method, the free memory of the VM can be displayed using the freeMemory method. In the following example, the free memory before and after a large allocation is calculated:
private void runBusinessMethod1() {
    System.gc();
    Runtime rt = Runtime.getRuntime();
    System.out.println("Current Free Memory: " + rt.freeMemory()
        + " Current Total Memory: " + rt.totalMemory());
    Vector vec = new Vector();
    for (int i = 0; i < 100000; i++) {
        vec.add(new String("test"));
    }
    System.out.println("Current Free Memory: " + rt.freeMemory()
        + " Current Total Memory: " + rt.totalMemory());
    vec = null;
}

The resulting output looks similar to the following:
Run 1
[Full GC 215K->113K(1984K), 0.0251431 secs]
Current Free Memory: 1915112 Current Total Memory: 2031616
[GC 625K->583K(1984K), 0.0169030 secs]
...
Current Free Memory: 195536 Current Total Memory: 6094848

As with the currentTimeMillis method, the free-memory estimates can be skewed by multiple threads or incomplete garbage collection. Still, this method provides a basic tool for tracking the memory utilization of particular application components. Placed carefully, these methods together can greatly aid in the identification of bottlenecks, not only in the client J2ME application but also on the server side. More sophisticated profiling tools are available, built on top of the Java Virtual Machine Profiler Interface (JVMPI). Profilers are powerful because they enable the real-time examination of a program as it executes and can quickly identify problem areas. The JVMPI provides a standardized, C-based interface to the running VM, which can be invoked by using a command-line interface or a third-party tool.
NOTE: Profilers are contingent upon the support of language features such as reflection and are therefore not available in all Java implementations.

Tuning bottlenecks
Between the simple method calls illustrated in the preceding section and more complex profiler utilities, a lot of information can be generated about trouble spots in an application. After this necessary first step has been taken, the next step is to understand how the performance bottlenecks affect the overall application. An application needs to be broken into its functional components, and the developer needs to establish the performance expectations for each section. These expectations include those driven by the customer and also backend needs driven by the developers. After an application’s components have been profiled and the expectations for each established, the next step is to prioritize the performance enhancements. By focusing on a few select areas, the greatest gains in performance can be obtained with the least amount of work.

Chapter 6: Planning Development with J2ME 137
After the performance areas have been prioritized, the next step is to enhance performance iteratively until it meets the requirements established previously. It is important to analyze the application both before and after optimizations have been made in order to understand the efficacy of your performance tuning. When the target performance has been reached, the goal should be to stop and move on to the next component. Overtuning may produce code that is faster but difficult to understand and maintain. Spending additional time to further enhance an already satisfactory component can take time away from other sections of code that need tuning as well. This is especially salient because most applications are developed on a strict timeline, and there is never time to tune everything perfectly. Tuning should therefore be confined to the areas that need it the most and should be stopped when predefined performance goals have been met. The tuning process should balance the long-term maintainability of the code while keeping in mind that the technical capabilities of devices will continue to improve over time. For example, an algorithm designed to minimize memory usage on a device today may not face the same memory limitations on future devices. It makes sense to prioritize tuning based not only on how much good it can do for the current application but also on what impact it will have on future versions.

Strategies for success
Optimizing applications for small devices faces distinct challenges based on the limitations imposed by these devices. These challenges include memory restrictions, slow processors, unreliable networks, and other limitations imposed by the form factors of these devices. Overcoming these restrictions is a key concern for J2ME developers and affects the decisions made during the implementation and evaluation of applications. The types of solutions commonly used in small devices can be generalized into the following four categories, all of which are general approaches for improving the performance of applications on limited devices:

♦ Use of threads
♦ Active awareness of processing limitations
♦ Improved object management
♦ Better network operations

The previous section, “Tuning an application,” made very broad statements about the process of enhancing an application. This section takes a finer-grained approach to tuning and examines more specific techniques for improving performance based on the four categories just listed. This section also provides specific strategies for increasing application performance and reducing the memory footprint of applications on small devices. These tips and techniques are applicable to many versions of Java but are driven by the needs of small devices and are prioritized accordingly. You can find more specific techniques tied to particular J2ME versions in Chapter 5.

The use of threads on very limited devices may initially seem like a poor optimization technique. The overuse of threads can quickly eat up system resources and bring an application to a grinding halt. But the use of a few well-designed threads can actually speed
up an application. Threads work especially well for high-latency activities, such as a slow network connection, or for GUI applications, where processes need to run in the background yet the interface needs to stay responsive to the user. Even if threads do not increase overall application speed, they do allow a programmer to partition activities and minimize dead time for the end user. The following pseudo-code illustrates using a thread to handle a buffered reading process that involves a slow connection. The particular protocol and data read is unimportant to this example; what is salient is the use of the Runnable interface to enable an ordinary Java object to participate in threading. The only method that needs to be implemented is the run method, but this is never called directly by the client. Instead, a thread is launched for the DataGetter object that controls how and when the run method is called.
class DataGetter implements Runnable {
    DataGetter() {}

    public void run() {
        try {
            synchronized (this) {
                while (true) {
                    getDataFromSlowBuffer();
                    wait(200);
                }
            }
        } catch (Exception e) {
            System.out.println("Exception " + e);
        }
    }

    public static void main(String[] args) {
        Runnable run1 = new DataGetter();
        new Thread(run1).start();
    }
}

TIP: Another way to increase the perceived performance of an application for the end user is to include feedback. A progress bar, status window, or some other GUI element allows users to understand the process taking place and set their expectations accordingly. Without seeing progress information, users might assume the application has crashed when, in fact, the device is simply busy. This tip is especially applicable for small device programs, because these devices are slower than the desktops and full-featured devices consumers generally use.
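As a rough sketch of this pattern in plain Java (MIDP display classes are omitted, and the class and method names here are invented for illustration), a worker thread can expose its progress through a field that the display code polls, so the interface stays responsive while the slow work proceeds in the background:

```java
public class ProgressTask implements Runnable {
    private volatile int percentDone = 0;

    // a screen component (a Gauge, for instance) would poll this value
    public int getPercentDone() {
        return percentDone;
    }

    public void run() {
        // simulate ten units of slow work, updating progress after each
        for (int i = 1; i <= 10; i++) {
            doOneUnitOfWork();
            percentDone = i * 10;
        }
    }

    private void doOneUnitOfWork() {
        // placeholder for a slow operation (network read, parsing, etc.)
    }

    public static void main(String[] args) throws InterruptedException {
        ProgressTask task = new ProgressTask();
        Thread worker = new Thread(task);
        worker.start();
        worker.join();
        System.out.println("Done: " + task.getPercentDone() + "%");
    }
}
```

The volatile field keeps the progress value visible across threads without the cost of synchronizing every read.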

Although threads are an excellent way to manage slow network connections and other latency-prone tasks, their overuse can have disastrous effects for small devices. Because these devices are usually very limited in their processing power and memory, more than two or three threads can easily overwhelm a device. It is best to use caution when working in a multithreaded environment for another reason as well: A program that has been written to work with multiple threads needs to be thread-safe. If multiple threads are using the same collections or objects, the methods used to access these objects need to be synchronized. This is a resource- and performance-expensive process; synchronization takes a significant toll on an application’s speed. Java makes the syntax for working with threads straightforward, but concurrency issues can always be difficult to track down. It is therefore very important that any programmer planning to use threads understand them thoroughly. When writing a multithreaded application, you should limit the overall number of threads and confine their access to a predetermined set of supporting objects. Any shared objects should be synchronized, and those that are not shared should avoid synchronization for performance reasons. Also, synchronizing on a per-method basis rather than synchronizing particular blocks of code tends to be faster, as shown in the simple example that follows. Only one thread at a time can access this method, ensuring that changes do not happen simultaneously.
public synchronized void changeValue(int value) {
    //changes a private value that multiple
    //threads are trying to alter at the same time
    internalValue = value;
}

Circumventing processor limitations
In addition to threading, there are other simple techniques to improve performance. Finding ways to reduce processing requirements for small devices is a very important optimization technique. It can be done in many ways, but this section focuses on reducing the use of slow methods and Java language features. Actively avoiding exception overuse and slow Java APIs increases an application’s performance and results in more efficient utilization of limited memory. Furthermore, small devices can immediately gain from effective offloading of computationally intensive tasks to a server. This section explores these techniques in detail and provides practical advice you can use on any type of small device.

The exception facility in Java works very well to identify and handle runtime errors. Exceptions enable a given program to detect cases in which a serious problem has occurred, giving the program an opportunity to deal with it or exit. For example, a program attempts to open a file but instead a FileNotFoundException is caught, indicating that the target file does not exist or cannot be read. This exception represents a serious and fundamental error, and a program may not be able to continue if it cannot find its input file. Exceptions are incredibly useful and necessary for robust programs that can either fail gracefully or attempt to recover from a severe problem. Exceptions are also indispensable for tracking the nature of an error and are a valuable tool for debugging problems during development.


Part II: Developing Enterprise Applications for J2ME

Coding to catch exceptions does not add any overhead to a program; try and catch blocks are innocuous and are a required safety feature when working with methods that can throw exceptions. When an exception is thrown, however, the process of creating and propagating exceptions is very resource-intensive and should be avoided in circumstances in which resources are limited. When using the standard Java API, the client has no control over which methods can throw exceptions. But in custom application code, a programmer can control the number of application-specific exceptions thrown and caught. Often these application-specific exceptions are abused as a messaging system between various components. In the following example, the application uses exceptions to communicate between two methods.
//method 1, throws an exception to indicate no result
public String findMatchingObject(String name)
    throws ObjectNotFoundException {
    ...
}

//method 2 uses method 1
public String searchResults(String name) {
    String result;
    try {
        result = findMatchingObject(name);
    } catch (ObjectNotFoundException e) {
        return("no results");
    }
    ...
}

The solution works: The client can easily understand the nature of the problem based on the type and description of the error message. But in this case, the exception is only being used to indicate that no results are found. It does not signify the sort of catastrophic failure that exceptions are intended to represent. A less resource-intensive approach might be to return null. The following example shows what this might look like:
//method 1, returns null to indicate no result
public String findMatchingObject(String name) {
    ...
}

//method 2 uses method 1
public String searchResults(String name) {
    String result = findMatchingObject(name);
    if (result == null) return("no results");
    ...
}

The second, non-exception approach is very convenient for small devices for two reasons. First, the runtime resource requirements for throwing and catching exceptions are much more
apparent on these limited devices. Using exceptions as a messaging scheme can lead to considerable application slowdowns. Second, the versions of Java designed for small devices may limit the number of Error and Exception types available to an application for a reason. Exceptions in these devices are treated as catastrophic events, and an uncaught exception can result in the device automatically resetting. Exceptions, for both performance and error-control reasons, should be reserved for critical errors only.
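The cost difference can be made visible with a micro-benchmark along these lines. The class and method names are invented, and absolute timings vary widely between VMs, so treat this only as a sketch of the measurement, not as representative numbers:

```java
public class ExceptionCost {
    // signals "no result" by throwing, as in the first example
    public static String findWithException(String name) throws Exception {
        throw new Exception("not found");
    }

    // signals "no result" by returning null, as in the second example
    public static String findWithNull(String name) {
        return null;
    }

    public static void main(String[] args) {
        int iterations = 10000;

        long start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            try {
                findWithException("x");
            } catch (Exception e) {
                // handle "no result"
            }
        }
        long exceptionTime = System.currentTimeMillis() - start;

        start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            if (findWithNull("x") == null) {
                // handle "no result"
            }
        }
        long nullTime = System.currentTimeMillis() - start;

        System.out.println("exceptions: " + exceptionTime
            + " ms, null returns: " + nullTime + " ms");
    }
}
```

Filling in a stack trace for every thrown exception is what makes the first loop expensive; the second loop does nothing but a comparison.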

Avoid slow Java classes and methods
Java's rich and varied API provides many classes with a variety of useful methods. Most of these classes are implemented in the most generic way possible, which does not necessarily translate to the fastest algorithm. Furthermore, the platform-agnostic nature of Java precludes the use of platform-specific algorithms that can speed some of these methods. The result is that many of the classes programmers rely on could be faster than they are, but there are some simple steps to avoid or circumvent these limitations. The String object is one of the most useful and often-used objects in Java, but it presents performance challenges. Because of their ubiquity, string methods can often cumulatively slow a program. Strings are inefficient because they are immutable: once created, their contents cannot be changed. Methods or actions that change a string actually result in an entirely new object being created. For example, in the following code, two String objects are concatenated. But because strings are immutable, they cannot be added together directly. Instead, a new intermediate structure must be made, which is then converted to a final String object. In all, two new objects are created in a line of code that appears quite innocuous.
String string1 = "foo";
String string2 = "bar";
String string3 = string1 + string2;

This same code can be optimized using a StringBuffer object in place of the strings. StringBuffers differ from strings because they are mutable; their content can be changed without creating a new object. In the following example, the StringBuffer is initially created with the "foo" string and later the "bar" string is appended. This can be done without a new object, because the StringBuffer is initialized with a preset storage size that can later be enlarged as needed. This initial capacity can be customized, which makes it easy to create a string-representing object well suited to a particular application.
StringBuffer sb = new StringBuffer("foo");
sb.append("bar");
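The difference matters most inside loops. The following sketch (the class and method names are invented for illustration) builds the same string both ways: the concatenation version creates a new String, plus a hidden intermediate buffer, on every pass, whereas the StringBuffer version grows a single object and converts it to a String once:

```java
public class StringBuild {
    // repeated concatenation: each += allocates a fresh String
    public static String slowBuild(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "ab";
        }
        return s;
    }

    // one StringBuffer, presized for the result, converted at the end
    public static String fastBuild(int n) {
        StringBuffer sb = new StringBuffer(n * 2);
        for (int i = 0; i < n; i++) {
            sb.append("ab");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(slowBuild(3)); // ababab
        System.out.println(fastBuild(3)); // ababab
    }
}
```

Both methods produce identical output; only the number of temporary objects differs.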

Another way in which strings are often used in an inefficient way involves controlling program flow. A method, for instance, might use String constants to define particular behavior and check the program's arguments at runtime using the equals method. The following code example illustrates this idea:
String controlStr = "yes";
...
if(controlStr.equals("no")) {
    ...
}



The comparison between strings and objects takes considerably longer than comparing an integer constant with another integer, as illustrated in the following code, yet the net effect is the same. The program can be controlled in a very simple way without relying on the slow string methods.
final int NO = 0;
final int YES = 1;
if(control == NO) {
    ...
}

Client offloads to server
The techniques presented in the previous sections are extremely effective in reducing an application’s runtime resource needs and increasing overall performance. All these techniques focus on making the code on the small device or server as efficient as possible. Because many of these applications will be targeted at wireless small devices, the server can be relied on to make certain key computations beyond the capabilities of the small device. For wireless devices, in addition to client-side optimizations, a server can be used to augment the capabilities of the device. When a network connection is present, the server can be used to perform complex calculations and return the result to the client application. The MSales application presented throughout this part of the book includes this type of optimization in its related purchases feature. Using this tool, a sales associate can query the server for suggestions of other products that might interest a specific customer. This feature works by examining the buying history of the current customer and then cross-referencing that with the purchasing patterns of others. Implementing this feature on a small device is difficult, because it necessitates a considerable amount of processing power, storage space, and network transfers. Provided a network connection exists, this task can easily be offloaded to a server that has the data readily available and can process it very quickly. This offloading approach saves time and limits network connections, because it necessitates sending only the results rather than the background information needed for this calculation. This approach also keeps the business logic on the server side, which makes the application much easier to update without affecting the client's code. The recommendation functionality can be completely reworked, and as long as the results are formatted the same way, the client application code can remain unchanged.
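The shape of such an offload might look like the following sketch. The class name and reply format are invented here, and the network round trip is faked with a local method; a real client would use an HTTP or socket connection to the server:

```java
public class RelatedPurchases {
    // Faked transport: a real client would open a connection to the
    // server, send only the customer ID, and read this small reply
    // off the stream.
    public static String queryServer(String customerId) {
        return "warranty plan|carrying case|spare battery";
    }

    public static void main(String[] args) {
        // only the ID goes out; only the finished suggestions come back,
        // so the purchase-history cross-referencing stays on the server
        String suggestions = queryServer("cust-42");
        System.out.println("Suggested items: " + suggestions);
    }
}
```

Because the client sees only the formatted result, the server-side recommendation logic can be rewritten freely without touching device code.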

Object management
Avoiding slow objects and language features in Java can go a long way toward improving an application’s performance. But there are some additional points to consider regarding object initialization and usage. For example, creating new objects on the heap (the area in memory where objects created with the new keyword are stored) in Java is expensive when compared to local primitives on the stack (memory used for variables created inside methods), and the process of automatic garbage collection is generally slow. This section therefore focuses on techniques to reduce object instantiation and the number of objects in memory. When a program effectively uses the objects it has and reduces the total number of objects instantiated and destroyed, it not only saves memory, it also saves time, translating to a faster, more efficient program.

Reusing objects
One strategy for reducing the number of objects created and destroyed is to carefully monitor method calls and to minimize those that create new objects excessively. This can happen in many situations, including method calls that generate an object to return some result. A better approach is to reuse existing objects and update their state when appropriate. For example, in the following code, a new object is created to store a method’s output and then is returned to the calling client:
//called once
BankInfo bi = null;
bi = getAccountInfo();
...
// the bank account is manipulated
...
//update the account
//the original object is thrown away
bi = getAccountInfo();
...
private BankInfo getAccountInfo() {
    BankInfo info = new BankInfo(150, "Smith, Paul");
    return(info);
}

A more efficient alternative is to pass an object into the getAccountInfo method, which fills in the appropriate information. This alternative may not make a huge difference if this object is used only once, but when the process is repeated over and over again, there will be a performance difference. In the following example, the getAccountInfo method takes a BankInfo object as an argument. The method then updates the content of the object to reflect its current state. Because the BankInfo object is a method argument, both the current method and the calling client can manipulate this object. Compared to the preceding code example, no new objects are created, which saves both memory and the processing time involved with creating new objects and garbage collecting the old ones.
//An alternative to this method
private void getAccountInfo(BankInfo info) {
    info.balance = 150;
    info.username = "Smith, Paul";
}

Another simple way to reduce the number of new objects created is to carefully monitor the use of temporary objects. These are objects that are instantiated for only the current method or block of code. For example, in each of the loop iterations that follow, the BigComplexObject gets created anew seven times:



//the BigComplexObject gets created seven times
for(int i=0; i<7; i++) {
    BigComplexObject bco = new BigComplexObject();
    DoSomething(bco);
}

//a more efficient use of the BigComplexObject
BigComplexObject bco = new BigComplexObject();
for(int i=0; i<7; i++) {
    DoSomething(bco);
}

In this more efficient code example, the BigComplexObject is created only once. Although this example is trivial, it does show how simple loops can hurt performance by instantiating objects many times over. In this loop, instantiation happens seven times, but in another method, the loop could repeat thousands of times (or more). Performance can benefit by avoiding situations in which objects are created, only to be immediately discarded. Performance can also benefit from the reuse of collections, just as BigComplexObject is reused in this example. Reusing collections eliminates the overhead of continuously re-creating collection objects and can lead to significant performance gains in a program that instantiates a large number of collections. In the following listing, a Vector is presized to accommodate a thousand integer entries. Instead of throwing this structure away and creating another one later in the program, the controlling class clears the structure and subsequently reuses it for further method calls. The structure's capacity will automatically adjust to accommodate its new contents or, if the new capacity is known at runtime, the Vector size can be preadjusted using the ensureCapacity method. Both reusing and effectively sizing collection objects can ensure peak performance from these commonly used data structures.
private void testObjectReuse() {
    Vector dataList = new Vector(1000);
    for(int i=0; i<1000; i++) {
        recordInformationInList(dataList);
    }
    processInformationInList(dataList);
    dataList.removeAllElements();
    //dataList can now be reused
    ...
}

NOTE: In this section, we demonstrate how reusing objects can greatly save time because of the costs associated with continually re-creating objects. This alternative should be tempered by the fact that objects waiting to be reused do take up space, which can constrain an application’s free memory, especially one running on a small device. It's important to strike a careful balance between object reuse and memory footprint. The key is to find the middle ground based on a specific application’s needs.

Object initialization
Two more approaches can be used to further optimize object management. The first practice is called lazy initialization and is useful in a scenario in which large and complex objects are both resource-intensive to create and are rarely used. The programmer can choose to initialize the object separately, on first use, rather than during the application’s startup, eliminating the costly initialization of the object during a time when the application is busy initializing. The initialization on first use can slow down the initial access, but there are times when this is preferable to a slow application start. Lazy initialization is quite applicable to small clients because hardware performance is usually quite limited and the end user expects applications to start very quickly. In this scenario, features can be initialized as they are accessed, allowing the application to initially load very quickly and save memory by instantiating only the objects needed for the features currently in use. It's also good practice to inform the end user of this approach if the application's responsiveness will be affected during the initialization. The MSales application examined in Part II of this book, for example, uses lazy initialization to speed up its loading time and reduce its memory footprint while running. Another complementary technique uses the opposite approach to object initialization. If the application can initialize objects early on (during a startup sequence, for instance), the application reduces the amount of waiting for resources throughout. A photo album application for a small device, for example, might initialize the next photo before a user specifically requests it in order to speed its load time. This technique can be applied to small devices but usually it is more applicable on the server side, where large data structures are routinely created and cached at various times in order to boost performance.
The initial load time slowdown and increased memory footprint are both acceptable, especially considering they afford increased responsiveness for the clients. Data is immediately available because it has been initialized early during the startup. This technique is also illustrated in the sample MSales application server side discussed in this part of the book.
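The lazy approach can be sketched in a few lines. The photo album names below are invented for illustration, and the expensive structure is faked with a Vector; the point is only that the costly construction is deferred until the first access and then cached:

```java
import java.util.Vector;

public class PhotoAlbum {
    private Object photoIndex; // expensive to build, rarely needed at startup

    // lazy initialization: the index is built on first use, not at startup
    public Object getPhotoIndex() {
        if (photoIndex == null) {
            photoIndex = buildPhotoIndex(); // slow, but paid only once
        }
        return photoIndex;
    }

    private Object buildPhotoIndex() {
        return new Vector(); // stands in for an expensive setup step
    }
}
```

Every call after the first returns the cached object, so only the first access pays the construction cost.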

The way a program uses its local and instance (or member) variables can also affect its performance. In Java, the fastest variables are primitive local method variables defined within a particular code block or method. These are considered short-lived; they only need to persist while the thread of execution traverses the current code block. Local variables are created in a temporary memory space, called the stack, and are rapidly accessible to the current thread. After these variables go out of scope, they can quickly be freed. Instance variables, by contrast, are those allocated when an object is created on the heap with the Java new keyword. These variables persist for the lifetime of the object and are freed only during garbage collection. This is a relatively slow process compared to removing values from
the stack. On the stack, the VM readily knows when variables should be removed, whereas objects being garbage collected must undergo a more resource-intensive process of checking references. Primitive types, in general, are faster than their wrapper objects, so using primitives as variables can improve performance. The fastest instance variables are those made immutable with the final keyword. Instance variables are slower than stack variables but faster than accessing values from an array, because of the requisite bounds checking inherent to Java arrays. Compared to collection objects (such as Hashtable or Vector), arrays are the fastest storage mechanisms for multiple values.
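One common rewrite that follows from these observations is to copy an instance field into a local variable before a tight loop and write the field back once at the end. The following sketch uses invented names, and the actual gain depends heavily on the VM, but it shows the idea:

```java
public class FieldAccess {
    private int[] data;
    int total;

    public FieldAccess(int[] values) {
        data = values;
    }

    // reads the array field and writes the total field on every iteration
    public int sumDirect() {
        total = 0;
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
        return total;
    }

    // copies the field to a stack local once, accumulates on the stack,
    // and writes the field back a single time
    public int sumViaLocal() {
        int[] local = data;
        int sum = 0;
        for (int i = 0; i < local.length; i++) {
            sum += local[i];
        }
        total = sum;
        return sum;
    }
}
```

Both methods compute the same result; the second simply keeps the hot path on the stack.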

Constrained size
So far we have focused on improving performance, but that is not always the primary goal. When memory is very constrained, making small applications is more important than making very fast ones. The speed and memory consumption of applications cannot always be optimized at the same time. When designing for speed, customized objects and aggressive early initialization are two approaches that work well yet take up more memory than the standard methods. Alternatively, keeping code closely tied to the standard API and using lazy initialization can slow down the application, but the reduction in object layer depth and in the number of large objects greatly reduces memory size. Striking a balance between resource consumption and speed, especially for small devices, is an important part of the tuning process.

Network strategies
The final area of optimization targeting wireless small device applications looks at the communication strategies between the server and client. Given today's limited wireless bandwidth, enhancing the communication layer between the client and server makes a lot of sense. These optimizations can take several concrete forms, but for the purposes of this section, three general strategies are explored. They are deliberately general so that they apply to the widest possible range of applications. Specific examples are explored in Chapters 9 and 10 of this book during the construction of the MSales application.

When dealing with slow, intermittent network connections, caching is one of the best ways to keep an application usable at times of high latency and unavailable service, providing access to content while a device is off-line. More generally, caching can speed the application even when a network is available. Important and commonly used information can be stored and accessed locally rather than over the network. Used in key areas, caching can greatly speed up an application. Caching works exceptionally well when the data is read-only and when limitations, such as memory restrictions, are a primary concern. Caching can be difficult to implement when the information is writable and changes must be propagated back to a central datastore.
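A minimal read-only cache can be sketched with a Hashtable. The class name and the faked server call below are invented for illustration; in a real client, the miss path would perform the wireless request:

```java
import java.util.Hashtable;

public class PriceCache {
    private Hashtable cache = new Hashtable();

    // checks the local store first; only cache misses go over the network
    public String priceFor(String productId) {
        String cached = (String) cache.get(productId);
        if (cached != null) {
            return cached; // served locally, no network round trip
        }
        String fetched = fetchFromServer(productId);
        cache.put(productId, fetched);
        return fetched;
    }

    // stands in for a slow wireless request
    private String fetchFromServer(String productId) {
        return "9.99";
    }
}
```

Because the cached data is treated as read-only, there is no write-back logic; stale-data policies only become an issue when the client can modify what it caches.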

Small messages
Client and server communication can be very resource-intensive, especially when a large amount of information is sent across a network. When the network bandwidth is very limited, as is often the case in wireless networks, the amount of information exchanged needs to be optimized. This might be as simple as replacing large, chatty messages with smaller, simpler messages; or it could include more drastic changes, such as using text-based messages in place of a large, distributed object framework such as CORBA (Common Object Request Broker Architecture) or RMI (Remote Method Invocation). XML (eXtensible Markup Language) is becoming an increasingly popular alternative for exchanging information. Depending on the capabilities of the device, the additional overhead of XML messages may be acceptable given XML’s rich capabilities to structure complicated information. The following example illustrates how flexible XML can be, enabling an application to create very complex messages using a customized tag set. This code allows information to be structured conveniently for the target application:
<customer_info>
    <name>
        <first>John</first>
        <last>Smith</last>
    </name>
    ...
</customer_info>

The performance overhead of both the tag formatting and parsing on the client side can make simpler messages more appealing for limited devices. Considering bandwidth limitations and the capabilities of the target device to parse the XML message in this example, a simpler approach might be in order. Because the device application and server have been designed to work with each other, a smaller message can be exchanged, such as the one shown here:

John|Smith|...
In this example, XML tags are replaced by the | delimiter and a fixed message order. Smaller messages are easier to parse and transmit but are less flexible than full-fledged XML.
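Parsing such a delimited message needs nothing beyond indexOf and substring, which is convenient because the smallest Java configurations may not include helpers such as StringTokenizer. The following sketch (class name invented) splits a fixed-order message into its fields:

```java
public class PipeMessage {
    // splits "John|Smith|..." into its fields using only indexOf and
    // substring, relying on the agreed fixed field order
    public static String[] parse(String message, int fieldCount) {
        String[] fields = new String[fieldCount];
        int start = 0;
        for (int i = 0; i < fieldCount; i++) {
            int bar = message.indexOf('|', start);
            if (bar == -1) {
                fields[i] = message.substring(start); // last field
                break;
            }
            fields[i] = message.substring(start, bar);
            start = bar + 1;
        }
        return fields;
    }

    public static void main(String[] args) {
        String[] fields = parse("John|Smith", 2);
        System.out.println(fields[0] + " " + fields[1]); // John Smith
    }
}
```

The trade-off is exactly the one described above: the parser is tiny and fast, but both sides must agree on the field order in advance.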

Targeted info for users
Another simple technique that can help focus the communication between clients and servers is effectively targeting information for the end user. For example, the MSales application (see Part II of this book) presents sales information to users based on user ID. Instead of transferring sales information for all the clients in the country or a state, the mobile client can request information for only the areas in which a particular sales agent works. This greatly reduces network bandwidth between the client and server without reducing the relevant information for the end user. The same general strategy can be applied to small mobile clients in general. Transferring not only small messages but also messages targeting the end user can save on bandwidth and local storage on the device, leading to a more responsive application.

The architecture proposed in this chapter meets the challenges faced by enterprise applications and uses an intelligent combination of a two-tier architecture and an N-tier architecture. The architecture of the client, when network connectivity is unavailable, is a two-tier architecture. This gives the MSales client the simplicity and flexibility to work in a disconnected mode and deliver reasonably responsive performance. As we go into the details of design and implementation in subsequent chapters, you will see how the space on the device is managed effectively by getting only the slice of data that is useful for the functioning of the device for the day. The overall architecture, and especially the architecture on the server side, is an N-tier architecture. The server functionality is separated from the gateway because the server is serving the mobile clients as well as admin clients.

The architecture of J2ME, with its profiles and configurations, addresses many of the limitations imposed by small devices, yet programming for these devices leads to a distinct set of performance challenges. Small devices are limited in various ways compared to server or desktop environments. It is important, therefore, to keep the limitations of these devices in mind when developing mobile applications. To support this development, this chapter examined the general steps involved in tuning an application. These steps focused on four key areas of enhancement in mobile applications: threading, processing conservation, communication optimization, and better object handling. The general pattern of optimization presented was an iterative process, with decisions at each step about both how to optimize and how much to optimize.
