
Techniques For Performance Optimizations


This is to certify that 'Techniques for
Performance Optimizations' embodies the
original work done by Ratul Upadhyay, Sunny
Dabas, Prince Sharma & Ram Govind. This
project is submitted in partial fulfillment of
the requirements for the 3rd Semester
Presentation of the ongoing GNIIT Course.

The satisfaction that accompanies the successful
completion of any task would be incomplete without the
mention of the people whose ceaseless cooperation made it
possible, and whose constant guidance and encouragement
crown all efforts with success.

We are grateful to our presentation guide Mrs. Anita
Sharma for the guidance, inspiration and constructive
suggestions that helped us in the preparation of this presentation.

We also thank our colleagues who helped us
tirelessly toward the successful completion of this project.
Hardware Requirements

   Processor: Pentium II, Pentium III, Pentium IV or higher

   RAM: 64 MB or higher

   Disk Space: 12 MB

Software Requirements

   Operating System: Windows 98, Windows XP, Linux, or any later version

   Software: Microsoft PowerPoint (version 97 or higher)
Although the word "optimization" shares the same root as "optimal," it
is rare for the process of optimization to produce a truly optimal
system. The optimized system will typically only be optimal in one
application or for one audience. One might reduce the amount of time
that a program takes to perform some task at the price of making it
consume more memory. In an application where memory space is at a
premium, one might deliberately choose a slower algorithm in order to
use less memory. Often there is no “one size fits all” design which
works well in all cases, so engineers make trade-offs to optimize the
attributes of greatest interest. Additionally, the effort required to make
a piece of software completely optimal—incapable of any further
improvement— is almost always more than is reasonable for the
benefits that would be accrued; so the process of optimization may be
halted before a completely optimal solution has been reached.
Fortunately, it is often the case that the greatest improvements come
early in the process.


   Design level
At the highest level, the design may be optimized to make best use of
the available resources. The implementation of this design will benefit
from a good choice of efficient algorithms and the implementation of
these algorithms will benefit from writing good quality code. The
architectural design of a system overwhelmingly affects its
performance. The choice of algorithm affects efficiency more than any
other item of the design and, since the choice of algorithm usually is
the first thing that must be decided, arguments against early or
"premature optimization" may be hard to justify.
In some cases, however, optimization relies on using more elaborate
algorithms, making use of 'special cases' and special 'tricks' and
performing complex trade-offs. A 'fully optimized' program might be
more difficult to comprehend and hence may contain more faults than
unoptimized versions.
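For illustration, here is a small Java sketch (the class and method names are our own) of how the choice of algorithm alone changes the work done: a linear scan and a binary search both locate a key in a sorted array, but the binary search halves the search space on every step, so it does logarithmic rather than linear work.

```java
// Two algorithms for the same task: finding a key in a sorted array.
public class AlgorithmChoice {
    // O(n): examines elements one by one
    static int linearSearch(int[] sorted, int key) {
        for (int i = 0; i < sorted.length; i++) {
            if (sorted[i] == key) return i;
        }
        return -1;
    }

    // O(log n): halves the search space on every step
    static int binarySearch(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;   // unsigned shift avoids overflow
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i * 2;
        // Both find the same answer; only the amount of work differs.
        System.out.println(linearSearch(data, 1_999_998));
        System.out.println(binarySearch(data, 1_999_998));
    }
}
```

No amount of later source-level tuning of the linear scan will recover the difference, which is why the algorithm decision matters so early.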

   Source code level
Avoiding poor quality coding can also improve performance, by
avoiding obvious 'slowdowns'. After that, however, some optimizations
are possible that actually decrease maintainability. Some, but not all,
optimizations can nowadays be performed by optimizing compilers.
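A classic example of such an obvious slowdown in Java is repeated String concatenation inside a loop, which builds a new String on every pass (quadratic work overall); the sketch below (class and method names are ours) shows the usual fix with StringBuilder.

```java
// Source-level fix: replace repeated String concatenation with StringBuilder.
public class ConcatFix {
    static String slowJoin(String[] parts) {
        String s = "";
        for (String p : parts) s += p;        // creates a new String every pass
        return s;
    }

    static String fastJoin(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p);  // appends in place
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = {"a", "b", "c"};
        // Same result either way; only the cost differs.
        System.out.println(slowJoin(parts).equals(fastJoin(parts)));
    }
}
```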

   Compile level
Use of an optimizing compiler tends to ensure that the executable
program is optimized at least as much as the compiler can predict.

   Assembly level
At the lowest level, writing code in an assembly language designed
for a particular hardware platform can produce the most efficient and
compact code if the programmer takes advantage of the full repertoire
of machine instructions. Many operating systems used on embedded
systems have been traditionally written in assembler code for this
reason; when efficiency and size are less important large parts may be
written in a high-level language.
With more modern optimizing compilers and the greater complexity of
recent CPUs, it is more difficult to write code that is optimized better
than the compiler itself generates, and few projects need resort to this
'ultimate' optimization step.
However, a large amount of code written today is still compiled with
the intent to run on the greatest percentage of machines possible. As a
consequence, programmers and compilers don't always take advantage
of the more efficient instructions provided by newer CPUs or quirks of
older models. Additionally, assembly code tuned for a particular
processor without using such instructions might still be suboptimal on a
different processor, expecting a different tuning of the code.

   Run time
Just-in-time compilers and assembly programmers may be able to
perform run time optimization exceeding the capability of static
compilers by dynamically adjusting parameters according to the actual
input or other factors.
Code optimization can also be broadly categorized into platform-
dependent and platform-independent techniques. While the latter
ones are effective on most or all platforms, platform-dependent
techniques use specific properties of one platform, or rely on
parameters depending on the single platform or even on the single
processor. Writing or producing different versions of the same code for
different processors might therefore be needed. For instance, in the
case of compile-level optimization, platform-independent techniques
are generic techniques (such as loop unrolling, reduction in function
calls, memory efficient routines, reduction in conditions, etc.), that
impact most CPU architectures in a similar way. Generally, these serve
to reduce the total Instruction path length required to complete the
program and/or reduce total memory usage during the process. On the
other hand, platform-dependent techniques involve instruction
scheduling, instruction-level parallelism, data-level parallelism, cache
optimization techniques (i.e. parameters that differ among various
platforms) and the optimal instruction scheduling might be different
even on different processors of the same architecture.
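Loop unrolling, one of the generic techniques mentioned above, can be sketched in Java as follows (a hand-unrolled sum; names are ours, and modern JIT compilers often do this automatically): the unrolled version performs four additions per loop test instead of one, reducing loop-control overhead.

```java
// Loop unrolling: process four elements per iteration.
public class Unroll {
    static long sum(long[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++) s += a[i];
        return s;
    }

    static long sumUnrolled(long[] a) {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        int i = 0;
        int limit = a.length - (a.length % 4);
        for (; i < limit; i += 4) {           // four additions per loop test
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < a.length; i++) s0 += a[i]; // leftover elements
        return s0 + s1 + s2 + s3;
    }

    public static void main(String[] args) {
        long[] a = {1, 2, 3, 4, 5, 6, 7};
        System.out.println(sum(a) + " " + sumUnrolled(a)); // both 28
    }
}
```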


     Web Server Optimization
     Compiler Optimization
     Java Apps Optimization
     Database Optimization


A web server can be referred to as either the hardware (the computer)
or the software (the computer application) that helps to deliver content
that can be accessed through the Internet.

A web server is what makes it possible to be able to access content like
web pages or other data from anywhere as long as it is connected to
the internet. The hardware houses the content, while the software
makes the content accessible through the internet.

Web server optimization is, simply put, tweaking the settings of
your web server for the best performance.



Browser caching can help to reduce server load by reducing the number
of requests per page. For example, by setting the correct file headers
on files that don't change (static files like images, CSS, JavaScript etc.)
browsers will then cache these files on the user’s computer.
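As a sketch of the server-side decision (the helper name and the one-year max-age are our own illustrative choices, not a fixed rule), a server might pick a Cache-Control header per file type like this:

```java
// Choosing a Cache-Control header: long-lived for static assets,
// revalidated for dynamic pages.
public class CacheHeaders {
    static String cacheControlFor(String path) {
        if (path.endsWith(".css") || path.endsWith(".js")
                || path.endsWith(".png") || path.endsWith(".jpg")) {
            return "public, max-age=31536000";  // cache static files for ~1 year
        }
        return "no-cache";                      // revalidate dynamic pages
    }

    public static void main(String[] args) {
        System.out.println(cacheControlFor("/img/logo.png"));
        System.out.println(cacheControlFor("/account/summary"));
    }
}
```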

A CDN (content delivery network) is an interconnected system of
computers on the Internet that provides Web content rapidly to numerous
users by duplicating the content on multiple servers and directing the
content to users based on proximity. It helps distribute the traffic of a
website.


HTTP compression is a publicly defined way to compress textual
content transferred from web servers to browsers. HTTP compression
uses public-domain compression algorithms, like gzip and compress, to
compress XHTML, JavaScript, CSS, and other text files at the server.
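A minimal Java sketch of the server-side step, using the JDK's built-in GZIPOutputStream (class and method names are ours): repetitive textual content such as markup compresses very well.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Gzip-compressing a textual response body before sending it.
public class GzipBody {
    static byte[] gzip(String text) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        StringBuilder html = new StringBuilder();
        for (int i = 0; i < 500; i++) html.append("<p>hello</p>");
        byte[] compressed = gzip(html.toString());
        // Repetitive markup shrinks dramatically.
        System.out.println(html.length() + " -> " + compressed.length + " bytes");
    }
}
```

In practice the server only does this when the browser advertises support via the Accept-Encoding request header.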


While a clean page means speed, you have to balance that with the fact
that strong supporting images are key to a successful website. But
there is no reason to sacrifice speed for quality. By ensuring your
images are appropriately formatted and compressed, you can help
increase your website’s speed.


 80% of the end-user response time is spent on the front-end. Most of
this time is tied up in downloading all the components in the page:
images, style sheets, scripts, Flash, etc. Reducing the number of
components reduces the number of HTTP requests required to render
the page. This is the key to faster pages.


The DNS maps hostnames to IP addresses, just as phonebooks map
people's names to their phone numbers. Reducing the number of
unique hostnames has the potential to reduce the amount of parallel
downloading that takes place in the page. Avoiding DNS lookups cuts
response times.


Compiler optimization is the process of tuning the output of
a compiler to minimize or maximize some attributes of an executable
computer program.

The most common requirement is to minimize the time taken to
execute a program and the other common one is to minimize the
amount of memory occupied.

Compiler optimization is generally implemented using a sequence
of optimizing transformations, algorithms which take a program and
transform it to produce an output program that uses less resources.

On the next page we discuss how we can implement
compiler optimization at a basic level.



Reuse results that have already been computed and store them for later
use, instead of recomputing them.
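A small Java sketch of this idea, commonly called memoization (class and method names are ours), using Fibonacci numbers as a stand-in for any expensive computation: each value is computed once, stored, and looked up on later calls.

```java
import java.util.HashMap;
import java.util.Map;

// Memoization: store computed results and reuse them instead of recomputing.
public class Memo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    static long fib(int n) {
        if (n < 2) return n;
        Long cached = cache.get(n);
        if (cached != null) return cached;    // reuse a stored result
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);                  // store for later use
        return value;
    }

    public static void main(String[] args) {
        // Fast; the naive recursion without the cache does exponential work.
        System.out.println(fib(50));
    }
}
```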


Accesses to memory are more expensive for each level of the memory
hierarchy, so place the most commonly used items in registers first.

Remove unnecessary computations and intermediate values. Less work
for the CPU, cache, and memory usually results in faster execution.


Reorder operations to allow multiple computations to happen in
parallel, either at the instruction, memory, or thread level.


Less complicated code. Jumps (conditional or unconditional) interfere
with the prefetching of instructions, thus slowing down code.


These act on the statements which make up a loop. Loops can have a
significant impact as many programs spend a large share of their time inside loops.
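One such loop optimization is loop-invariant code motion, sketched below in Java (names are ours): a computation whose result never changes inside the loop is hoisted out so it runs once instead of once per iteration.

```java
// Loop-invariant code motion: hoist an unchanging computation out of a loop.
public class Hoist {
    static double[] scaleBefore(double[] a, double base) {
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            double factor = Math.sqrt(base);   // invariant: recomputed every pass
            out[i] = a[i] * factor;
        }
        return out;
    }

    static double[] scaleAfter(double[] a, double base) {
        double factor = Math.sqrt(base);       // hoisted: computed once
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = a[i] * factor;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] a = {1.0, 4.0, 9.0};
        // Same result, less repeated work.
        System.out.println(java.util.Arrays.equals(
                scaleBefore(a, 4.0), scaleAfter(a, 4.0)));
    }
}
```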


Code and data that are accessed closely together in time should be
placed close together in memory to increase spatial locality of reference.


The more precise the information the compiler has, the better it can
employ any or all of these optimization techniques.


Java refers to a number of computer software products and
specifications from Sun Microsystems, a subsidiary of Oracle
Corporation, that together provide a system for developing application
software and deploying it in a cross-platform environment.
Java is used in a wide variety of computing platforms.

     Embedded devices and mobile phones
     Enterprise servers and supercomputers
     Web servers and enterprise applications
     Desktop computers and java applets

Many useful techniques exist for optimizing a Java program. We
discuss them in the following slides.



If you have a multiprocessor and a Java VM that can spread threads
across processors, you can improve performance by multithreading.
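A minimal sketch of the idea (names are ours): two halves of an array are summed on separate threads and the partial results are combined after a join.

```java
// Splitting independent work across two threads.
public class ParallelSum {
    static long sumRange(long[] a, int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += a[i];
        return s;
    }

    static long parallelSum(long[] a) throws InterruptedException {
        int mid = a.length / 2;
        long[] partial = new long[2];
        Thread t = new Thread(() -> partial[0] = sumRange(a, 0, mid));
        t.start();                               // first half on another thread
        partial[1] = sumRange(a, mid, a.length); // second half on this thread
        t.join();                                // wait before combining
        return partial[0] + partial[1];
    }

    public static void main(String[] args) throws InterruptedException {
        long[] a = new long[1000];
        for (int i = 0; i < a.length; i++) a[i] = i + 1;
        System.out.println(parallelSum(a));  // 1 + 2 + ... + 1000 = 500500
    }
}
```

The speedup only materializes when the VM can actually schedule the threads on different processors and the work per thread outweighs the thread-management cost.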


You should only use exceptions where you really need them: not only
do they have a high basic cost, but their presence can hinder compiler optimization.
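As a sketch of avoiding exceptions for ordinary control flow (names and the conservative length cutoff are ours): validating the input up front is cheaper than letting parseInt throw on every bad value.

```java
// Check-before vs. catch-after for parsing possibly invalid numbers.
public class AvoidExceptions {
    // Exception-based: a thrown/caught NumberFormatException per bad input
    static int parseOrZeroSlow(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return 0;
        }
    }

    // Check-based: a cheap scan decides without ever throwing
    static int parseOrZeroFast(String s) {
        if (s == null || s.isEmpty()) return 0;
        int start = (s.charAt(0) == '-') ? 1 : 0;
        if (s.length() == start) return 0;       // lone "-"
        if (s.length() - start > 9) return 0;    // conservatively reject possible overflow
        for (int i = start; i < s.length(); i++) {
            if (s.charAt(i) < '0' || s.charAt(i) > '9') return 0;
        }
        return Integer.parseInt(s);              // cannot throw: digits only, fits in int
    }

    public static void main(String[] args) {
        System.out.println(parseOrZeroFast("42"));   // 42
        System.out.println(parseOrZeroFast("oops")); // 0
    }
}
```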


Use clipping to reduce the amount of work done in repainting, double
buffering to improve speed, and image strips or compression to speed up loading.


Use classes from the Java API when they offer native machine
performance that you can't match using Java.

Java inlines a method if it is final, private, or static. If your code spends
lots of time calling a method, consider writing a version that is final.


Doing I/O a single byte at a time is generally too slow to be practical,
so you might get better performance by using a single "bulk" call.
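A minimal Java sketch (names and the 8 KB buffer size are our own choices): each call to read fills an entire buffer instead of returning one byte, so the per-call overhead is paid thousands of times less often on a real stream.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Bulk I/O: read a whole buffer per call instead of one byte per call.
public class BulkRead {
    static int countBytesBulk(InputStream in) throws IOException {
        byte[] buf = new byte[8192];   // one "bulk" call fills up to 8 KB
        int total = 0, n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100_000];
        System.out.println(countBytesBulk(new ByteArrayInputStream(data)));
    }
}
```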


Calling a synchronized method is typically 10 times slower than calling
an unsynchronized method. Avoid synchronized methods if you can.


It takes a long time to create an object, so it's often worth updating the
fields of an old object and reusing it rather than creating a new one.


Database is a collection of information that is organized so that it can
easily be accessed, managed, and updated. In one view, databases can
be classified according to types of content: bibliographic, full-text,
numeric, and images.

Database performance focuses on tuning and optimizing the design,
parameters, and physical construction of database objects, specifically
tables and indexes, and the files in which their data is stored.

The actual composition and structure of database objects must be
monitored continually and changed accordingly if the database
becomes inefficient.
No amount of SQL tweaking or system tuning can optimize the
performance of queries run against a poorly designed or disorganized database.



The relational database is aptly named because it promotes the
strategy of managing data through well-defined relations. By creating
and enforcing relations, it's possible to greatly reduce the chance that
inconsistencies will creep into the data. This strategy is known
as database normalization, of which there are several well-defined
states, also known as normal forms. Database normalization can greatly
reduce the data inconsistencies that can arise over time.


MySQL is often used to dynamically generate web pages based on
similar queries and rarely changing data. Why not cache the data
returned by these queries, thereby bypassing the need to repeatedly
retrieve it from the database? Enabling MySQL's query caching
mechanism can result in a huge performance gain over neglecting to do so.


Once the tables have been properly designed and normalized, you
should next take some time to think about what data will most
commonly be queried, and create special data structures known
as indexes which will dramatically improve the performance of these
query operations. Indexes are important because they organize the
indexed data in a way that allows MySQL to retrieve the desired record
in the fastest possible fashion.


The reasoning behind this step should be obvious: how are you going to
know what should be optimized if you're not actively monitoring how
MySQL is operating? Thankfully, MySQL's developers have been
particularly proactive in providing developers with the tools for keeping
abreast of database performance.


Before optimizing, you should carefully consider whether you need to
optimize at all. Optimization across platforms can be an elusive
target, since its effectiveness varies from one level to another in
different environments.

   If your code already works, optimizing it is a sure way to introduce new,
   and possibly subtle, bugs

   Optimization tends to make code harder to understand and maintain

   Some of the techniques presented increase speed by reducing the
   extensibility of the code

   Optimizing code for one platform may actually make it worse on
   another platform

   A lot of time can be spent optimizing, with little gain in performance,
   and can result in obfuscated code

The presentation Techniques For Performance
Optimizations provides a complete overview of the various
aspects of optimization across different platforms.

It has been made along the lines of a quote “The power
of executing anything with effectiveness is always prized
much by the possessor, with less attention to what
features lie beneath that product.”
Through this presentation we have tried to cover what
optimization is, the various levels and areas of
optimization, and how these techniques can be implemented.
The following programs, books and code snippets helped
us in completing this presentation. Without these
sources, it would not have been as presentable.
Books :-
     All About MySQL, David Gnome

     Information Search And Analysis Skills, NIIT
Websites :-
Search Engines :-
