Partial Method Compilation using Dynamic Profile Information
John Whaley, Stanford University
October 17, 2001

Outline
• Background and Overview
• Dynamic Compilation System
• Partial Method Compilation Technique
• Optimizations
• Experimental Results
• Related Work
• Conclusion

Dynamic Compilation
• We want code performance comparable to static compilation techniques
• However, we want to avoid long startup delays and slow responsiveness
• Dynamic compiler should be fast AND good

Traditional approach
• Interpreter plus optimizing compiler
• Switch from interpreter to optimizing compiler via some heuristic

Problems:
• Interpreter is too slow! (10x to 100x)

Another approach
• Simple compiler plus optimizing compiler (Jalapeno, JUDO, Microsoft)
• Switch from simple to optimizing compiler via some heuristic

Problems:
• Code from simple compiler is still too slow! (30% to 100% slower than optimizing)
• Memory footprint problems (Suganuma et al., OOPSLA’01)

Yet another approach
• Multi-level compilation (Jalapeno, HotSpot)
• Use multiple compiled versions to slowly “accelerate” into optimized execution

Problems:
• This simply increases the delay before the program runs at full speed!

Problem with compilation
• Compilation takes time proportional to the amount of code being compiled
• Many optimizations are superlinear in the size of the code
• Compilation of large amounts of code is the cause of undesirably long compilation times

Methods can be large
• All of these techniques operate at method boundaries
• Methods can be large, especially after inlining
• Cutting inlining too much hurts performance considerably (Arnold et al., Dynamo’00)
• Even when being frugal about inlining, methods can still become very large

Methods are poor boundaries
• Method boundaries do not correspond very well to the code that would most benefit from optimization
• Even “hot” methods typically contain some code that is rarely or never executed

Example: SpecJVM db
    void read_db(String fn) {
      int n = 0, act = 0;
      int b;
      byte buffer[] = null;
      try {
        FileInputStream sif = new FileInputStream(fn);
        buffer = new byte[n];
        while ((b = sif.read(buffer, act, n - act)) > 0) {   // <-- hot loop
          act = act + b;
        }
        sif.close();
        if (act != n) {
          /* lots of error handling code, rare */
        }
      } catch (IOException ioe) {
        /* lots of error handling code, rare */
      }
    }

Example: SpecJVM db
    void read_db(String fn) {
      int n = 0, act = 0;
      int b;
      byte buffer[] = null;
      try {
        FileInputStream sif = new FileInputStream(fn);
        buffer = new byte[n];
        while ((b = sif.read(buffer, act, n - act)) > 0) {
          act = act + b;
        }
        sif.close();
        if (act != n) {
          /* lots of error handling code, rare */     // <-- lots of rare code!
        }
      } catch (IOException ioe) {
        /* lots of error handling code, rare */       // <-- lots of rare code!
      }
    }

Hot “regions”, not methods
• The regions that are important to compile have nothing to do with the method boundaries
• Using a method granularity causes the compiler to waste time optimizing large pieces of code that do not matter

Overview of our technique
Increase the precision of selective compilation to operate at a sub-method granularity:
1. Collect basic-block-level profile data for hot methods
2. Recompile using the profile data, replacing rare code entry points with branches into the interpreter

Overview of our technique
• Takes advantage of the well-known fact that a large amount of code is rarely or never executed
• Simple to understand and implement, yet highly effective
• Beneficial secondary effect of improving optimization opportunities on the common paths

Overview of Dynamic Compilation System

• Stage 1: interpreted code
    ↓ when execution count = t1
• Stage 2: compiled code
    ↓ when execution count = t2
• Stage 3: fully optimized code
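As a rough illustration of these stage transitions, here is a minimal Java sketch of counter-driven promotion. The class and method names are invented for illustration, and the thresholds simply reuse the t1 = 2000 and t2 = 25000 values quoted later in the experimental setup; this is not the system's actual implementation.

    class StagedMethod {
        static final int T1 = 2000;   // interpreted -> compiled (with profiling)
        static final int T2 = 25000;  // compiled -> fully optimized
        int executionCount = 0;
        int stage = 1;

        // Invoked on method entry (or on back edges) while the method is
        // still in Stage 1 or Stage 2.
        void countAndMaybePromote() {
            executionCount++;
            if (stage == 1 && executionCount >= T1) {
                stage = 2;   // compile, with block-level instrumentation enabled
            } else if (stage == 2 && executionCount >= T2) {
                stage = 3;   // recompile, replacing rare blocks with interpreter transfers
            }
        }
    }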

Identifying rare code
Simple technique: any basic block executed during Stage 2 is said to be hot
• Effectively ignores initialization
• Add instrumentation to the targets of conditional forward branches
• Better techniques exist, but using this we saw no performance degradation
• Enable/disable profiling is implicitly handled by stage transitions
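A minimal sketch of what this Stage 2 instrumentation might look like: a per-block "seen" flag set at each instrumented branch target, with unobserved blocks treated as rare. The class and method names are illustrative assumptions, not the actual system's data structures.

    class BlockProfile {
        final boolean[] executed;   // one flag per instrumented basic block

        BlockProfile(int numBlocks) {
            executed = new boolean[numBlocks];
        }

        // Instrumentation inserted at the target of a conditional forward branch.
        void touch(int blockId) {
            executed[blockId] = true;
        }

        // Any block never observed during Stage 2 is considered rare.
        boolean isRare(int blockId) {
            return !executed[blockId];
        }
    }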

Method-at-a-time strategy
[Chart: percentage of basic blocks compiled under a method-at-a-time strategy vs. execution threshold (1 to 5000), for Linpack, JavaCUP, JavaLEX, SwingSet, check, compress, jess, db, javac, mpegaud, mtrt, and jack]

Actual basic blocks executed
[Chart: percentage of basic blocks actually executed vs. execution threshold (1 to 5000), for the same benchmarks]

Partial method compilation technique

Technique
1. Based on profile data, determine the set of rare blocks.
   • Use code coverage information from the first compiled version

Technique
2. Perform live variable analysis.
   • Determine the set of live variables at rare block entry points
   [Figure: a rare block entry annotated “live: x, y, z”]
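For concreteness, a minimal sketch of a standard backward liveness pass over a toy CFG; the Block structure and the use/def sets here are illustrative assumptions rather than the system's actual IR.

    import java.util.*;

    class Block {
        Set<String> use = new HashSet<>(), def = new HashSet<>();
        List<Block> succs = new ArrayList<>();
        Set<String> liveIn = new HashSet<>(), liveOut = new HashSet<>();
    }

    class Liveness {
        // Standard iterative dataflow: liveIn = use ∪ (liveOut \ def),
        // liveOut = union of successors' liveIn. Iterate to a fixed point;
        // the liveIn set of each rare block is what gets recorded at its entry.
        static void compute(List<Block> blocks) {
            boolean changed = true;
            while (changed) {
                changed = false;
                for (Block b : blocks) {
                    Set<String> out = new HashSet<>();
                    for (Block s : b.succs) out.addAll(s.liveIn);
                    Set<String> in = new HashSet<>(out);
                    in.removeAll(b.def);
                    in.addAll(b.use);
                    if (!in.equals(b.liveIn) || !out.equals(b.liveOut)) {
                        b.liveIn = in;
                        b.liveOut = out;
                        changed = true;
                    }
                }
            }
        }
    }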

Technique
3. Redirect the control flow edges that targeted rare blocks, and remove the rare blocks.
   [Figure: the redirected edges now branch “to interpreter…”]
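A minimal sketch of this edge redirection on a toy CFG: every edge into a rare block is retargeted at an interpreter-transfer stub, and the rare blocks are dropped from the compilation unit. CfgBlock and the rare flag are illustrative assumptions, not the real compiler IR.

    import java.util.*;

    class CfgBlock {
        boolean rare;
        List<CfgBlock> succs = new ArrayList<>();
    }

    class RareBlockRemoval {
        static void redirect(List<CfgBlock> blocks, CfgBlock interpreterTransfer) {
            for (CfgBlock b : blocks) {
                if (b.rare) continue;                        // rare blocks are being discarded
                for (int i = 0; i < b.succs.size(); i++) {
                    if (b.succs.get(i).rare) {
                        b.succs.set(i, interpreterTransfer); // edge now branches to the interpreter
                    }
                }
            }
            blocks.removeIf(b -> b.rare);                    // remove the rare blocks themselves
        }
    }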

Technique
4. Perform compilation normally.
   • Analyses treat the interpreter transfer point as an unanalyzable method call.

Technique
5. Record a map for each interpreter transfer point.
   • In code generation, generate a map that specifies the location, in registers or memory, of each of the live variables.
   • Maps are typically < 100 bytes
   [Figure: live: x, y, z; map: x at sp - 4, y in R1, z at sp - 8]
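A minimal sketch of what such a map could look like as a data structure, keyed by variable name with a register-or-stack-slot location. The class and field names are illustrative assumptions; the usage comment mirrors the slide's example (x at sp - 4, y in R1, z at sp - 8).

    import java.util.*;

    class TransferMap {
        static final class Location {
            final boolean inRegister;
            final int regOrOffset;     // register number, or byte offset from sp
            Location(boolean inRegister, int regOrOffset) {
                this.inRegister = inRegister;
                this.regOrOffset = regOrOffset;
            }
        }

        final int bytecodeIndex;       // where interpretation should resume
        final Map<String, Location> liveVars = new LinkedHashMap<>();

        TransferMap(int bytecodeIndex) {
            this.bytecodeIndex = bytecodeIndex;
        }
    }

    // Example corresponding to the slide:
    //   map.liveVars.put("x", new TransferMap.Location(false, -4));  // sp - 4
    //   map.liveVars.put("y", new TransferMap.Location(true, 1));    // R1
    //   map.liveVars.put("z", new TransferMap.Location(false, -8));  // sp - 8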

Optimizations

Partial dead code elimination
• Modified dead code elimination to treat rare blocks specially
• Move computation that is only live on a rare path into the rare block, saving computation in the common case

Partial dead code elimination
• Optimistic approach on SSA form
  • Mark all instructions that compute essential values, recursively
  • Eliminate all non-essential instructions

Partial dead code elimination
• Calculate necessary code, ignoring all rare blocks
• For each rare block, calculate the instructions that are necessary for that rare block, but not necessary in non-rare blocks
• If these instructions are recomputable at the point of the rare block, they can be safely copied there
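A minimal sketch of this rare-block-aware marking phase on a toy instruction list (not real SSA form); the Instr fields and the purity test are illustrative assumptions. The x = 0 example on the following slides shows the resulting sinking.

    import java.util.*;

    class Instr {
        String def;                       // variable defined by this instruction (may be null)
        List<String> uses = new ArrayList<>();
        boolean sideEffect;               // stores, calls, returns, etc.
        boolean inRareBlock;
        boolean pure;                     // safe to recompute elsewhere
    }

    class PartialDCE {
        // Mark instructions transitively needed by side effects in non-rare blocks.
        static Set<Instr> essentialIgnoringRare(List<Instr> instrs) {
            Set<Instr> essential = new HashSet<>();
            Deque<Instr> work = new ArrayDeque<>();
            for (Instr i : instrs)
                if (i.sideEffect && !i.inRareBlock) { essential.add(i); work.add(i); }
            while (!work.isEmpty()) {
                Instr i = work.pop();
                for (String u : i.uses)
                    for (Instr d : instrs)
                        if (u.equals(d.def) && essential.add(d)) work.add(d);
            }
            return essential;
        }

        // An instruction needed only by rare blocks can be copied ("sunk") into
        // those blocks if it is pure, i.e. safe to recompute there.
        static boolean canSink(Instr i, Set<Instr> essential) {
            return !essential.contains(i) && i.pure && i.def != null;
        }
    }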

Partial dead code example
    x = 0;
    if (rare branch 1) {
      ...
      z = x + y;
      ...
    }
    if (rare branch 2) {
      ...
      a = x + z;
      ...
    }

Partial dead code example
    if (rare branch 1) {
      x = 0;
      ...
      z = x + y;
      ...
    }
    if (rare branch 2) {
      x = 0;
      ...
      a = x + z;
      ...
    }

Pointer and escape analysis
• Treating an entrance to the rare path as a method call is a conservative assumption
• Typically does not matter because there are no merges back into the common path
• However, this conservativeness hurts pointer and escape analysis because a single unanalyzed call kills all information

Pointer and escape analysis
• Stack allocate objects that don’t escape in the common blocks
• Eliminate synchronization on objects that don’t escape the common blocks
• If a branch to a rare block is taken:
  • Copy stack-allocated objects to the heap and update pointers
  • Reapply eliminated synchronizations

Copying from stack to heap
[Figure: a stack-allocated object is copied into the heap, and pointers to the stack object are rewritten to refer to the heap copy]
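A minimal sketch of the copy step illustrated above, assuming hypothetical runtime helpers: each stack-allocated object gets a heap copy, and the returned mapping is what the rewrite step uses to fix up live pointers. None of these names come from the actual system.

    import java.util.*;

    class StackToHeap {
        // Returns a map from each stack-allocated object to its new heap copy;
        // the caller then rewrites any live pointer still referring to a stack copy.
        static Map<Object, Object> migrate(List<Object> stackAllocated) {
            Map<Object, Object> heapCopy = new IdentityHashMap<>();
            for (Object o : stackAllocated) {
                heapCopy.put(o, cloneToHeap(o));   // the "copy" arrow in the figure
            }
            return heapCopy;                        // used by the "rewrite" step
        }

        private static Object cloneToHeap(Object o) {
            // In a real VM this would be a field-by-field copy into a freshly
            // heap-allocated object; this placeholder only marks where that happens.
            return o;
        }
    }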

Reconstructing interpreter state
• We use a runtime “glue” routine
• Construct a set of interpreter stack frames, initialized with their corresponding method and bytecode pointers
• Iterate through each location pair in the map, and copy the value at the location to its corresponding position in the interpreter stack frame
• Branch into the interpreter, and continue execution
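A minimal sketch of such a glue routine, reusing the TransferMap type sketched earlier; InterpreterFrame and the resume/stack-read helpers are illustrative placeholders, not the real runtime interface.

    import java.util.Map;

    class InterpreterFrame {
        final int bytecodeIndex;
        final Map<String, Long> locals = new java.util.HashMap<>();
        InterpreterFrame(int bci) { bytecodeIndex = bci; }
        void setLocal(String name, long value) { locals.put(name, value); }
    }

    class DeoptGlue {
        // Build an interpreter frame, copy each live value from the location the
        // map recorded for it, then hand control to the interpreter.
        static void transferToInterpreter(TransferMap map, long sp, long[] registers) {
            InterpreterFrame frame = new InterpreterFrame(map.bytecodeIndex);
            for (Map.Entry<String, TransferMap.Location> e : map.liveVars.entrySet()) {
                TransferMap.Location loc = e.getValue();
                long value = loc.inRegister
                        ? registers[loc.regOrOffset]           // saved register contents
                        : readStackSlot(sp, loc.regOrOffset);  // compiled frame's stack slot
                frame.setLocal(e.getKey(), value);
            }
            resumeInterpreter(frame);                           // continue execution in the interpreter
        }

        static long readStackSlot(long sp, int offset) {
            return 0;  // placeholder for a platform-specific load from sp + offset
        }

        static void resumeInterpreter(InterpreterFrame frame) {
            // placeholder: branch into the interpreter at frame.bytecodeIndex
        }
    }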

Experimental Results

Experimental Methodology
• Fully implemented in a proprietary system
• Unfortunately, cannot publish those numbers!
• Proof-of-concept implementation in the joeq virtual machine: http://joeq.sourceforge.net
• Unfortunately, joeq does not perform significant optimizations!

Experimental Methodology
• Also implemented as an offline step, using refactored class files
• Use offline profile information to split methods into “hot” and “cold” parts
• We then rely on the virtual machine’s default method-at-a-time strategy
• Provides a reasonable approximation of the effectiveness of this technique
• Can also be used as a standalone optimizer
• Available under LGPL as part of the joeq release

Experimental Methodology
• IBM JDK 1.3 cx130-20010626 on RedHat Linux 7.1
• Pentium 3, 600 MHz, 512 MB RAM
• Thresholds: t1 = 2000, t2 = 25000
• Benchmarks: SpecJVM, SwingSet, Linpack, JavaLex, JavaCup

Run time improvement
[Chart: run time relative to the original (first bar, 0–100%) for check, compress, jess, db, javac, mpegaud, mtrt, jack, SwingSet, linpack, JLex, and JCup. Second bar: PMC; third bar: PMC + my optimizations; blue portion: optimized execution]

Related Work
Dynamic techniques
• Dynamo (Bala et al., PLDI’00)
• Self (Chambers et al., OOPSLA’91)
• HotSpot (JVM’01)
• IBM JDK (Ishizaki et al., OOPSLA’00)

Related Work
Static techniques
• Trace scheduling (Fisher, 1981)
• Superblock scheduling (IMPACT compiler)
• Partial redundancy elimination with cost-benefit analysis (Horspool, 1997)
• Optimal compilation unit shapes (Bruening, FDDO’00)
• Profile-guided code placement strategies

Conclusion
• Partial method compilation technique is simple to implement, yet very effective
• Compile times reduced drastically
• Overall run times improved by an average of 10%, and up to 32%
• System is available under LGPL at: http://joeq.sourceforge.net


								