Improving Application Performance Using JetBrains' dotTrace 1.1
                             product review by Craig Murphy
Earlier this year I changed jobs. During February, March and April, I found myself writing a C# 2.0
application completely from scratch. I had, as they say, a blank canvas. I must say that it was good to find
myself spending time inside an IDE, Visual Studio 2005 in this instance – in my previous job I seemed to
spend more time travelling than actually working!
Because I was employing some of the tenets of eXtreme Programming (XP), my new application took shape
fairly quickly. I was doing the simplest thing that could possibly work, refining and refactoring as I went,
and practising test-driven development to boot. Life was sweet.
However, at the back of my mind I knew that I would have a performance problem…in the reporting functions.
I had written the reporting functions such that they worked and were reasonably fast [can I say performant?
It’s not in the dictionary, but perhaps it should be?], but only when applied to very small datasets, i.e. fewer
than 5 records. As the number of records increased, say to 15, performance took a big hit; so much so that I
had to introduce the hourglass and was considering progress bars. Feeling a little defeated, I set about finding
a solution to this problem.

Identifying Non-Performant Code
Solving the problem, which manifested itself in this instance as a very slow report, meant that I had to
identify the poorly performing code. Without thinking too hard, I can think of three ways of doing this:
1.    Use the debugger to step through the reporting code
2.    Augment my code using manual profiling techniques, such as calls to QueryPerformanceCounter[1]
3.    Use a third-party profiling application, such as dotTrace[2], to help me pinpoint the code fragments that
      will give me the biggest improvement for the least work
Of course, the first option is a real time-sink: it forces us to trace in, step over, etc., and I know from
experience that mixing trace in with step over can lead to even more frustration. I don’t want to come across
as someone who belittles the debugger: it has its place – in the absence of codified tests it is great for small,
localised testing – but it’s not a big help when trying to weed out non-performant code.
The second option is a little better, but it does mean that you have to add extra code to your application,
which could slow it down even more and does introduce the possibility of new bugs. It can be a useful
technique if you want to gather some basic statistics about a small number of functions in your application,
perhaps for a benchmarking exercise – how fast is your code?
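As a rough sketch of that second technique, here is how a single fragment might be timed in C# 2.0 using System.Diagnostics.Stopwatch, which wraps QueryPerformanceCounter on Windows. ReportCalculation is a hypothetical stand-in for the code under measurement, not anything from my application:

```csharp
using System;
using System.Diagnostics;

class ManualTiming
{
    // Hypothetical stand-in for the fragment being measured.
    static double ReportCalculation()
    {
        double total = 0;
        for (int i = 0; i < 1000000; i++)
            total += Math.Sqrt(i);
        return total;
    }

    static void Main()
    {
        // Stopwatch uses the high-resolution performance counter
        // (QueryPerformanceCounter) when one is available.
        Stopwatch timer = Stopwatch.StartNew();
        ReportCalculation();
        timer.Stop();

        Console.WriteLine("ReportCalculation took {0} ms", timer.ElapsedMilliseconds);
    }
}
```

This is fine for timing one or two suspected hotspots, but the instrumentation has to be added and removed by hand, which is exactly why it does not scale to a whole application.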
The third option strikes me as being the most repeatable and the best in terms of its global reach across an
entire application. I looked at a number of profilers, some free, some not free. Frankly, in terms of ease of
use, functionality and the all-important “works/results out of the box”, dotTrace worked for me. I was able
to install it, use it and identify poorly performing code within minutes of running my application through it.

What is dotTrace?
In a nutshell, it’s a standalone application that can help you identify poorly performing code, bottlenecks
and pieces of code that are in need of optimisation. Its main goal is to help you find the parts of your
application that need attention such that, after you’ve seen what’s slowing your application down, you can
then get to work doing something about it.
Users will typically ask you to “make that bit go faster”. To users, “that bit” is the result of pushing a button.
To you, “that bit” is probably a single method call that creates new class instances, calls other methods,
performs some preparation, talks to Excel via COM, etc. In other words, “that bit” does a whole lot more
than just one single method call, something that the user interface shields from the user, and rightly so. The
beauty of dotTrace is the fact that it is very granular. Granular in that you can use it to locate “that bit”, then
drill down into the various class instantiations, method calls, COM instances, etc. and identify just how long
each and every one of them took. It’s truly a godsend.

The Developers Group Magazine July/August 2006                      Page 1        return to index
Getting Started
As an application, dotTrace has a pleasing user interface that sports an appropriate mix of web-page design,
as shown by the opening screen in Figure 1, and traditional Windows controls. Personally, I am a fan of the
web-page style of application as it’s usually associated with a simplification process that sees a complex
menu-based application reduced to a minimalist application with a solid feature base yet with more
appropriate navigation structures. I’m digressing; back to dotTrace.

                                   Figure 1 – dotTrace’s opening screen
Looking again at Figure 1, dotTrace offers us the ability to profile either an application or a web application.
For the purpose of this review, I will concentrate on profiling an application. We can also open a snapshot.
Snapshots are essentially the profile logs generated by dotTrace for the application being profiled – I’ll discuss
snapshots in more detail in the next section.

How does dotTrace work?
Because .NET applications target the CLR, the .exe files created by Delphi 2005, Delphi 2006 and all
versions of Visual Studio contain Intermediate Language (IL) that can be profiled. Whilst dotTrace is able
to profile a “release” .exe, i.e. one without any debug information present, by profiling the debug version
dotTrace is able to show you the offending source code too (more about this feature in a moment).
In order that dotTrace can do its job, we have to run our application through dotTrace. This is actually a
simple step – all we need to do is point dotTrace at our application’s .exe. As soon as we’ve done that,
dotTrace starts profiling (recording) what the application is doing. Now, dotTrace won’t go off and find the
bottlenecks in your application, you have to help it by actually demonstrating the bottleneck. For me, this
was the simple process of running my application through dotTrace, choosing a Project with 15+ records
and then clicking on Reports. This allows dotTrace to monitor which methods are being called, etc. whilst
it is also recording how long each method call took.
Without dotTrace watching what was going on, this activity would normally take around 10-15 seconds,
which feels like an eternity when you’re watching an hourglass or a progress bar. However, with dotTrace
monitoring your application, things slow down significantly – this same operation took minutes to perform.
Please don’t see this as a negative selling point: code profiling is a resource intensive process, both in terms
of disk space for the snapshots and time for the actual monitoring activity. On the plus side, the information
we get back from dotTrace is very detailed and can be used to our advantage: in other words, it’s time well spent.
In reality, though, this report should be near enough instant. So what was wrong? To answer that question,
we need to look at the profiling snapshot. Figure 2 presents an initial view of the profiled output – I have
deliberately opened the treeview such that we can see the report in question (PopulateSummaryGrid): some
96.21% of the “uptime” for the application was spent in that method. Given that the treeview is rather wide,
JetBrains have provided us with a right-click context menu that lets us move the profile information for
PopulateSummaryGrid on to a separate tab, as shown in Figure 3.
Referring to Figure 3, we can see that get_GrandTotal is taking some 34.99% of the total time spent in the
PopulateSummaryGrid method, 18.06% for get_MarketAdjustment and 17.48% for get_Total5050ModValue.
I think we might have found the source of the delays in this particular report!

                            Figure 2 – Before optimisation

                            Figure 3 – PopulateSummaryGrid on a separate tab

Identifying The Problem

            public double CostB
            {
                get { return GetRatioEquipmentLabourCost() + labourR + projectCostsLabour; }
            }

            public double CostA
            {
                get { return GetActualEquipmentCost() + bulksX + projectCostsBulks; }
            }

            public double Commissioning
            {
                get { return (summaryPercentages.Commissioning / 100) * (CostA + CostB); }
            }

                                                   Listing 1
Now that we have found the source of the delays, we need to work out precisely why the code is so slow.
Before I focus on the findings hinted at in Figure 3, I would like you to take a look at Listing 1. It presents a
few lines of code that read very well and look elegant…but there is an inherent performance problem there
too. Calculating CostB and CostA, which are the key drivers for the PopulateSummaryGrid method, does
what it says it will do.
CostA and CostB, however, rely on a method and a couple of other methods and properties for their
calculations. They take some time, but not a lot, as they each have to work out costs based on querying some
220 ratios…it’s a little too complicated to explain the whole operation, so you’ll have to trust me that they
are doing rather a lot, but in an acceptable amount of time. It’s only acceptable, however, if it’s not repeated.
And that’s the problem; it is repeated rather a lot.
The real problem lies within the Commissioning property: it relies on the properties CostA and CostB for its
calculation. So after PopulateSummaryGrid has retrieved values from CostA and CostB, when it then retrieves
a value for Commissioning, CostA and CostB are recalculated all over again. Figure 4 might make things a little clearer.
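A stripped-down sketch (hypothetical names and numbers, not my actual code) makes the cost of this chaining concrete by counting how often the expensive base calculation actually runs:

```csharp
using System;

class ChainedCosts
{
    // Counts how many times the expensive work really executes.
    static int evaluations = 0;

    static double ExpensiveBase()
    {
        evaluations++;      // stands in for querying some 220 ratios
        return 100.0;
    }

    static double CostA { get { return ExpensiveBase(); } }
    static double CostB { get { return ExpensiveBase(); } }

    // Commissioning re-reads CostA and CostB, re-triggering the base work.
    static double Commissioning { get { return 0.05 * (CostA + CostB); } }

    static void Main()
    {
        double a = CostA;           // 1st evaluation
        double b = CostB;           // 2nd evaluation
        double c = Commissioning;   // 3rd and 4th evaluations

        Console.WriteLine("Base calculation ran {0} times", evaluations);  // 4
    }
}
```

Three property reads cost four base evaluations; with the longer chains of Listing 2 (Total5050, GrandTotal, and so on) the re-evaluation count snowballs, which is exactly what the snapshot percentages were showing.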

                                      Figure 4 - Here lies the problem!

            public double CostB
            {
                get { return GetRatioEquipmentLabourCost() + labourR + projectCostsLabour; }
            }

            public double CostA
            {
                get { return GetActualEquipmentCost() + bulksX + projectCostsBulks; }
            }

            public double Commissioning
            {
                get { return (summaryPercentages.Commissioning / 100) * (CostA + CostB); }
            }

            public double InsuranceAndCertification
            {
                get { return (summaryPercentages.InsuranceAndCertification / 100) *
                             (CostA + CostB + Commissioning); }
            }

            public double OSBL
            {
                get { return (summaryPercentages.OSBL / 100) *
                             (CostA + CostB + InsuranceAndCertification + Commissioning); }
            }

            public double REACCosts
            {
                get { return (summaryPercentages.REACCosts / 100) *
                             (CostA + CostB + OSBL + InsuranceAndCertification + Commissioning); }
            }

            public double EngineeringManagement
            {
                get { return (summaryPercentages.EngineeringManagement / 100) *
                             (CostA + CostB + REACCosts + OSBL +
                              InsuranceAndCertification + Commissioning); }
            }

            public double TotalBaseEstimate
            {
                get { return CostA + CostB + Commissioning + InsuranceAndCertification +
                             OSBL + REACCosts + EngineeringManagement; }
            }

            public double Contingency
            {
                get { return (summaryPercentages.Contingency / 100) * TotalBaseEstimate; }
            }

            public double Total5050
            {
                get { return Contingency + TotalBaseEstimate; }
            }

            public double Escalation
            {
                get { return (summaryPercentages.Escalation / 100) * Total5050; }
            }

            public double Total5050ModValue
            {
                get { return Total5050 + Escalation; }
            }

            public double MarketAdjustment
            {
                get { return (summaryPercentages.MarketAdjustment / 100) * Total5050ModValue; }
            }

            public double GrandTotal
            {
                get { return Total5050ModValue + otherCostTotal + MarketAdjustment; }
            }
            Listing 2 – It works, it looks readable, it’s understandable, but it’s very slow…
Listing 2 presents all of the cost properties – you can see why it takes so long. On the positive side of things,
at least the calculations are easy to understand, even if their performance leaves a lot to be desired. However,
using the XP ethos of the simplest thing that could possibly work, this is what I ended up with.
For the sake of completeness, you really should know what PopulateSummaryGrid looks like before any
performance improvements are made – Listing 3 does just that. The code presented in Listing 3 uses the
properties in Listing 2, which is where the performance bottleneck kicks in.

           private void PopulateSummaryGrid()
           {
               int rowNo;
               DataGridViewRow row;
               string[] row0 = new string[4];

               AddSummaryRow("1", "PROCUREMENT", thisProject.CostA);
               AddSummaryRow("2", "CONSTRUCTION", thisProject.CostB);
               AddSummaryRow("3", "COMMISSIONING", thisProject.Commissioning);

               AddSummaryRow("6", "INSURANCE AND CERTIFICATION",
                             thisProject.InsuranceAndCertification);
               AddSummaryRow("7", "OSBL", thisProject.OSBL);
               AddSummaryRow("8", "REAC COSTS", thisProject.REACCosts);

               AddSummaryRow("4", "ENGINEERING/MANAGEMENT", thisProject.EngineeringManagement);

               rowNo = AddSummaryRow("1", "TOTAL BASE ESTIMATE", thisProject.TotalBaseEstimate);
               row = dvgSummary.Rows[rowNo];
               row.DefaultCellStyle.Font = new Font(this.Font.FontFamily, dvgSummary.Font.Size,
                                                    FontStyle.Bold);

               AddSummaryRow("9", "CONTINGENCY/PROVISIONS", thisProject.Contingency);

               rowNo = AddSummaryRow("", "TOTAL 50/50 ESTIMATE", thisProject.Total5050);
               row = dvgSummary.Rows[rowNo];
               row.DefaultCellStyle.Font = new Font(this.Font.FontFamily, dvgSummary.Font.Size,
                                                    FontStyle.Bold);

               AddSummaryRow("X", "ESCALATION", thisProject.Escalation);
               AddSummaryRow("", "TOTAL 50/50 MOD VALUE", thisProject.Total5050ModValue);
               AddSummaryRow("Y", "MARKET ADJUSTMENT", thisProject.MarketAdjustment);
               AddSummaryRow("", "OTHERS", thisProject.otherCostTotal);

               rowNo = AddSummaryRow("", "GRAND TOTAL", thisProject.GrandTotal);
               row = dvgSummary.Rows[rowNo];
               row.DefaultCellStyle.Font = new Font(this.Font.FontFamily, dvgSummary.Font.Size,
                                                    FontStyle.Bold);
           }
                       Listing 3 – PopulateSummaryGrid, before optimisation
Also for completeness, Figure 5 presents the output of PopulateSummaryGrid. Notice the Code column: 1 is
CostA, 2 is CostB, the remaining items (3,6,7,8,4, etc.) are the properties as shown in Listing 2.

                               Figure 5 – What we’re trying to optimise

Solving the Problem
Now that we’ve identified the problem, what can we do to solve it? Obviously we’re now a lot more focused
and we know which lines of code will give us the most performance gain if we can re-write them in a more
optimal fashion. There are probably a few ways of making this calculation a lot faster…my initial thoughts
were to calculate each of the properties once and once only, storing the results as we go. Listing 4 presents
the optimised version.

   public class SummaryTotals
   {
       public double CostA = 0;
       public double CostB = 0;
       public double Commissioning = 0;
       public double InsuranceAndCertification = 0;
       public double OSBL = 0;
       public double REACCosts = 0;
       public double EngineeringManagement = 0;
       public double TotalBaseEstimate = 0;
       public double Contingency = 0;
       public double Total5050 = 0;
       public double Escalation = 0;
       public double Total5050ModValue = 0;
       public double MarketAdjustment = 0;
       public double GrandTotal = 0;

       public SummaryTotals(Project project)
       {
           DetailTotals detailTotals = new DetailTotals(project);

           CostA = detailTotals.totalEPCBulks;
           CostB = detailTotals.totalEPCLabour;

           double AandB = CostA + CostB;

           Commissioning             = (project.summaryPercentages.Commissioning / 100) * AandB;
           InsuranceAndCertification = (project.summaryPercentages.InsuranceAndCertification / 100) *
                                       (AandB + Commissioning);
           OSBL                      = (project.summaryPercentages.OSBL / 100) *
                                       (AandB + Commissioning + InsuranceAndCertification);
           REACCosts                 = (project.summaryPercentages.REACCosts / 100) *
                                       (AandB + Commissioning + InsuranceAndCertification + OSBL);
           EngineeringManagement     = (project.summaryPercentages.EngineeringManagement / 100) *
                                       (AandB + Commissioning + InsuranceAndCertification +
                                        OSBL + REACCosts);

           TotalBaseEstimate = AandB + Commissioning + InsuranceAndCertification +
                               OSBL + REACCosts + EngineeringManagement;
           Contingency       = (project.summaryPercentages.Contingency / 100) * TotalBaseEstimate;
           Total5050         = Contingency + TotalBaseEstimate;
           Escalation        = (project.summaryPercentages.Escalation / 100) * Total5050;
           Total5050ModValue = Total5050 + Escalation;
           MarketAdjustment  = (project.summaryPercentages.MarketAdjustment / 100) * Total5050ModValue;
           GrandTotal        = Total5050ModValue + project.otherCostTotal + MarketAdjustment;
       }
   }
                 Listing 4 – after some profiling, a little refactoring, what a difference
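For completeness, another way to get "calculate once" semantics – a sketch of an alternative I did not use in the article's code, built on C# 2.0 nullable backing fields with illustrative names and values – is to cache each property on first read:

```csharp
using System;

class CachedCosts
{
    // null means "not calculated yet"; the first read fills the cache.
    private double? costA;
    private int calculationRuns = 0;

    public double CostA
    {
        get
        {
            if (!costA.HasValue)
                costA = ExpensiveEquipmentCost();   // runs once only
            return costA.Value;
        }
    }

    public int CalculationRuns { get { return calculationRuns; } }

    // Illustrative stand-in for the real ratio-querying work.
    private double ExpensiveEquipmentCost()
    {
        calculationRuns++;
        return 1234.56;
    }

    static void Main()
    {
        CachedCosts c = new CachedCosts();
        Console.WriteLine(c.CostA);            // triggers the calculation
        Console.WriteLine(c.CostA);            // served from the cache
        Console.WriteLine(c.CalculationRuns);  // 1
    }
}
```

The catch with this style is invalidation: every cached field would need resetting to null whenever the user tweaks one of the percentages, which is why a totals object that is simply rebuilt from scratch, as in Listing 4, can be the less error-prone choice.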

                                      Figure 6 - after optimisation
So what has changed? Well, we no longer spend 96.21% of our application’s processing time in
PopulateSummaryGrid; we now spend just 0.45%. In other words, instead of taking some 415048ms to calculate
the costs that populate the grid, it now takes just 163.83ms. Figure 6 presents the optimised snapshot;
Figure 7 presents a snapshot for PopulateSummaryGrid on a new tab.

                                      Figure 7 - PopulateSummaryGrid after optimisation
The whole cycle – installing dotTrace, identifying the bottleneck, then implementing a solution – took less
than one hour. Whilst not shown in this review, the user has the option to tweak the percentages for the
various properties (Commissioning, OSBL, etc.), which in turn forces PopulateSummaryGrid to be called
each time…you can imagine how fast that is now: virtually instant!

Other dotTrace Features
Snapshots can be large and possibly unwieldy beasts that make locating a particular method somewhat
difficult. Luckily, dotTrace has a Find Function feature (accessible via speed button and keyboard shortcut).
Find Function lets us start typing the name of the class or method name into an incremental search; it will
then narrow down the matches accordingly. Figure 8 depicts Find Function after it has been asked to look
for “Pop”.
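The incremental-search behaviour is easy to picture in a few lines of code – a simplified sketch, not dotTrace's actual implementation, with illustrative method names:

```csharp
using System;
using System.Collections.Generic;

class FindFunctionSketch
{
    // Narrow a list of class/method names as the user types,
    // the way dotTrace's Find Function does (simplified).
    static List<string> Narrow(List<string> names, string typedSoFar)
    {
        List<string> matches = new List<string>();
        string needle = typedSoFar.ToLower();
        foreach (string name in names)
            if (name.ToLower().Contains(needle))
                matches.Add(name);
        return matches;
    }

    static void Main()
    {
        List<string> methods = new List<string>();
        methods.Add("PopulateSummaryGrid");
        methods.Add("PopulateDetailGrid");
        methods.Add("GetRatioEquipmentLabourCost");

        // Typing "Pop" narrows three candidates down to two.
        foreach (string m in Narrow(methods, "Pop"))
            Console.WriteLine(m);
    }
}
```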

                                      Figure 8 - Using Find Function
If you have been using .NET for any length of time, you’ll know just how vast it is. Vastness can be good;
however it does mean that your snapshot logs fill up rather quickly. Fortunately, dotTrace has the ability to
filter what you see. There are a few combinations that either hide or collapse items in the call tree display. I
was pleased to see that the filters accept “patterns”, e.g. System.* as shown in Figure 9.

                                             Figure 9 - Filtering

The dotTrace API
Whilst the Find Function and Filters allow us rapidly to find bottlenecks, etc. there may come a time when
you would like to integrate dotTrace into your application for tighter control of the profiling activity.
JetBrains.dotTrace.Api.dll provides us with that level of integration.
Without going into too much detail, here are the basics of the dotTrace API:

   //The code which you want to profile

I’ve refrained from mentioning it so far, but dotTrace uses a Control Profiling window that lets us start the
profiling process, grab a snapshot or close an application, as shown in Figure 10. This functionality is also
available to us via the dotTrace API, so we are able to control the profiler from within the application being
profiled. If you’re wondering why you might do that, JetBrains cite an example, Listing 5, whereby a code
fragment demonstrates poor performance some of the time but not all of the time. The variable Threshold is
something we would need to define; precisely how many ticks constitute a bottleneck is for you to determine.

                                         Figure 10 - Control Profiling

                 public void YourSlowFunction()
                 {
                     long ticks = DateTime.Now.Ticks;

                     // Your code, which might be slow and might be not.

                     if (DateTime.Now.Ticks - ticks > Threshold)
                     {
                         // This invocation has taken significant time
                         // and needs to be profiled
                     }
                 }

                                                    Listing 5
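The same threshold idea can be sketched without the profiler at all, using Stopwatch to flag slow invocations. This is my own illustrative helper – ThresholdMs, the delegate, and the console message are assumptions, not anything JetBrains prescribe – with a comment marking where the dotTrace API call would go:

```csharp
using System;
using System.Diagnostics;

class ThresholdTimer
{
    delegate void Work();

    // An invocation slower than this is worth a closer look;
    // the value is an illustrative choice.
    const long ThresholdMs = 250;

    static void TimeAndFlag(string name, Work work)
    {
        Stopwatch sw = Stopwatch.StartNew();
        work();
        sw.Stop();

        if (sw.ElapsedMilliseconds > ThresholdMs)
        {
            // In the dotTrace version, this is where you would ask the
            // profiler (via its API) to capture a snapshot.
            Console.WriteLine("{0} took {1} ms - investigate!",
                              name, sw.ElapsedMilliseconds);
        }
    }

    static void Main()
    {
        TimeAndFlag("QuickWork", delegate { /* fast, so nothing is flagged */ });
    }
}
```

Stopwatch also sidesteps a weakness of the DateTime.Now.Ticks approach in Listing 5: the system clock has a resolution of roughly 10-15ms, whereas the performance counter is far finer-grained.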

What does it cost?
If you like what you’ve seen, dotTrace will cost you $249. Volume discounts are available; visit JetBrains’
web site to learn more. I’m pleased to report that the dotTrace licensing agreement allows a single
developer to install dotTrace on more than one computer provided that, like a book, it is not used
in two places at the same time.

It pays to have tests in place
Whenever you re-write a piece of code, either as a result of profiling or after refactoring, it helps to be able to
re-test the code that you’ve just modified. After all, you want to be sure that your modifications haven’t
broken anything. If you’re not already familiar with test-driven development (TDD), now might be the time
to look into it – there’s a slightly dated article [3] in the References section of this review. Further TDD hints
can be found via my blog’s TDD category [4].
TDD allows you to build up a series of tests, typically written using the same language as your application.
The tests are small code fragments that test a piece of functionality: when your application ships, all the tests
should pass. After you have improved your application’s performance using dotTrace, you should re-run
the tests to ensure that the application still works as expected, i.e. all the tests pass.
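As a minimal sketch of such a safety net – no test framework, made-up figures, and a Commissioning formula borrowed from Listing 2 – the essential check is simply that the optimised calculation agrees with the original one:

```csharp
using System;

class TotalsRegressionTest
{
    static void Main()
    {
        // Illustrative inputs; real tests would use known project data.
        double costA = 100.0, costB = 200.0, commissioningPct = 5.0;

        // Old style: the chained-property formula from Listing 2.
        double oldCommissioning = (commissioningPct / 100) * (costA + costB);

        // New style: compute the shared sum once, then reuse it,
        // as the SummaryTotals constructor does.
        double aAndB = costA + costB;
        double newCommissioning = (commissioningPct / 100) * aAndB;

        // The optimisation must not change the answer.
        if (Math.Abs(oldCommissioning - newCommissioning) > 0.0001)
            throw new Exception("Optimisation changed the result!");

        Console.WriteLine("PASS: Commissioning = " + newCommissioning);  // 15
    }
}
```

A suite of checks in this spirit, one per cost property, run after every refactoring, is what lets you optimise aggressively without fear.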

If you are looking to minimise your use of progress bars or dispense with the hourglass wherever possible,
dotTrace is the tool for you. Within minutes of installing it I was making progress, identifying bottlenecks,
homing in on specific methods and lines of code that do their job, albeit very slowly. Using dotTrace helped
me turn my application’s performance around, allowed me to turn progress bars into short-lived hourglasses,
and in some cases, remove hourglasses too. If your users are crying out for more speed, at $249, dotTrace
could be a very wise investment.

Screenshots and image management courtesy of TechSmith’s SnagIt (http://www.techsmith.com/snagit.asp).

[1] Elementary execution timing using QueryPerformanceCounter - http://www.craigmurphy.com/blog/?p=290
[2] JetBrains’ dotTrace - http://www.jetbrains.com/profiler/
[3] Test-Driven Development Using csUnit in C#Builder - http://www.prototypical.co.uk/pdf/TDD.pdf
[4] My blog’s TDD category - http://www.craigmurphy.com/blog/?cat=6

                   Craig is an author, developer, speaker, blogger, Certified ScrumMaster and Microsoft
                   Most Valuable Professional (Connected Systems). He specialises in all things XML,
                   particularly web services and XSLT. Craig is evangelical about .NET, C#, Test-Driven
                   Development, eXtreme Programming, agile methods and Scrum. He can be reached
                   via e-mail at: ddg@craigmurphy.com, or via his web site: http://www.craigmurphy.com
                   (where you can also find the source code and PowerPoint files for all of Craig’s articles,
                   reviews and presentations).

