OBSOLETE - AthensMarks have been discredited, but nothing has yet replaced them

Calling all TWiki Developers

Performance is a long-term thorn in TWiki's side. Unfortunately much of the evidence in the past has been anecdotal or subjective. If we are to make meaningful improvements to performance we have to agree on a measurement system and all use it. This is basic scientific method.

This topic is not about improving performance, only about measuring it.

Measuring the performance of TWiki is very tricky. This is (I think) because most TWiki page views run so fast (honest!) that traditional performance-measuring techniques don't work very well. So what's needed is a range of techniques that examine different aspects of performance and can be used in different circumstances.

There are four aspects to performance:

  1. Raw core performance
  2. End-user perceived performance of the distribution
  3. Raw core + plugins performance
  4. End-user perceived performance of the distribution + plugins
In each case we should be interested in (1) benchmarks and (2) how to analyse performance for improvement.

Raw Core Performance

Benchmarking

AthensMarks have been introduced to measure raw core performance. AthensMarks measure the performance of the core with all plugins removed (except Empty and Default). The name alludes to the arbitrary choice of Athens release performance as 100%, allowing other releases to be normalised against this performance. Generation of AthensMarks benchmarks is described in AthensMarks.
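To make the normalisation concrete, here is one plausible reading (assuming time per page view as the raw measure, so a faster release scores above 100; see AthensMarks for the definitive procedure):

   AthensMark(release) = 100 * time(Athens) / time(release)

For example, if Athens takes 1.0s for a given page view and a later release takes 0.8s on the same page, that release scores 100 * 1.0 / 0.8 = 125 AthensMarks.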

Analysis

Raw core performance can be analysed in two ways. Coarse analysis can be done by putting -d:DProf at the top of the 'view' script, running a view, and then running dprofpp in the bin directory. This method is coarse because it uses sampling to measure time spent in TWiki functions. Most TWiki functions run very fast, as indeed does a full page view; as a result, the sampling has a high probability of missing calls to key methods. The only way to improve accuracy is to slow down the core, and the obvious way to do that is to ask it to render a larger page. Unfortunately that usually just shifts the performance blackspots around in the code.
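For example, a sketch of the DProf route (the exact shebang options and paths depend on your installation):

   # bin/view - change the first line to load the Devel::DProf profiler
   #!/usr/bin/perl -wT -d:DProf

   # Then run a page view (from the browser or the command line).
   # DProf writes its profile data to tmon.out in the bin directory;
   # analyse it there with:
   #   dprofpp tmon.out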

A more reliable technique is to use TWikiBench, which provides a simple way to instrument code and observe performance in the browser. Because this relies on absolute measurements rather than sampling, it is a much more reliable way of narrowing down on key code sections.
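The absolute-measurement idea can be illustrated with CPAN:Time::HiRes (this only sketches the principle; TWikiBench's own API may differ):

   use Time::HiRes qw(gettimeofday tv_interval);

   my $t0 = [gettimeofday];
   # ... suspect code section goes here ...
   my $elapsed = tv_interval($t0);
   print STDERR "suspect section took ${elapsed}s\n";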

Note that when doing this kind of profiling it is well worth installing and using CPAN:Devel::SmallProf, which gives line-by-line detail.
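SmallProf is enabled the same way as DProf (again a sketch):

   # bin/view - first line, using Devel::SmallProf for line-by-line detail
   #!/usr/bin/perl -wT -d:SmallProf

   # After a page view, per-line timings are written to smallprof.out
   # in the current directory.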

End-user perceived performance of the distribution

When a new user gets TWiki out of the box and runs it up, what performance do/should they see?

Looking for someone to write this bit. Some work has been done in CairoPerformanceExperiments but it requires a scientific approach to building a benchmarking standard.

Benchmarking

The benchmarks have to be able to reliably compare end-user performance and normalise it to a standard.
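As a starting point, wall-clock timing of repeated page fetches could feed such a benchmark (a sketch only; the URL and run count are placeholders):

   use strict;
   use warnings;
   use Time::HiRes qw(gettimeofday tv_interval);
   use LWP::UserAgent;

   my $url  = 'http://localhost/twiki/bin/view/Main/WebHome'; # placeholder
   my $runs = 20;
   my $ua   = LWP::UserAgent->new;

   my $total = 0;
   for (1 .. $runs) {
       my $t0       = [gettimeofday];
       my $response = $ua->get($url);
       die 'Request failed: ' . $response->status_line
           unless $response->is_success;
       $total += tv_interval($t0);
   }
   printf "Mean time over %d views: %.3f s\n", $runs, $total / $runs;

Note that this only measures server and transfer time; what the end user perceives also includes client-side rendering, which matters more once plugins are involved (see below).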

Analysis

Raw core + plugins performance

Benchmarking

Plugins performance can be measured and meaningfully benchmarked using AthensMarks. Of course, it's not quite as simple as raw core performance, as many plugins will simply not run in Athens. So the strategy is to:
  1. Identify a representative test page that exercises the plugin
  2. Establish the baseline AthensMarks of the release on that page
  3. Add the plugin to the release
  4. Re-run the experiment
The plugin's performance cost is given by subtracting the AthensMarks measured with the plugin installed from the baseline AthensMarks. Note: this approach requires standard benchmark pages for each plugin. If these test pages are changed, the historical benchmarks for the plugin are automatically invalidated and have to be re-run against the new page.
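A worked example, with illustrative numbers rather than real measurements:

   baseline (plugins removed)      : 120 AthensMarks
   with the plugin installed       :  95 AthensMarks
   cost of the plugin on this page : 120 - 95 = 25 AthensMarks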

Analysis

The same techniques as are used for the core (DProf and TWikiBench) work equally well with plugins.

End-user perceived performance of the distribution + plugins

The basic problems here are the same as for end-user performance of the core. It's not enough to simply measure the core performance of plugins. Plugins often impose added load on the client, often without their authors realising it; for example, by requiring the loading of Javascript, CSS or other client-side includes, or by generating large numbers of DOM objects.

Discussion
