For the current discussion, see DakarPerformanceIssues
Dakar Performance Issues Archive 2005
I just downloaded and installed Dakar beta 3.
My first impression: Man - this is SLOW.
No matter what page I look at it takes 1.5 - 2 seconds to show the page.
In Cairo it takes a little less than 1 second.
You can really feel the difference. Cairo is not impressive in speed. Dakar is slow.
So I did some benchmarking.
CAIRO with around 10 plugins installed.
ab -n 10 http://localhost/twiki/bin/view/TWiki/WebHome
Concurrency Level: 1
Time taken for tests: 8.62187 seconds
Complete requests: 10
Failed requests: 0
Write errors: 0
Total transferred: 428150 bytes
HTML transferred: 426010 bytes
Requests per second: 1.24 [#/sec] (mean)
Time per request: 806.219 [ms] (mean)
DAKAR straight out of the box. No extra plugins.
ab -n 10 http://localhost/dakar/bin/view/TWiki/WebHome
Concurrency Level: 1
Time taken for tests: 12.622024 seconds
Complete requests: 10
Failed requests: 0
Write errors: 0
Total transferred: 208720 bytes
HTML transferred: 206350 bytes
Requests per second: 0.79 [#/sec] (mean)
Time per request: 1262.202 [ms] (mean)
I thought Dakar was supposed to be a performance release. That is what I have been waiting for so I hope this is a beta problem. Dakar must be faster than Cairo. Otherwise it is not interesting at all.
My understanding is that Dakar is a hell of a lot faster than Cairo, so I've got to assume there's something wrong with your setup (but with everyone at WikiSym, I don't know)
Mind you, you are not comparing like with like. The Dakar WebHome
(actually the TWiki web's left bar contains at least 4 extra searches)
can you bring the data from Cairo into Dakar and compare that?
Oh much more than that! Never mind the extra %INCLUDEs in the left bar. Look at the ICONs.
SD is right, you are not comparing like with like.
- Try stripping the icons from the left bar.
- Since the left bar for the TWiki web also has a number of category searches, try a comparison with Main.WebHome
I have copied my Motion web into Dakar. This means that the left bar and the main topic are exactly the same.
Again, the subjective feeling navigating around my Motion web in Dakar compared to Cairo is still that Dakar is slower.
And the objective test:
ab -n 10 http://www.lavrsen.dk/twiki/bin/view/Main/WebHome
Time taken for tests: 7.319478 seconds
Total transferred: 277170 bytes
HTML transferred: 275030 bytes
Requests per second: 1.37 [#/sec] (mean)
Time per request: 731.948 [ms] (mean)
ab -n 10 http://www.lavrsen.dk/dakar/bin/view/Motion/WebHome
Time taken for tests: 9.520045 seconds
Total transferred: 290310 bytes
HTML transferred: 287940 bytes
Requests per second: 1.05 [#/sec] (mean)
Time per request: 952.005 [ms] (mean)
Time per request: 952.005 [ms] (mean, across all concurrent requests)
Transfer rate: 29.73 [Kbytes/sec] received
The difference is smaller, but the time consumed by Dakar is still significantly longer.
Add to this that my Cairo is not an out-of-the-box Cairo. It has additional plugins (15 in total) installed, including the session plugin. The Dakar installation only has the default 8. I also use the same type of authentication (plain Apache .htpasswd).
- 17 Oct 2005
Some things definitely did happen along the way, changing focus from performance to other areas. E.g. one of the late commits in SVN
(7081->7082) set back performance by about 50% in my case.
I'm glad you bring up the issue; it needs a set of fresh eyes.
- 17 Oct 2005
With the last change by TWiki:Main.SvenDowideit
the performance seems to be back where it was.
Can anyone confirm this?
this is not really true
All I did was bring the performance back to what Kenneth reported against Beta3 - as the code I added is mighty crap for large pages / large numbers of pages -- SD
has pointed out when discussing profiling that the largest cost is compiling the Perl code. Although many algorithms have been improved immensely and the Perl code tightened up, there is still more code to compile. We're doing a lot more.
One of the things that CC
worked on was plugins that were stubs; they didn't load their code unless the expression they handled was actually encountered. We can see this in CommentPlugin
But there are a lot of things in the Dakar core that handle new %XXX% expressions that weren't there in Cairo. These add overhead at compile time. They are like "in-line" plugins that you can't turn off. They have added usability value in many situations, not least of all in the Pattern Skin, things like
and the equivalent in the templates processing;
. Analogues exist in the NatSkin
for these, but are implemented as a plugin - NatSkinPlugin
- and can be turned off.
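To illustrate the "stub" idea CC used in CommentPlugin, here is a minimal, hypothetical sketch (none of this is actual TWiki code; the tag name and handler registry are invented for the example) of registering a tag so that its implementation is only compiled the first time the tag is actually encountered in a topic:

```perl
#!/usr/bin/perl
# Hypothetical sketch: register tag handlers as stubs and only compile
# the implementation when the tag actually appears on a page.
use strict;
use warnings;

my %handlers;     # tag name => code ref (compiled on first use)
my %impl_source;  # tag name => source that would normally live in its own module

# In real TWiki this would be a deferred `require Some::Plugin`; here the
# "module" is just a string we eval on demand, to keep the sketch standalone.
$impl_source{TWISTY} = q{ sub { my ($args) = @_; return "<div class='twisty'>$args</div>" } };

sub handle_tag {
    my ($tag, $args) = @_;
    return "%$tag\{$args}%" unless exists $impl_source{$tag};   # unknown tag: leave as-is
    $handlers{$tag} ||= eval $impl_source{$tag};                # compile only on first hit
    return $handlers{$tag}->($args);
}

# A page that never uses %TWISTY% never pays the compile cost.
print handle_tag('TWISTY', 'mode="div"'), "\n";
```

A request that renders no %TWISTY% tag never evals (compiles) the handler at all, which is the whole point of the stub approach.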
Perhaps there is a way to implement these as "load on demand". Dakar with Plain Skin won't use
so that code isn't loaded. Yes, I know that in the long run this is splitting hairs. The whole point of Dakar is that it has the additional capabilities that can
do these things. Or would it be better if they were implemented in a plugin and could be retrofitted to Cairo?
18th October 2005
Yesterday I was on IRC and did a lot of testing and benchmarking. I even installed Twiki Cairo and Dakar on another machine with another distro version/kernel/perl etc. And the result was the same. Dakar is 30-40% slower than Cairo. It is also slower than Cairo when using a web with no searches in the left bar. It is slower when using the plain skin.
And on top of that, the Cairo I use has all these EXTRA plugins: DefaultPlugin, ActionTrackerPlugin, BlackListPlugin, EditTablerowPlugin, RenderListPlugin, SessionPlugin, TWikiDrawPlugin. When I turn those extras off in Cairo the difference gets even bigger. The 7 extra plugins cost around 100 ms extra time.
Dakar has crossed a barrier where you as a user feel the server is slow at responding. In Cairo the delay from click to show is 0.7 seconds. In Dakar this is now 1.1 seconds for a simple page with no searches. And that can really be felt as a user. If Dakar could be brought back to the same speed as Cairo (with the extra plugins incl. Sessions, which is essential) then I think the speed is acceptable. And I bet it is some nasty little detail that costs the extra 400 ms.
- 18 Oct 2005
Kenneth, can you please run a test on how much impact the skin actually has on performance? I looked at the structure of view.pattern.tmpl, which should be the most used one:
AU, if you compare a
, you will see that there really isn't that much difference, i.e. logic in the skin doesn't seem to matter that much (at least that's how it appears to me).
- 19 Oct 2005
So, we have two choices: squash any remaining bugs in Dakar and release it with the current performance, centering Edinburgh ONLY on performance (i.e. zero, nil, no new features), OR we can hold off the release for another month and focus on performance (whatever that means).
What should we do?
- 19 Oct 2005
At least wait for the development team to return home from TWiki:Codev.WikiSym2005
A few things spring to mind: See http://en.wikipedia.org/wiki/Unix_philosophy
, in particular Rob Pike's and Eric Raymond's observations. -- AJA
As it was said earlier... Cairo was not really fast... any slower and I won't be able to install it... here is the tradeoff:
- If we release now, there will be extra eyes on the system and further bugs will be ironed out. However, chances are that the next release will be far out, so it might be another year before we get a faster release, which might turn people off TWiki.
- If we focus on performance now, the release will be some further period out, but hopefully available sooner than Edinburgh would be.
My opinion is that if the performance is not adequate now, it is better to tune now...
As stated in TWiki:Codev.PerformanceImprovementsInDakar
, the focus of the DakarRelease is to tune the performance of TWiki. Customers expect faster performance than Cairo, and probably will accept if the performance is about equal. Sites with large TWiki deployments (at Motorola, Yahoo, Google, Amazon, Wind River etc) cannot upgrade if Dakar is slower than the existing TWiki simply because they are already experiencing performance issues.
Dakar has many changes under the hood that make the code more modular and easier to extend. Some changes have a performance impact, such as loading additional CPAN
modules, using a parser instead of regexes, increased file I/O due to the increased number of modules (code refactoring), etc.
We cannot release Dakar if the performance is not near Cairo. But we also cannot wait another 3 months. I suggest we spend 2 weeks on pure performance tuning, attacking only the low-hanging fruit, so that we can release at the end of November (or earlier if we reach the goal sooner).
Let's wait for Crawford's feedback.
For benchmarks I suggest to compare Cairo and Dakar on a machine that has no other load. Measure on three dimensions using apache benchmark tests:
- Load dimension:
- 100 requests, sequential
- 100 requests, with 10 concurrent
- Topic content dimension
- Skin dimension:
- Default Pattern skin
- Classic skin
I'm downloading Solaris 10 and will create a VMware virtual machine that can be used for profiling.
Crawford is on vacation in Mexico until around 27 Oct.
Looking forward to the numbers from the profiling, SD, great initiative.
Just a quick note: On a std. TWiki:Codev.ModPerl
install, Dakar still rocks fully. I see some references to large deployments above; I'm pretty sure they are mod_perl'ed at present, and will be with Dakar as well. Some numbers to get an impression of the difference between a preloaded and non-preloaded install at present (SVN
r7128, 2.4 GHz Intel, Apache 2 (mod_perl 2)):
I'm not that depressed about performance, Dakar was made for ModPerl, SpeedyCGI and the like from the ground up - and it appears to be pretty solid running with them (I haven't stumbled upon any "strangeness" stemming from the preloading yet).
I have a feeling we're just reaching the point where the sheer size of our codebase "demands" preloading... the numbers above speak for themselves, i.e. the difference between 96 ms (pre-loaded) and 728 ms (non-pre-loaded) for the FAQ in the classic skin.
"Abandoning" users that don't have the possibility of installing a preloader is going to be a tough line to cross - but I'm uncertain how much longer we'll be able to push it anyway.
- 22 Oct 2005
A comment and a few questions.
I believe you assume wrong when you think current large deployment installations of TWiki run mod_perl. I run one of the Motorola installations. And it is not running mod_perl. I tried both mod_perl and Speedy CGI. And I had to give up on both because too many plugins we needed failed.
Also remember that many paid hosts do not offer mod_perl. You cannot take for granted that people have mod_perl available.
But if Dakar and new updated plugins can run even faster with mod_perl and actually work properly (remember that half the value of TWiki lies in the many plugins) then it is very interesting to get a performance boost for those that can run mod_perl.
I just tried to enable mod_perl on my test server. I do see an improvement. But I do not at all see any numbers that come even close to what you report. I see a 2:1 improvement from 1100 to 500 ms on the TimBernersLee
simple page (Pattern) and 1000 to 490 with classic skin. Your 74 ms sounds almost too good to be true. I also wonder how your numbers for Dakar without mod_perl can be almost half of mine even though my machine is similarly spec'ed. Actually mine is a 2.8 GHz P4.
topic has become quite a mess and needs to be refactored (by someone who knows what is right and what is wrong). A lot of the info is only valid with mod_perl 1 and with the early betas of mod_perl 2. I run Apache 2.0.55 and mod_perl 2.02. You should share the entire setup of your mod_perl installation that brings you these excellent numbers.
- 23 Oct 2005
The plugins are a sweet spot :-). We've had our share of troubles with some of them as well; we chose to abandon some that wouldn't budge. As a lot of plugins need a rewrite now, I believe it's a good opportunity to emphasize the performance advantage in making them preload-compatible.
I can't back up my view that large deployments are running mod_perl; it's just a gut feeling (what I can imagine is that they'll have a fair share of users grumbling about performance already, if they aren't :-)). Regarding availability: I know that mod_perl isn't available to all, but I cross my fingers that the rumours are true, that "userspace" (non-root) solutions exist and can be deployed with good results.
Anyway, I also have problems seeing why my test setup is anything out of the ordinary and should give any special performance - I just checked the Apache benchmark again, and the same numbers are there with r7135 (the 74 ms in particular).
I followed (some of :-)) the instructions in TWiki:Codev.ModPerl
, and well, I feel best about this setup (I believe some of the "Upgrade" modules and more could be taken out, but even mod_perl setups without any explicit mention of modules give the same numbers, so I don't believe it's critical for performance).
Of course I'd prefer any day that TWiki could run at a decent pace without a preloader; I just don't have any clue what kind of effort is needed with Dakar to make that situation realistic. Some of that effort might be better spent helping plugin authors keep an eye out for preloading issues.
Regarding the setup, see the attached...
You might need to apt-get something, e.g.
apt-get install perl-Apache-Htpasswd
or similar to get everything in place.
# ab -n 10 http://<servername>/twiki/bin/view/TWiki/TimBernersLee?skin=classic
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.75 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation, http://www.apache.org/
Benchmarking <servername> (be patient).....done
Server Software: Apache/2.0.50
Server Hostname: <servername>
Server Port: 80
Document Path: /twiki/bin/view/TWiki/TimBernersLee?skin=classic
Document Length: 6014 bytes
Concurrency Level: 1
Time taken for tests: 0.745442 seconds
Complete requests: 10
Failed requests: 0
Write errors: 0
Total transferred: 62520 bytes
HTML transferred: 60140 bytes
Requests per second: 13.41 [#/sec] (mean)
Time per request: 74.544 [ms] (mean)
Time per request: 74.544 [ms] (mean, across all concurrent requests)
Transfer rate: 81.83 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 73 74 1.1 74 76
Waiting: 72 73 1.2 74 75
Total: 73 74 1.1 74 76
Percentage of the requests served within a certain time (ms)
100% 76 (longest request)
- 23 Oct 2005
Agreed with Kenneth above. None of the installations I run use mod_perl, and none of the web hosting services I have seen provide it.
- 24 Oct 2005
You are going to have to do something that you have been avoiding: incorporate compilation and caching of dynamic pages into the TWiki core. You will potentially sacrifice some write performance for incredibly fast read performance. You will need to start caching backlinks for a page. That way when you commit a write, you can clear those pages from the cache (or recache them). It's the only way I can see you getting around these issues. You will need to rework many plugins to use the new caching API. I'd say that TWiki's biggest impediment to wider deployment is its performance. Don't release until you have made TWiki fast.
- 24 Oct 2005
Brian, this is so true! See also my comments
on implementing a dependency tracker.
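A rough sketch of the backlink bookkeeping Brian describes and the dependency tracker mentioned above (the data structures and function names are invented for illustration, not TWiki APIs): record reverse dependencies when a page is cached, and flush the page and its dependants on save.

```perl
#!/usr/bin/perl
# Hypothetical sketch: when a topic is saved, every cached page that
# links to (depends on) it is flushed from the cache as well.
use strict;
use warnings;

my %cache;      # topic => rendered HTML
my %backlinks;  # topic => { topics whose rendered output depends on it }

sub cache_page {
    my ($topic, $html, @linked_topics) = @_;
    $cache{$topic} = $html;
    $backlinks{$_}{$topic} = 1 for @linked_topics;   # record reverse dependencies
}

sub on_save {
    my ($topic) = @_;
    delete $cache{$topic};                                      # the topic itself changed
    delete $cache{$_} for keys %{ $backlinks{$topic} || {} };   # ...and its dependants
}

cache_page('WebHome', '<html>home</html>', 'TimBernersLee');
cache_page('TimBernersLee', '<html>tbl</html>');
on_save('TimBernersLee');   # WebHome rendered a link to it, so it is flushed too
print exists $cache{'WebHome'} ? "cached\n" : "flushed\n";
```

A real implementation would also have to handle %SEARCH% and %INCLUDE% dependencies, which is exactly why the bookkeeping gets expensive.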
I'm sorry Brian, you are focusing in a not-very-useful-in-the-short-term direction, however useful precompilation may be in the long term.
Compared to Beijing & Cairo, Dakar already does an amazing amount of caching. Cairo would have to re-read topics over and over to verify access. I once measured TWELVE topic accesses to read access controls in settings, re-reading the home topic and preferences, to render a very simple page. Dakar has eliminated this. Hitting the disk over and over was a hog! There are quite a number of other such examples.
At one time UNIX was 'in the small' and small, linear algorithms sufficed, but it too grew and more sophisticated ones became necessary. That makes for more code. Long gone are the days of the 24k UNIX kernel of my youth.
With TWiki the code growth has come not only from more sophisticated, faster, capable and reliable features but from new features.
The down side to this is that because perl is an interpreted language it has to be interpreted and loaded on the fly every time it is used. And because this is a CGI system that means every time a topic is accessed.
The solution to that is not to pre-compile topics, which would require still more code and code gymnastics, but to pre-compile the code. If we were using python
instead of perl
that would be no problem.
One alternative is to strip out features and reduce the volume of code - remove capabilities, remove the checks that make Dakar more stable, resilient and error-free. Another, less knee-jerk, reaction is to profile the code at an increasingly low level and find out why
the performance has dropped off in the last month.
It may be something like the internationalization effort. The effort of parsing the templates and core topics to expand the MAKETEXT
has dragged things down - in fact that code and scanning effort is there even if no translation is needed or done. If this is the case we have to ask ourselves "is internationalization something we can give up?" We can. What we have implemented is an 'automatic' translation. It is done 'on the fly' every time the CGI runs. The language is determined at run-time. We could equally well have decided that there would be "compiled" topics: that a script is run at installation time and makes it into a German, Dutch, Italian or Spanish site
. This would remove the translation overhead from the CGI and speed up rendering.
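The "compiled topics" idea could be sketched like this (the lexicon and the expand_maketext helper are simplified stand-ins invented for the example, not the real Locale::Maketext machinery): run the substitution once per language at install time, so nothing needs scanning at view time.

```perl
#!/usr/bin/perl
# Hypothetical sketch of install-time MAKETEXT expansion: run once per
# language when the site is set up, so view-time requests pay nothing.
use strict;
use warnings;

# Stand-in lexicon; a real site would load this from its .po files.
my %lexicon = ( de => { 'Edit' => 'Bearbeiten', 'Attach' => 'Anhaengen' } );

sub expand_maketext {
    my ($template, $lang) = @_;
    # Unknown strings fall back to the English original.
    $template =~ s/%MAKETEXT\{"(.*?)"\}%/$lexicon{$lang}{$1} || $1/ge;
    return $template;
}

# At install time: templates/view.pattern.tmpl -> a pre-expanded German copy.
my $src = '<a href="...">%MAKETEXT{"Edit"}%</a>';
print expand_maketext($src, 'de'), "\n";   # no MAKETEXT left to parse at view time
```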
Why am I mentioning this? I am trying to make it very clear that (a) compilation of code is our biggest speed hog and (b) there are a plethora of design decisions we can revisit once
we have instrumented the code and know where the performance is leaking out.
But until that instrumentation is done and we have meaningful information as to why and where the slowdown occurs, we simply don't know; suggestions, yours and mine, made in the absence of meaningful "under the hood" measurements are just guesses. Blithely asserting this or that solution without being sure it is addressing the root cause is just offering a 'snake oil' cure.
Sheesh! Where's Crawford when you need him?
Sitting on a toilet in a motel in San Diego, that's where (only place in the room I can get wireless access)
Yes, Dakar is slower than Cairo. We knew that; I told you; 34 Athensmarks versus Cairo's 50 Athensmarks. That's why I was asking for help in benchmarking and performance tuning. I don't have my Linux box here to check it, but I suspect 34 Athensmarks is still the number (unless something weird has happened in the last 2 weeks). It has been at 34 Athensmarks for the last 8 months, but there have been no previous complaints. So why now?
Because it is now that you are releasing betas that we - the users - can download and test - KL
First point: the internationalisation did not (according to my benchmarks) make a huge difference. The performance hit, AFAICT, comes from compiling all the CPAN
modules (several new ones with Dakar) and the initialisation of the various code objects. I have tried a variety of ways to instrument the code -
and manual instrumentation, but there is no clear hotspot that I can find. I have been staring at the performance for many months now, and frankly, I am snow blind. So don't look to me for inspiration; if I knew, I would have fixed it.
That is very worrying. Being a representative of one of those corporate success stories I can only say that with the current performance of Dakar - we will not be migrating to Dakar and probably soon be forced to change to another Wiki.
You guys will need to tune the performance before you can even think of releasing Dakar. It cannot be slower than Cairo which is already too slow.
I tried to download one of those daily snap shots from TWiki:Codev.TWikiAlphaRelease
to see if the performance is better. No difference in performance.
By the way - those tar balls take a full day to install because plugins are missing, install scripts are missing etc. I am sure that is why so few TWiki admins have tested Dakar until now and why so few have complained about the slow speed. TWiki has always been a full day exercise to install and it is even worse with the SVN
Part of the reason I didn't follow it until now is because it seemed that Dakar was in an introverted development cycle that was difficult to decipher without following the activities very closely. I was surprised and pleased to see it thrown over the fence to us Enterprise IT folk. I was very disappointed to discover that the performance issues in Cairo had not been resolved. (I hit a wall deploying TWiki. It's too slow for an Enterprise site, but has found quite a bit of traction within a few workgroups at my site -- but the clamor to find something faster is growing.) At this point I am in the same boat Kenneth is in. I can't make TWiki faster without breaking functionality. (Unfortunately I haven't had time to learn to program, which is what attracted me to TWiki in the first place.)
Peter - You say compilation of code is your biggest performance issue. I disagree; even with mod_perl or Speedy, TWiki is still way too slow. If you look at a real commercial CMS like Vignette, one of the most important things is to cache dynamic content... and manage that cache properly. (Which means maintaining lots of indexes, as well as handling cache flushing and priming.)
While I agree that TWiki has bad code compilation performance, this only comes into play when you have a cold cache and need to dynamically generate the HTML. 98% of the time I access a TWiki, I am viewing content. I don't care if editing and certain plugins are a little slow; views should be fast and should be cached as flat HTML
. (With per-user views and an API to make the caching plugin-friendly.)
While we are at it, we need to add the ability to horizontally scale TWiki in a fully redundant fashion. Searches also need to be indexed in Lucene for performance. There is a long way to go.
Kenneth, I just noticed your "No plugins installed" note in the performance measure. Please try to install those same plugins you have in Dakar and run the tests. Especially, install the TWiki:Plugins.TwistyContrib
, as Pattern relies on it and most likely you're getting at least 2 "file not found" responses from the server. I installed the default plugins for Cairo in Dakar, plus TWiki:Plugins.TwistyContrib
and the Dakar performance improved (not to Cairo levels, but an improvement anyway).
To easily install the plugins, if you're using a linux/unix box, I suggest you run the mklink script and then run configure to activate only those plugins you want to test with.
Now, back to the performance thingy. I can validate what Crawford is saying: using whatever-profiler with TWiki shows no big "hotspot". After a barrage of changing "use" for "require/import" where they made sense (like the "use File::Copy" in RcsWrap and RcsFile), I got an improvement of... <5%. The only perceived result is that all the BEGIN blocks disappeared from the top 15, but all the time was now assigned to main::BEGIN *sigh*
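For readers unfamiliar with the "use" vs "require" trick being discussed, here is a self-contained sketch (copy_attachment is an invented stand-in, not real TWiki code): `use File::Copy` compiles the module at startup on every CGI hit, while `require` inside the function defers that cost until a copy is actually needed.

```perl
#!/usr/bin/perl
# Sketch of deferred module loading: requests that never touch
# attachments never pay for File::Copy's compile-time BEGIN work.
use strict;
use warnings;

sub copy_attachment {
    my ($from, $to) = @_;
    require File::Copy;                              # compiled only on the first call
    File::Copy::copy($from, $to) or die "copy failed: $!";
}

# %INC tells us which modules have been compiled so far.
print exists $INC{'File/Copy.pm'} ? "loaded\n" : "not loaded yet\n";
```

As the measurements above show, this only helps when whole requests can skip the module; once every request touches it, the cost just moves.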
Another test I did was to move the INCLUDE processing out of TWiki.pm and call it conditionally only if the tag is going to be processed. Unsurprisingly, there was NO
significant gain, as pattern relies heavily on includes.
Finally, I modified the INCLUDE processing to return "" at the beginning. Just with that there was a ~20% difference in performance for pattern skin view.
I'll keep on working to see what else I can find.
Brian, I agree with all that you say: precompiled topics for view, caching in TWiki, Plucene for search, etc., etc. Caching topics in TWiki is not that easy (check TWiki:Plugins.TWikiCacheAddOn
for a discussion on that and a step in the right direction) and Plucene adds a lot of dependencies, so you hit two barriers here:
- Lack of time and muscle to properly implement topic caching in Dakar.
- Plucene adds a lot of dependencies, so using it should be an option and not the default. For example, in our company we have 1s response time for a complex search over 2000 topics and we're quite happy with that.
This is the second time precompilation of topics and caching has been mentioned here, and i-lost-count-how-many times on IRC. I agree that would be "cool", but let's not forget something:
As the results with and without mod_perl have shown, the biggest performance killer is compilation time. If for some reason TWiki.pm or some other component is called at some point (and it will be, as in some intranet setups view also requires authentication) the end result will be at most 20% faster with a cache.
So, even if someone picks up the task of upgrading TWiki:TWikiCacheAddOn
to work with Dakar and we include it in the main distribution (hint, hint), we should refactor the codebase for faster compilation time.
I did some more experiments last night. Instead of measuring performance for several "key" topics, I measured performance for a completely blank topic with classic, pattern and print skins. I replaced several "use"s with "require"s, in an attempt to remove the BEGIN blocks from appearing at the top level of the call stack in the dprof output. The net result was a worsened response time, but I managed to get a clearer view of the call stack. I'm attaching a zip file with the script and topic I used, plus some measuring results, the call stack and the Dprof output (tmon.out). When reading the results, bear in mind that the benchmark was performed on a PII 400 MHz with 256 MB RAM running TWiki under cygwin with no perl accelerator installed.
There are some interesting points that I would like to highlight:
- I benchmarked the r6558, and it was 35~50% faster than the current one
- What got added between r6558 and r7100+? -- AJA
- If you remove all the .po files, leaving only the default one, there is a 10% increase in response time.
- A very small fraction of the time is actually spent in the TWiki::UI::run method (~15% of the total time) 6758
- Store is doing too much, handling attachments and topics (and is big, +1k lines) at the same time.
- Pattern skin is 15% slower than Print skin
I forgot to add, don't apply the patch supplied in the zip file unless you want to reproduce the experiment. Performance will suffer.
If the issue is code volume & compilation then perhaps we should look at no longer using perl but moving to Python, which can be precompiled, or even C++
I'm afraid I really don't like the idea of pre-compiled and cached HTML
The reason is simple: TWiki is about Dynamic HTML
not static. It may all be fine to have a static "the same for all viewers" image such as the Perl help pages (which I understand were generated with TWiki using the old TWiki:Plugins.GenHTMLAddon
) but the kind of applications I build using many other plugins are meant to be fully dynamic.
That's not to say that it wouldn't be possible to componentize many topics. Parts of the LeftMenuBar
that are 'menus' will be static. I can't say the same about my TopBar
though I appreciate many people can. However we get back to the code-bloat problem. We still need all of TWiki to render the topics and components in the first place, and we are going to need more code to determine if the static components are to be used. We already know that the real problem is not the algorithms but the code.
There's just so much of it.
And we keep wanting to add more:
So long as we are using perl, adding features is a slippery slope.
Sure, (a) caching is on a per-user basis on a personalized site, (b) some dependencies fire temporally, (c) cache entries are hashed depending
on session variables too (excluding the session id obviously), (d) there's a complete array of different types of dependencies that
fire on different events and invalidate different chunks of the cache, (e) even page fragments are worth caching, and
(f) some content is not cacheable at all (as you noted already), but these fragments need not ruin the cacheability of other fragments of the same page.
Applications and plugins are cache-friendly or cache-unfriendly. A page can be divided into independent fragments that clearly isolate
content that is not cacheable.
Anton, these are standard techniques that have been established by the big players in the
field already, believe it or not. And even Vignette and CoreMedia
are performing badly w/o a cache.
- Was TWiki:Codev.AutomaticAttachments turned on for the tests? What difference does it make?
- TWiki:Plugins.CacheAddOn replaces the view script but I imagine it could be more cleanly implemented with the BeforeSaveHandler
- I've invited TWiki:Main.LyallPearce to participate in the conversation as he published the last version of the plugin.
I think (but have not thought about it too deeply) that perhaps both TWiki:Plugins.CacheAddOn
can be implemented more cleanly using the available handlers. It's not only an issue of generating the cached content; it's also the issue of intercepting the view (or include) of a topic to serve the cached content.
Also, a used technique is to mark sections of a page as "cacheable" thus componentizing it.
But, and I need to say it again to reinforce the message, unless we make TWiki ONLY serve cached content, without using the TWiki.pm codebase at all during view, your gain will be very, very minimal.
Hmm... just had a zen moment. Stay tuned for more news.
- Caching what we can into HTML at the time of save. Use this in preference to the TML version unless the TML is newer. The CSS layer would remain untouched, allowing for different users to have different skins.
- Since the template defines skins and the bulk of the HTML, there would have to be severe changes to allow HTML to be generated without reference to a template. -- AJA
- Leave tags in the HTML for anything that cannot be cached.
- Have a meta variable (%META:Core::Cache = no) where you want to force TWiki to always render dynamically.
- Which on my sites will be everything outside of TWiki and Main, so why can't we put it in WebPreferences? -- AJA
- Ok, so that's fine for your sites. Might others need per-page control? -- MC
- Expand out all the MAKETEXT args during the build process.
- So the cache is in the language of the first user. This makes me wonder why, in some corporate or other settings, we have the internationalization at all. -- AJA
- I meant, during build, expand all templates so that we have directories of pre-expanded templates, one for each language. This way we do a look up of which template instance to use for each template/language rather than a substitution. Thus we end up with templates/localised/en/view.pattern.template and templates/localised/fr/view.pattern.template. -- MC
- Of course, we might need to find a way to allow the admin to change the expanded messages starting from the locale directory, so this is not during build; it's something that would hang off configure. -- MC
- Has anyone measured the cost of MAKETEXT? -- MC
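The save-time caching idea in the list above might look roughly like this minimal sketch (render_tml and the file layout are invented stand-ins for the real rendering pipeline and store): serve the cached HTML only while it is newer than the topic's TML source, otherwise re-render and refresh the cache.

```perl
#!/usr/bin/perl
# Rough sketch of "cache HTML, prefer it unless the TML is newer".
use strict;
use warnings;

sub render_tml {
    my ($tml) = @_;
    (my $html = $tml) =~ s{---\+ (.*)}{<h1>$1</h1>};   # token TML-to-HTML rule
    return $html;
}

sub view {
    my ($tml_file, $cache_file) = @_;
    # Cache hit: the cached HTML is at least as new as the TML source.
    if (-e $cache_file && (stat $cache_file)[9] >= (stat $tml_file)[9]) {
        open my $cached, '<', $cache_file or die $!;
        local $/; return scalar <$cached>;             # no rendering at all
    }
    open my $in, '<', $tml_file or die $!;
    local $/; my $html = render_tml(scalar <$in>);
    open my $out, '>', $cache_file or die $!;
    print {$out} $html;                                # refresh cache for the next view
    return $html;
}
```

Note this sketch still compiles the whole script on every hit, which is exactly the objection raised above: the cache only pays off if serving it can bypass the heavy codebase.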
I bet the performance improvement will be minimal. At least with the current codebase.
Anyway, I reached the point where I can't see an easy way to improve the response time. I'll leave it that way. Perhaps the short-term solution actually is to upgrade the Cache* plugins to Dakar, tell people to use mod_perl, tell people not to use Session support, or to install only those po files they really need.
- How would installing only the po files makes any difference? Surely you'd still need to process each MAKETEXT at run time? -- MC
That way we can focus on releasing Dakar now, and improve performance for Edinburgh.
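With the pre-expanded per-language templates proposed above, the run-time work would reduce to a path lookup with an English fallback. A sketch under those assumptions (the directory root and the fallback policy are my own inventions):

```perl
use strict;
use warnings;
use File::Spec;

# Resolve a pre-expanded template such as
# templates/localised/fr/view.pattern.template, falling back to English
# when no localised copy was generated for the requested language.
sub localised_template {
    my ( $root, $lang, $name ) = @_;
    for my $try ( $lang, 'en' ) {
        my $path =
          File::Spec->catfile( $root, 'templates', 'localised', $try, $name );
        return $path if -e $path;
    }
    return undef;    # no pre-expanded copy; fall back to run-time MAKETEXT
}
```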
Reviewing plugins and PeterThoeny's comments at TWiki:Codev.UseIsoDates, I realise that there are many situations where a CPAN module is used for one function that could be coded locally or put in TWiki::Func to share, rather than dragging in the whole CPAN module and all its unused features. I noted, following up on Peter's comment about SpreadSheetPlugin, that there are actually many useful functions in it.
No doubt you can show caching working in your environment, Michael. I can see many reasons why it would work in few of mine, and hence why many of the root issues behind the slowdown, and performance improvements that will benefit us all, need to be addressed rather than patched over by using a cache. Caches are good things if we use them as accelerators and not as band-aids.
Michael, there are a number of implicit assumptions you are making that are quite distinctly NOT valid in the environment I work in, and hence, I suspect, in many other environments too.
I look over my applications & usage and I see that, apart from the TWiki and Main webs:
- The use of the "Guest" id is rare; to use the applications, users have to log in with their own IDs, so the per-user caches are small. Caching is most effective when the pool and the sharing are large.
- A single user only revisits the "index" pages. Users either read progressively or use dynamically rendered application topics. The former just pollute the cache since they are never revisited; the latter cannot usefully be cached.
- The applications use the dynamic functions of plugins and of %SEARCH very heavily.
- The designs rely very heavily on parameterized %INCLUDEs and are likely to make use of the nested/"recursive" capability in Dakar.
- Groups of users are doing the same thing but with different parameters, and hence different rendered results.
- The Wiki is being used as a collaboration engine and database, not as a blog or read-mostly web/whiteboard such as Ward Cunningham's original design. TWiki is an application builder tool. Some of us use it aggressively for that.
Back in my UNIX kernel hacking days I did a lot of work on disk & file-system drivers, disk optimization and the like. I did an amazing amount of instrumentation, gathering data over many weeks of operation. I put a lot of critical thinking into why caching worked - sometimes in some places and not in others. I saw the improvement that simple things like better directory lookup caching made. I learnt that caching's real power doesn't come from walking through sequentially but from staying in one spot, doing the same thing over and over: "The Principle of Locality". The easiest way to do that is to have many users doing the same thing. Which means that per-user caching of personalized views is of little use in the kinds of applications I now work with and use TWiki for.
I would ask myself: what is useful to cache? Actually identifying what is and what is not useful to cache is sometimes hard. Sometimes the cost of the cache exceeds its utility.
- We are already caching within a session. In the past some topics, such as the Web and TWiki Preferences and the user's home topic, were read repeatedly to extract settings. This repetition is eliminated by caching. This showed a dramatic decrease in disk access at the cost of:
- more code to compile, which takes more memory, which is more load on the virtual memory
- more data to hold the cache, which takes more memory ....
- Increased memory load can lead to swapping. None of Kenneth's figures say anything about that.
- Caching across sessions. What should we cache?
Under the present design, that is if we don't cache rendered topics for the reasons I mention above, then we should be caching the most heavily used non-rendered parts - the view templates, the CSS files that the Apache server has to send out, anything to decrease disk activity, since that is where the big delays are.
But we are already doing that! The file system cache on the host is - or should be - doing that for us. Unless you are making the disk subsystem do other work that invalidates that caching. What might that be? Well, loading perl modules over and over, for one! And if you are running on a shared server, who knows what else is going on. And if you don't have enough memory ....
- Caching invariants. Things you don't want to compute over and over.
Note I said compute not render. Compiling the perl to its intermediate code is an overhead. The intermediate code ought to be an invariant.
- Componentizing a rendered topic across many users is not going to be that easy. The topic doesn't tell you what the parts of the overall rendered result will be, and the skin only hints at that.
- The code to recognise the invariant components that will be useful to cache, useful because they apply across many users and many topics but useless otherwise, is going to add to the code bloat.
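The within-session caching mentioned at the top of this list amounts to memoising the preference-topic reads. A minimal sketch, where `read_prefs` is a hypothetical stand-in for the real settings parser and the read counter exists only to make the saving visible:

```perl
use strict;
use warnings;

{
    my %prefs_cache;     # topic name => parsed settings
    my $disk_reads = 0;  # how many times we actually touched "disk"

    # Return the parsed preferences for a topic, hitting the disk at
    # most once per session.
    sub read_prefs {
        my ($topic) = @_;
        return $prefs_cache{$topic} ||= do {
            $disk_reads++;
            { topic => $topic };   # the real parser would read the file here
        };
    }

    sub disk_reads { return $disk_reads }
}
```

This is exactly the trade the bullet describes: repeated reads disappear, at the cost of the extra memory that %prefs_cache occupies for the life of the process.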
If you have ample memory and either a load profile or a disk-activity profile that means disk access is poor, then memory caching wins out. In most servers, memory is comparatively cheap. More memory not only reduces swapping/paging at the process level but allows caching of the disk into memory. Keeping directories and i-nodes in memory is a major boost to file access. This applies even if you are caching rendered topics to disk. My biggest regret is that there aren't more memory slots in my laptop.
Michael, you mention other 'big players'. The implicit but unstated assumption you make about them is valid for TWiki.org perhaps, but not for many of my clients or installations. It is that most access is by the anonymous/guest user. In that case accesses can be cached because different users are using the same identity as far as the system is concerned. As I keep trying to point out, on a system with identification & finely grained per-topic authorization that does not hold, and the aggregation of caching the way you are describing it is not beneficial. We need a solution that addresses the baseline performance of TWiki, not one that papers over its shortcomings.
The other thing about the big players, be they commercial "closed source" systems (which are probably written in C++) or open systems such as MediaWiki: they don't use perl.
- Python modules can be compiled to "object code".
- PHP 4.0 compiles & executes code, and will also allow for the caching of compiled code.
Oh, one last thing. Caching at the browser, not needing to repeatedly download the style sheets & icons, reduces the load on the server, the server's disk activity, and the server's disk cache. This is a 'real world' thing that doesn't show up with many of the performance measurement tools.
I am an IT Manager at a Fortune 100 company. I was hoping Dakar would solve some of these performance issues, as this is the biggest holdup for more general/widespread deployment.
I have used the TWiki Cache add-on with pre-Cairo releases. It is very fast. (I made it even faster by putting the cache in /tmp (swapfs).) I dumped the cache add-on when I upgraded to Cairo. The things that break that are very important to us are:
- Updating "included" topics doesn't flush "including" topics out of the cache.
- EditTable plugin breaks (others break as well, but EditTable is very useful to us).
- Custom per user Quicklinks in PatternSkin breaks
- Sessions don't work
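The first breakage above, stale including topics, is the classic dependency-tracking problem: the cache needs a reverse map from each included topic to its includers, so that a save can flush them transitively. A sketch with invented data structures, not the add-on's actual internals:

```perl
use strict;
use warnings;

my %included_by;    # included topic => { including topic => 1 }
my %html_cache;     # topic => cached rendered HTML

# Record, at render time, that $including pulled in $included.
sub note_include {
    my ( $including, $included ) = @_;
    $included_by{$included}{$including} = 1;
}

# On save, flush the topic and everything that (transitively) includes
# it. The %$seen guard stops cyclic INCLUDEs from recursing forever.
sub invalidate {
    my ( $topic, $seen ) = @_;
    $seen ||= {};
    return if $seen->{$topic}++;
    delete $html_cache{$topic};
    invalidate( $_, $seen ) for keys %{ $included_by{$topic} || {} };
}
```

The cost is that every render has to record its dependencies, which is part of the "cache-friendly plugins are harder to write" point made further down.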
If you are serious about the Enterprise customer you should do two things.
- Standardise on mod_perl. Require it. Make it a requirement for Dakar plugins to work with it. Sessions will always be too slow if you don't do this. You have to decide who your target customer is. Is it Enterprise customers, who have control of their platform and can install any prerequisite required? Or is it individuals who want to set up a wiki on their ISP-hosted accounts?
- Avoid compilation entirely when you can by using intelligent HTML caching. Since you will now be using precompiled code, you could implement caching in Perl.
Once you have done these two things, all plugins will need to be retooled to work with these new prerequisites.
Anton, there is no need to track hotspots in a TWiki cache, as it is as big as your hard disk. Yes, other kinds of caches might suffer from repeated thrashing when they try to capture very different things at the same time with limited memory, like i-node or CPU caches. But that does not apply here. The cache is only invalidated during save and move/rename, not during different views (temporal dependencies aside).
About server caching vs browser caching: sure, caching things as close to the location where you need them is a good thing. But as I already said to you on TWiki:Plugins/TWikiCacheAddOnDev and in numerous private emails about a year ago: get away from repeated content computing, be as fast as a content management system can be. The bottom line for server-side caching strategies is to be compared to static HTML being delivered by the server, given it is unknown to any interim cache.
Some plugins are very bad for content caching by design, like one that generates random content out of thin air. Some might generate a page and actively indicate that this page is special and should not be cached, like for example the ban message of the BlackListPlugin. So there's a real need to interface the cache through a dedicated API. However, writing cache-friendly plugins will become harder, as the plugin author must understand the dependencies of the content being generated. That said, I'm all with Brian.
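The dedicated API might be as small as a single call a plugin makes to mark the current page uncacheable, the way the BlackListPlugin's ban message would need to. A sketch; the function names and structures here are invented, not an existing TWiki interface:

```perl
use strict;
use warnings;

my %cache;                # url => rendered HTML
my $page_cacheable = 1;   # reset at the start of every request

# A plugin calls this when its output must always be recomputed,
# e.g. random content or a BlackListPlugin ban message.
sub dontCachePage { $page_cacheable = 0 }

# Called once at the end of the request, after rendering: store the
# page unless some plugin opted out along the way.
sub finish_request {
    my ( $url, $html ) = @_;
    $cache{$url} = $html if $page_cacheable;
    $page_cacheable = 1;    # ready for the next request
    return $html;
}
```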
A few comments from another Enterprise user.
- It is dangerous to assume mod_perl. I know many of the TWikis in our company are installed on some Unix machine that was available for it, but where they have no root access and the sysadmin does not want to install mod_perl. He is already against TWiki in the first place. And hosting companies often do not offer mod_perl. I disagree with Brian that Enterprise users have control. Brian, you are an IT manager. You are one of the good ones. Most IT managers are idiots who say NO to things like TWiki, and it is a constant battle to get anything open source introduced. TWiki should work with mod_perl. Plugins should work with mod_perl. But TWiki must have acceptable performance without it.
- In most large companies anything not proprietary, anything not Microsoft, is close to impossible to get accepted. TWiki is not introduced by senior management. It gets in from the grass roots. If the performance of TWiki is poor or we need special servers with special software installed we grass rooters have no chance.
- In big companies intellectual property must be protected. To avoid negative exposure I had to disable ALL guest read access. People have to authenticate to read anything. People cannot cope with more passwords, so naturally I use the company LDAP server for authentication. In the beginning every damned page had to be authenticated. That took 2-7 seconds per page because the corporate LDAP server(s) are that slow. People complained and I can understand why. It was horrible. Then I installed SessionPlugin and got the response time down to 0.7 seconds on pages with no searches, which people can live with. With SessionPlugin the browser only needs to authenticate once. Sessions are a must-have.
- Note that all the measurements I did were with SessionPlugin installed in Cairo. So it is not the Session feature in Dakar that makes the big difference; it is part of both my measurements. I originally asked the developer team to ensure that the Session Plugin was kept up to date with Dakar and included with Dakar as a standard plugin. It is now in the core code. That is fine! Keep it there!!
- Browser caching. It is turned OFF. With all the dynamic content on TWiki pages and many other applications we use on our Intranet, browser caching is useless. You keep on refreshing the browser window or you sit and repeat the same thing 10 times not understanding why nothing works. When people call me for help the first thing I tell them is to turn browser caching off and that always resolves some of the problems.
- 10% of my users are complete nerds who love to stretch TWiki to the maximum. The rest hardly dare touch EDIT. You cannot expect normal users to put in strange codes to enable caching of certain content.
- Localization: maybe a nice feature. I won't use it. In an international company everything is in English. If people use local languages, chances are they use only that local language. So it is a good idea, as someone suggested, that you can choose only what you really need and that you can turn it off completely. If it costs performance I would like to turn it off.
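Kenneth's LDAP numbers above show exactly why sessions matter: the expensive directory bind should happen once per login, not once per page. A sketch of that fast path, where `ldap_bind` is a stand-in for the 2-7 second corporate directory call, not a real Net::LDAP invocation:

```perl
use strict;
use warnings;

my %sessions;        # session id => authenticated user
my $ldap_binds = 0;  # each of these costs seconds against a slow directory

# Stand-in for the real (slow) LDAP bind.
sub ldap_bind {
    my ( $user, $pass ) = @_;
    $ldap_binds++;
    return $pass eq 'secret';
}

sub authenticate {
    my ( $sid, $user, $pass ) = @_;
    return $sessions{$sid} if exists $sessions{$sid};   # fast path: no bind
    return undef unless ldap_bind( $user, $pass );      # slow path, once
    return $sessions{$sid} = $user;
}
```

With per-request authentication every view pays the bind; with the session lookup only the first view does, which matches the drop from 2-7 seconds to 0.7 seconds described above.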
When I look at TWiki and its performance from a user's view (not a programmer's), and in the context of the cache discussion, I personally do not believe you can implement any useful caching. TWiki's main strength is dynamic content and the many applications any user can and does make with it. TWiki is much more than a stupid Wiki.
When I look at the performance issue I observe two things.
- On an older TWiki (also Cairo) the really slow speed is on those pages that contain searches, where the search is done over 1000s of pages inside a web. And as content grows the searches get slower and slower. In one year we have grown to so many 1000s of pages that I find it hard to believe. Our TWiki is more popular than I would ever have thought.
- TWiki searching is actually very fast considering how it does it, with no indexes to search in. But not fast enough. Maybe maintaining an index which is updated when you save a topic is the only way to get around this.
- Subwebs will help. There is a limit to how many webs you can keep on adding to organise your data. This is part of the search problem: when you do a search you search too many non-relevant topics. With subwebs things go faster. We started having a web for each department, and each department had some ISO9000 topics. Some search pages looked for these pages in all webs and made a nice overview. After one year this overview took 10 seconds to create. We had to move all the ISO docs to their own web, and now the search takes 1-2 seconds again, which is OK.
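Maintaining an index at save time, as suggested above, is a small amount of code for an in-memory sketch. A real one would persist the postings to disk and handle phrases and regexes; this illustration only covers single-word lookup:

```perl
use strict;
use warnings;

my %index;    # lowercased word => { topic => 1 }

# Called from the save handler: drop the topic's old postings, then
# add one posting per word in the new text.
sub index_topic {
    my ( $topic, $text ) = @_;
    delete $index{$_}{$topic} for keys %index;
    $index{ lc $1 }{$topic} = 1 while $text =~ /(\w+)/g;
}

# A single-word %SEARCH becomes a hash lookup instead of grepping
# thousands of topic files.
sub search_word {
    my ($word) = @_;
    return sort keys %{ $index{ lc $word } || {} };
}
```

The win is exactly the one Kenneth describes: search cost stops growing with the number of topics in the web, at the price of a little extra work on every save.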
What I expected from Dakar was a faster Cairo with a WYSIWYG plugin that works, and mod_perl compatibility. All the rest is nice, but if it costs performance I don't need it.
Right on, Kenneth. It's that industrial imperative that counts.
The bottom line is simple:
TWiki needs to address performance. An HTML or rendered-topic cache is a band-aid that is of no use in situations that make use of dynamic content, or where policy demands that users log in to use the application. TWiki's great power over "dumb whiteboard wikis" is
its applications and the dynamic rendering. Mandating the use of
is not acceptable.
The 'eye candy' of all the icons is a browser-fetch issue, not a rendering issue. Kenneth's tests with the old Cairo pages show that stripping the eye candy doesn't make a big difference.
What disturbs me is the note from Rafael about the
files and about disabling INCLUDE. I can live with stripping out all the internationalization changes that give TWiki the ability to handle languages dynamically. After all, the bulk of the topic body is still in English. Changing policy to having complete images of the topics and templates in other languages would be a lot of work but, as Kenneth points out, few places are going to be switching back and forth all the time.
The comments about INCLUDE make me wonder ....
Other comments about
also make me wonder if we haven't generalized too early. The OO-ness of
offers the ability to plug in other back ends, but we seem to be paying a heavy price for that. Rafael, perhaps you might look at what happens if that was simplified and the indirection removed.
I've realised that internationalisation code does have some impact on performance. I intend to work on it this weekend, mainly by adding a setting to enable/disable internationalisation, and making it disabled by default.
Another thing I'm wondering about is that internationalisation of templates is skin-specific, i.e., one can write a skin that doesn't use I18N at all, thus achieving better performance.
- True, but the code to do all the processing is there whether the MAKETEXT is there or not. And it's in the topics as well as the skins. -- AJA
Before you all rush off and put another year of effort into recoding TWiki in C#, here are some things to consider about current performance:
- Dakar shifts approximately 1/2 as many bytes as Cairo to render the same page (see first benchmark above).
- While the code volume is larger than Cairo's, the code used in responding to a view request is actually smaller, due to the greater modularisation.
- Dakar loads fewer topics to render the same page.
- Autoload doesn't help (only a tiny fraction faster).
- Putting all packages in a single file doesn't help.
- I tried all the available perl precompilers. None works. The failures are beyond my powers to debug.
- Precompilation requires a potentially complex compile step on every plugin developer's platform, unless you propose to ship a complete Perl interpreter with every executable.
Dakar compiles fewer lines of code, loads fewer topics, and shifts less HTML for each page request. By all that I know about computer science, it ought to be faster. So why is it slower than Cairo? I don't know; this really needs a perl expert to answer. But I believe in my heart that there is some relatively small, obvious thing that I have missed that is causing the slowdown, and I need help to find it. Some observations:
I originally proceeded this way:
- Benchmarked Cairo
- Applied mod_perlize to the cairo code
- Manually refactored the mod_perlized code
The slowdown started when I applied mod_perlize to the code. This suggests some questions:
- Is there some inherent "slowness" in using "bless"?
- Is method calling using indirection off an object responsible?
- Is the perl interpreter heavily optimised for executing bad code, at the cost of well structured code?
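The first two questions are directly measurable with the core Benchmark module. A sketch comparing a plain subroutine call with method dispatch on a blessed object; the `Thing` class is invented for the comparison, and the actual numbers will vary by perl build:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

package Thing;
sub new  { return bless {}, shift }
sub work { return 42 }

package main;
sub work { return 42 }

my $obj = Thing->new;

# If method dispatch were the culprit, 'method' would come out markedly
# slower than 'direct' here.
timethese( 100_000, {
    direct => sub { work() },
    method => sub { $obj->work() },
});
```

If the gap turns out to be modest, that would point the finger back at compile time rather than run-time dispatch, which matches the profiling observation below.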
Detailed profiling doesn't help. The runtime is a tiny fraction of the total page return time. The vast bulk of the time spent in each request is spent compiling, AFAICT. While it is helpful to consider caching in the context of an overall TWiki performance hike, it doesn't help with the specific "why is Dakar slower than Cairo" question. This really must be answered
before investing in any
's last line is paramount. Caching is a band-aid over the real problem, which will just fester if we don't look for the root cause.
Right now I have more questions than answers, but agree with CC's point about the seeming illogic.
- The "Object" stuff of perl looks like a bolt-on extra for old Smalltalk users. The "bless" may be just one indicator.
- All object calling is indirect, isn't it? But you are referring to things like the storage handler. I wonder if some objects that are not likely to change between temporally close session invocations, such as the Store object, might be better saved using CPAN:Cache::Cache than recreated every session.
- How expensive is handling an object as a parameter to a call? Is it copied? If it is, then the perl run-time is taking a big hit.
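Cache::Cache serialises with Storable underneath, and the core of the idea, reusing an expensive object across CGI invocations instead of rebuilding it, fits in a few lines with core modules only. A sketch; the builder, the file name, and the notion that the Store object is safely serialisable are all assumptions (code references inside the object would not survive):

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

# Return the object saved by a previous invocation if one exists,
# otherwise build it once and save it for the next CGI process.
sub cached_object {
    my ( $file, $builder ) = @_;
    return retrieve($file) if -e $file;
    my $obj = $builder->();
    store( $obj, $file );
    return $obj;
}
```

Whether this wins anything depends on how much of the per-request startup cost really goes into constructing such objects rather than into compilation.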
From Raf's timings:
Sorted by time spent in Child, per call
                        Number   Total    In   Child   Tot/Num   In/Num   Child/Num
TWiki::UI::run               1   1.281     0   1.282    1.2810   0.0000      1.2820
TWiki::new                   1   0.689     0   0.690    0.6890   0.0000      0.6900
TWiki::UI::View::view        1   0.290     0   0.292    0.2900   0.0000      0.2920
Error::subs::try             1   0.290     0   0.290    0.2900   0.0000      0.2900
TWiki::UI::__ANON__          1   0.290     0   0.290    0.2900   0.0000      0.2900
TWiki::Plugins::enable       1   0.238     0   0.238    0.2380   0.0000      0.2380
TWiki::finish                1   0.194     0   0.194    0.1940   0.0000      0.1940