
Question

I was wondering if anyone had more tips for tweaking performance, particularly on FreeBSD? I put TWiki on FreeBSD 4.8 and configured it for mod_perl to get the extra boost. The thing is, the difference doesn't seem to be all that great. Switching from one web to another takes about 1 to 3 seconds; going from topic to topic within a web is usually under a second. Before moving to mod_perl, I was seeing roughly the same times on the FreeBSD installation.

I was just wondering if there might be something worth tweaking. The FreeBSD installation is still running the stock kernel, and I plan on recompiling it, but before doing so I'd like to know if there is anything in there worth tweaking that might get better performance out of TWiki. Maybe some of the hard-drive parameters that can be changed?

Nothing else is running on this box, BTW, just TWiki. It's a Celeron, <500MHz, IIRC.

At least it is faster than TWiki on a W2K box with beefier specs, but the W2K install was using plain CGI and was also my workstation.

  • TWiki version:
  • Perl version:
  • Web server & version: Apache 1.3
  • Server OS: FreeBSD 4.8
  • Web browser & version:
  • Client OS:

-- SeanLeBlanc - 26 Apr 2003

Answer

Assuming you are using the Feb 2003 TWiki release, run testenv to check whether mod_perl is actually being used for your TWiki scripts; there is a specific test for this. You should normally get a very noticeable speedup with no tweaking, often as much as 5 to 10 times faster; see ModPerlUnix for discussion.

If you are on an earlier release, download CVSget:bin/testenv. Either way, I'd check your mod_perl setup before assuming it's a FreeBSD issue, which seems unlikely.

-- RichardDonkin - 27 Apr 2003

Oops. I forgot to mention in my original write-up that I did use testenv to check. According to other material I've read here, if testenv reports that it is using mod_perl, then it will run mod_perl for all TWiki scripts unless I configured something to prevent it. Like I said, I think I see a difference, but most times are 1 second or less, so perhaps the difference is hard to perceive. It's the pages that take 3-5 seconds that I want to speed up.

I'm certainly not blaming FreeBSD. I specifically picked FreeBSD for its speed, and it is certainly faster than the current W2K installation I'm planning on migrating away from. I was just wondering if there were any FBSD specific tweaks I could try that others might have had success with, to make sure page refreshes were nearly always subsecond, if possible.

Also: I saw someone mention a caching module or plugin, but couldn't find anything related to it that pertains to speeding things up. Does TWiki already generate cached HTML representations of pages?

-- SeanLeBlanc - 27 Apr 2003

Can you link to your testenv output, or attach it here? Unless testenv confirms that mod_perl 'is used for this script', mod_perl is not working. Google:ab+twiki has some examples of using the Apache benchmark tool (ab), which is the best way of demonstrating a performance improvement.

If most times are less than one second, I'd guess that mod_perl is working. Running top while the system is busy may help show whether other processes are causing a slowdown, though again the ab tool is the best way to show this is happening consistently, even if intermittently. It could be a memory/swapping issue, or perhaps Apache is reaping the mod_perl processes too often; I would have a look at the mod_perl performance tuning pages.
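As a concrete sketch of the ab suggestion above (the server name and the standard bin/view paths are assumptions; substitute your own), this just prints the benchmark commands so you can review them before running anything:

```shell
#!/bin/sh
# Sketch: benchmark a couple of TWiki pages with Apache Bench (ab).
# SERVER and the bin/view URLs are assumptions -- adjust for your install.
SERVER="localhost"
for topic in TWiki/WebHome TWiki/WebSearch; do
    url="http://$SERVER/twiki/bin/view/$topic"
    cmd="ab -n 20 -c 1 $url"   # 20 requests, one at a time
    echo "$cmd"                # printed for review; invoke ab directly to run
done
```

ab reports mean, min, max and percentile times per URL, so comparing WebHome against a plainer topic should show whether it really is the slow page.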

-- RichardDonkin - 27 Apr 2003

Sean, I am surprised you get slower performance when switching between webs than when switching topics within a web. That doesn't sound right, and suggests the problem is not something mod_perl would fix. Is this 100% consistent behaviour? Just out of interest, and to narrow down the possibilities, how much memory is on the server, and what TWiki skin are you using?

In terms of general speed, what browser do you use? I found Mozilla particularly slow, and just using a different browser (eg Opera) gave a significant page load speedup, particularly when using the pages generated by the Koala skin.

Regarding HD parms, some older RedHat distros, for example, did not optimize hard drive performance for things like DMA, and as a result ran at only 15% or so of the maximum possible speed when I installed them. I don't know how FreeBSD handles hdparm-style settings, but it may be an area that can be tweaked. I don't expect recompiling the kernel to produce a major benefit.
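For the Linux/RedHat case mentioned above, the usual hdparm checks look roughly like this. A sketch only: /dev/hda is an assumed first IDE disk, and these commands do not apply to FreeBSD, which tunes ATA modes through its own tools. The script prints the commands for review rather than running them:

```shell
#!/bin/sh
# Linux-only illustration of the DMA checks discussed above.
# DISK is an assumption (first IDE disk on a 2003-era box).
DISK=/dev/hda
SHOW="hdparm -d $DISK"     # report whether DMA is currently enabled
ENABLE="hdparm -d1 $DISK"  # switch DMA on
BENCH="hdparm -tT $DISK"   # rough cached vs. raw read benchmark
# Printed for review; run them as root to actually apply/measure.
echo "$SHOW"; echo "$ENABLE"; echo "$BENCH"
```

If the -tT numbers jump after enabling DMA, the disk was the bottleneck.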

Regarding caching pages, check out CacheAddOn. We have been using this for a few months on our current slow server, and it has been a lifesaver. Page load times dropped from 2s+ to 0.1 seconds (0.5s with Mozilla). There are occasional corruption problems, though, when the user interrupts while the cache plugin is writing the cached page to disk. Now we are about to migrate to a new server (dual 2.4GHz P4 Linux), and this machine generates and serves TWiki pages so fast that the caching code no longer provides a noticeable benefit.

-- MartinWatt - 28 Apr 2003

When you switch from one web to another, presumably you are going to the WebHome page? This is a somewhat slow page these days, because it includes a FormattedSearch (through the INCLUDE of the TWiki.SiteMap page). Now logged as SiteMapIsSlow.

Try going to a different page in another web, e.g. TWiki.WebSearch, and compare the performance. If it is better than for TWiki.WebHome, the CacheAddOn would greatly improve speed for WebHome and similar pages.

There is now a Perl-based version of the cache addon - since the shell-based one would normally be installed in the CGI directory, it would probably be harder to get this working under mod_perl, though some use of SelectiveModPerl might help (excluding the shell-based 'view from cache' view script from mod_perl, and including the render script).

-- RichardDonkin - 28 Apr 2003

Yep, it's the WebHome page that's slow. I'm having a heck of a time getting CacheAddOn to work. The page keeps sending back nothing. I've created the .../cache/myweb dir, and I've changed the paths in the cache script to point to the proper Perl binary and location of the render script, as well as the proper directories for cache and data, but nothing gets returned when I try to replace "view" with "cache" in the URL. This is for the Perl version of the script. Any idea of things I can look at? Looking at my httpd-error.log, I see only this:

get s:/usr/local/www/twiki/data/Javatips/WebHome.txt c:1052788701 s:1052768185 m:336

In the .../cache/myweb directory, I only have one file created after an attempted view, and it's WebHome__. It's empty.

-- SeanLeBlanc - 13 May 2003

I haven't tried the Perl version of the cache addon, though I did get the shell version to work with some tweaking (on Cygwin). Best to comment on the CacheAddOn page to get help with that.

-- RichardDonkin - 14 May 2003

There are a few things not covered here.

First, no one has asked questions about configuration. Never mind the CPU, memory and disk side of things, and all the nice tricks in Apache that can slow it down. What are the end points? Is Sean running his web browser on the same machine? If so, is he addressing it by the machine's network hostname, or as localhost (127.0.0.1)? The latter uses the loopback interface, which avoids going all the way down and all the way back up the IP stack. While the former "looks" more like the real network situation, the latter is a "better" measure of timing because it involves fewer context switches.

Quite apart from loopback, running X-Windows plus a window manager can be a resource hog. I have a little machine, a 750MHz P-III with 64MB, that will run Windows nicely and will run Linux plus a server nicely, but despite the 32MB TNT2, when I run X-Windows the response to even a cursor click on a button takes a second or two. Thrash City!

Secondly, what else is the machine doing? If it's not doing much else (and I mean real work, not latent processes), then latency will dominate. Just the latency inherent in disk handling, things like directory lookups and waiting for the machine to respond, will make a lot of difference. The clue here is that switching webs is slow. Apache lost some benchmark tests against MS-IIS a while back because of directory scanning delays.

If Sean is the only user, caching may not help. In any case, kernel caching of inodes, directory blocks and directory lookup mappings will have more effect than application-level caching. Caching works best when a pool of users is sharing a resource. For a single user walking through a thread of topics on a wiki, it's of questionable use, since the hit rate is low.

(Yes, I know that the skin template will be hit again, and I know TWiki/TWikiPreferences will be hit again and again. The system is going to try to cache those disk buffers anyway. Let it; it can do a better job.)

What I'd be interested in seeing is how Sean's BSD system degrades under load. There's a simple load test you can run:

for run in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
do
     # fetch WebHome, then fetch every link it contains
     wget -O - http://yourserver.yourdomain/twiki/bin/view/TWiki/WebHome |
               wget -O - -i - --force-html > /dev/null
done

One B-I-T-C-H of a load simulator. It asks your TWiki server to follow every link from WebHome. It forced the load average on my 1.3GHz T'Bird Linux box up to over 15 within 3 minutes. Adjust the count in the for loop for the YMMV thing.

Now, do a rough plot of response time against load, i.e. how many instances of wget are being run.
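To get numbers for that plot, a small awk summary over a file of per-request times works. The file name and the sample data below are made up purely for illustration; in practice you'd capture real times, e.g. from 'time wget' inside the loop:

```shell
#!/bin/sh
# Summarize per-request response times (seconds, one per line).
# Sample data is fabricated for illustration only.
printf '0.4\n0.6\n1.2\n0.8\n' > /tmp/times.txt
SUMMARY=$(awk '{ sum += $1; if ($1 > max) max = $1; n++ }
               END { printf "n=%d mean=%.2fs max=%.2fs", n, sum/n, max }' /tmp/times.txt)
echo "$SUMMARY"
```

Run once per load level and plot the mean (and max) against the number of concurrent wget instances.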

-- AntonAylward - 15 May 2003

The slowness of WebHome is the biggest factor (2x slowdown due to SiteMapIsSlow) - once that's fixed by the caching plugin, it's reasonable to start looking elsewhere, but the fact that it's a highly predictable and repeatable slowdown indicates that it's probably the application and the CPU/memory that it uses. Please see SiteMapIsSlow for details - the cache addon will help a lot precisely because it is application-level, avoiding an expensive search in the site map included from WebHome.

Before investigating details of system configuration (apart from a quick uptime and top to check on other workload), it's important to measure and profile the single-user performance of the system, and to work out if it's CPU or memory limited in some way. The goal was to make page fetches subsecond, which is already the case for non-WebHome pages, so there's a good chance that the cache addon will solve the problem as defined by Sean.

The Apache benchmark tools are a very good way to test performance - they make it possible to see mean, min, max and other statistics for various URLs, simulating a multi-user load if necessary.

One other idea: if the CacheAddOn is too hard to get working, it might be possible to move the TWiki installation to a new port number, e.g. 8080, and have Apache act as a server-side cache on port 80, forwarding uncached requests to port 8080. Or you could just install Squid, but that can be quite memory intensive, I think. The cache addon is probably better, though, since it has TWiki awareness, and the TWiki code doesn't really set any cache-control headers, as discussed in BrowserAndProxyCacheControl.
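A rough sketch of that front-end-cache setup, assuming Apache 1.3 with mod_proxy compiled in; the port, paths and cache size are illustrative, untested values:

```apache
# Sketch only: front-end Apache on port 80 acting as a caching proxy,
# with the real TWiki Apache moved to port 8080.
ProxyRequests Off
ProxyPass        /twiki/ http://localhost:8080/twiki/
ProxyPassReverse /twiki/ http://localhost:8080/twiki/
# Enable mod_proxy's disk cache (CacheSize is in KB)
CacheRoot "/var/cache/apache-proxy"
CacheSize 50000
```

Without cache-control headers from TWiki, though, the proxy has little to go on, which is why the TWiki-aware cache addon is likely the better bet.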

-- RichardDonkin - 16 May 2003

Sean - how about just removing the Site Map code from your WebHome pages? That's the first thing I did with my install (once I got past the initial fear that things would break if I changed anything on that page.) I find the sitemap not very useful, and a list of webs at the top is all I need, provided there is a link to the sitemap page for those who want the more detailed view. I'll post some more ramblings along these lines at SiteMapIsSlow...

-- MartinWatt - 17 May 2003

There's some resulting discussion of getting the Perl CacheAddOn working under ModPerl - speed mavens should see CacheAddOnDev :-)

-- RichardDonkin - 17 May 2003

Topic revision: r16 - 2003-07-27 - PeterThoeny