
Case study - Improving performance of TWiki using Glassfish/LRWPinJava

2008-09-29 - 21:06:46 by NagendraNagarajayya in General
A case study showing Glassfish/LRWPinJava improving the performance of TWiki has been published on the LRWPinJava website. Glassfish/LRWPinJava improves TWiki performance by more than 2x, with 10x less load, compared to Apache 1.3/cgi-bin on Solaris 10 and a T2000, a Sun SPARC CMT-based system.

Glassfish is an open source J2EE application server, while LRWPinJava is an open source implementation of the LRWP protocol in Java.

LRWPinJava is similar to FastCGI, SpeedyCGI, and mod_perl in that it uses a persistent connection to improve performance. LRWPinJava uses a CGI wrapper to run Perl applications such as TWiki unchanged.
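The persistent-connection idea can be illustrated with a toy sketch (my own illustration, not the LRWP protocol itself; a thread stands in for the separate long-lived back-end process, and the request/response format is made up):

```python
import queue
import threading

def backend_loop(requests, responses):
    # Long-lived back-end: initialised once, then serves request after
    # request over a persistent channel, instead of being re-spawned
    # (fork + compile) for every request as classic CGI does.
    for path in iter(requests.get, None):
        responses.put(f"200 OK: rendered {path}")

def serve(paths):
    # Front end: hands each incoming request to the persistent back-end
    # over the channel and collects the responses.
    requests, responses = queue.Queue(), queue.Queue()
    backend = threading.Thread(target=backend_loop, args=(requests, responses))
    backend.start()
    out = []
    for path in paths:
        requests.put(path)
        out.append(responses.get())
    requests.put(None)  # sentinel: shut the back-end down
    backend.join()
    return out
```

The point of the sketch is only the shape of the mechanism: one start-up cost is paid once, and every subsequent request reuses the warm process.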

More details of the case study and LRWPinJava can be obtained from the following URL: https://lrwpinjava.dev.java.net


Interesting reading. Thanks!

-- Rafael Alvarez - 29 Sep 2008

Yeah, interesting reading!

However, the article omits some important information, such as which topics were requested. That can change the results a lot. Consider a topic like TWikiFeatureProposals: the bottleneck there is the many embedded searches (and the high number of topics within Codev). From the results with 25 users and the improvement obtained, I guess the topic used was Main.WebHome, which is simple and small (my empirical measurements showed that about half of the processing time for this topic is the fork-and-compile phase, so no surprise there was a 2x gain ;-) ).
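The fork-and-compile overhead is easy to demonstrate with a toy benchmark (a sketch of the general effect, not a TWiki measurement; the "topic render" here is a trivial placeholder, so start-up cost dominates):

```python
import subprocess
import sys
import time

# Trivial stand-in for rendering a small, simple topic.
SCRIPT = "x = sum(range(1000))"

def render_cgi():
    # Classic CGI: launch a fresh interpreter (fork + compile) per request.
    subprocess.run([sys.executable, "-c", SCRIPT], check=True)

# Persistent back-end: the interpreter stays up and the code is compiled once.
CODE = compile(SCRIPT, "<topic>", "exec")

def render_persistent():
    exec(CODE, {})

def time_requests(render, n=20):
    # Total wall-clock time to serve n requests with the given mechanism.
    start = time.perf_counter()
    for _ in range(n):
        render()
    return time.perf_counter() - start

cgi_time = time_requests(render_cgi)
persistent_time = time_requests(render_persistent)
```

For a cheap "topic" like this, nearly all of `cgi_time` is interpreter start-up; for an expensive topic (heavy embedded searches), the same start-up cost is a much smaller fraction, which is why the choice of test topic matters so much.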

The description of mod_perl given there states that it can fork a configurable number of Perl processes. I would really like to know how to do that. The only way I know is to limit the MaxClients setting, which applies to the whole web server, not only to Perl applications, since mod_perl embeds the application into the web server (there are no forked Perl processes).

I studied TWiki performance and performance metrics for my undergraduate thesis (in Portuguese), which resulted in TWikiStandAlone, and one of the lessons I learned was that load is not a good way to measure CPU usage: it measures how many processes are waiting for a processor, so we get an obvious result: running TWiki as a CGI (one forked process per request) leads to high loads. On the other hand, if you use a fixed/controlled number of persistent back-end processes, the load simply can't get that high! (It can almost be managed to whatever value you want, by controlling the number of persistent back-ends.) How is this magic performed? Requests wait for an available back-end instead of waiting for a processor, and that queue is not counted in the load (it shows up instead as latency observed by users; however, the higher latency is compensated by the fact that each request is processed faster).
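The accounting difference can be reduced to one line of arithmetic (a toy model of my point above, not a real scheduler):

```python
def runnable_processes(concurrent_requests, backends=None):
    # Toy model of why load drops with a fixed back-end pool.
    # CGI: every request forks its own process, so all of them compete
    # for the CPU and every one counts toward the load average.
    # Pool: at most `backends` processes are runnable; the remaining
    # requests sit in a queue that the load average never sees.
    if backends is None:  # CGI mode: one forked process per request
        return concurrent_requests
    return min(concurrent_requests, backends)

# 50 concurrent requests:
#   CGI            -> 50 runnable processes (high load)
#   pool of 4      -> 4 runnable, 46 queued (low load, higher latency)
```

Same demand in both cases; only where the waiting happens (run queue vs. request queue) changes, and the load average only measures the former.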

At the beginning of the article SpeedyCGI and mod_perl are cited, and they take a similar approach to improving performance: eliminating the fork-and-compile phase. So, IMHO, another important piece of information is missing: how does Glassfish/LRWPinJava compare to those? And to FastCGI? I could also write a wrapper to use TWiki "unmodified" with FastCGI.
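The wrapper idea can be sketched as follows (a hypothetical illustration in Python; the real wrapper for TWiki would be Perl, e.g. built on FCGI.pm, and the handler/loop names here are made up):

```python
import os

def run_unmodified_cgi(handler, environ):
    # Run an unmodified CGI-style handler that reads its request from
    # os.environ, then restore the process environment so the next
    # request in the same persistent process starts clean.
    saved = dict(os.environ)
    try:
        os.environ.update(environ)
        return handler()
    finally:
        os.environ.clear()
        os.environ.update(saved)

def fastcgi_loop(handler, incoming):
    # `incoming` stands in for the accept loop of a persistent FastCGI
    # connection: one process, many requests, zero fork-and-compile.
    return [run_unmodified_cgi(handler, env) for env in incoming]
```

The script itself never learns it is not running as plain CGI; only the wrapper changes, which is what makes an "unmodified" comparison fair.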

So an (IMHO more) interesting test would be to compare Glassfish/LRWPinJava, FastCGI, SpeedyCGI, and mod_perl (with the prefork and worker MPMs), using both simple and complex topics.

That said, Glassfish/LRWPinJava is definitely another interesting execution mechanism, and the TWikiStandAlone architecture makes it fairly easy to add direct support for Glassfish (or any other web/application server that implements the LRWP protocol) with an LRWPEngineContrib. :-)

-- Gilmar Santos Jr - 30 Sep 2008


Topic revision: r2 - 2008-09-29 - RafaelAlvarez
