Introduction

A bug in Apache revisions prior to 2.0.50 breaks TWiki operation in some (?) circumstances. 2.0.50 at first appears to solve this problem, but a couple of discrepancies have been noted, both in this topic and in Apache's bug tracker entry for this issue: ApacheBug:22030.

Status and Fix Requirements

Can someone please post the status of this bug/fix here, and which Apache revisions/patches fix it? In particular, do Apache 2.0.52 and greater fix this problem? I see conflicting reports of the problem being resolved and not resolved, both in this TWiki topic and in the referenced Apache bug report. Please insert the procedure here in place of this paragraph.

-- MattEngland - 13 Apr 2005

Bug Reproduction Procedures

Can someone review this section? I paraphrased/cloned ThomasWeigert's comments below.

-- MattEngland - 13 Apr 2005

For systems exhibiting the faulty behavior, the problem can be observed on topics including pages rendered using SimpleTableEntryUsingForms and on topics running the XpTrackerPlugin. There might be more.

A small plugin attached below (created by extending TablePlugin with some of the code in SimpleTableEntryUsingForms) exhibits the problem. This plugin is named ModTablePlugin, for obvious reasons. It will not interfere with or overwrite any existing files, unless you happen to have another plugin of the same name installed on your system. Just unzip this plugin in your TWiki installation.

Example topics which exhibit the problem (they will be installed in the Sandbox web) (NOTE: these topics exhibited the error back when TWiki.org was not patched... assuming it has been patched since then? -- MattEngland - 13 Apr 2005)

  • First load FruitDataShort. This topic should load fine.
  • Then load FruitData. You will find that the view script will hang.

If you add only one or two more lines into FruitDataShort (you need to do that in your editor directly to the file; just copy some lines from FruitData), the topic will not load any more.

Eventually, after a long time, the script terminates, but the apache log only contains the unhelpful error message "Premature end of script headers: view".

Matt, why are you asking this question? As far as I can tell, the bug I described has been resolved (at least, it does not show up on my systems).

-- ThomasWeigert - 16 Apr 2005

Consequences of TWiki workarounds

What are the consequences of the TWiki workarounds mentioned below (including the open(STDERR, ">>/tmp/error.log"); change to setlib.cfg)? E.g., will error messages not be displayed in some scenarios when they should be? Please insert the answer here in place of this paragraph.

-- MattEngland - 13 Apr 2005

Comments

We have recently installed a number of quick new Linux boxes, and I am eager to get TWiki running there. However, I ran into one (or more) very frustrating problems.

Short summary of symptom: TWiki appears to be running fine with no plugins installed. However, after adding some plugins I observe that on some topics TWiki will just hang there forever working on the page. The forked process is up and the progress bar in the browser is moving oh so slowly.

I have not done a systematic study, but I have observed this on topics including pages rendered using SimpleTableEntryUsingForms and on topics running the XpTrackerPlugin. There might be more.

I have produced a small plugin attached below by extending TablePlugin with some of the code in SimpleTableEntryUsingForms. This is sufficient to demonstrate the problem. This plugin is named ModTablePlugin, for obvious reasons. It will not interfere with or overwrite any existing files, unless you happen to have another plugin of the same name installed on your system. Just unzip this plugin in your TWiki installation.

I added two example topics which exhibit the problem (they will be installed in the Sandbox web).

  • First load FruitDataShort. This topic should load fine.
  • Then load FruitData. You will find that the view script will hang.

The interesting thing is that if you add only one or two more lines into FruitDataShort (you need to do that in your editor directly to the file; just copy some lines from FruitData), the topic will not load any more.

I think that eventually, after a long time, the script terminates, but the apache log only contains the unhelpful error message "Premature end of script headers: view".

I have put printf (I mean TWiki::writeDebug) all over the TWiki code to trace where the problem occurs but cannot pinpoint the source of the difficulty.

I experience this problem with a fully customized TWiki (Athens version), a fully customized TWiki (Beijing version), and a minimal Beijing system with no plugins added. All these systems run flawlessly on Windows 2000 and Solaris, though both of those use Apache 1.3.5.

The Apache 2.0 installation runs on Red Hat 9. I have read through the references related to Apache 2.0 problems and updated CGI.pm to 3.01. I don't think the LANG setting affects this behavior, but my $siteLocale is set to en_US.ISO-8859-1 anyway.

The Apache server is as it came with Red Hat, except that I had to add the mod_auth_ldap to get the web server to authenticate through LDAP. This works fine.

Any advice would be greatly appreciated. My users really want to see TWiki on the new Linux boxes, as the Solaris machines are sooo slow...

-- ThomasWeigert - 15 Jan 2004

By the way, the problem does not exist with Apache 1.3 on the Linux boxes either. We have downgraded all our Linux-based web servers to Apache 1.3 to allow TWiki to run.

-- ThomasWeigert - 23 Feb 2004

A patch resolving the Apache 2 hang has been posted at Owiki:ApacheTwoHangs.

-- WillNorris - 16 Apr 2004

Will and Michael, this is great news. As I have downgraded all my servers, would it be possible for you to run the test attached to this topic at your server to see whether the solution also subsumes this problem?

-- ThomasWeigert - 16 Apr 2004

A simple enough fix, and not a bad solution either. Will go in CVS soonish.

-- WalterMundt - 20 Apr 2004

As a reminder, the fix should be aware of platform specifics, e.g. not just Unix.

-- PeterThoeny - 20 Apr 2004

To be more specific on my comment above (since someone was misinterpreting it), this bug only happens on Apache 2.0 on Unix. However, the proposed fix of using $logfile = "/dev/null" is Unix specific and confuses Windows installs.

This issue is listed as an Apache 2.0 bug, ApacheBug:22030, with a patch for Apache available since 15 Apr 2004 (and fix included in Apache 2.0.50 -- RD). As a workaround, they recommend starting all CGIs by re-opening STDERR to a plain file, e.g. open(STDERR, ">>/tmp/error.log").

-- PeterThoeny - 24 Apr 2004

So doesn't that mean that all that is needed to make this a clean cross-platform fix is to change open(STDERR, ">/dev/null") to open(STDERR, ">>$logdir/apache-error.log")? (The same $logdir as is defined in TWiki.cfg.)

-- MattWilkie - 27 Apr 2004

Good idea. However, $logdir is not known at the time the setlib.cfg file is executed.

The above proposed fix has the drawback that CGI errors are no longer reported to the browser via CGI::Carp.

Since this is a known Apache 2.0 bug not affecting all sites, we should offer an optional workaround for sites using early Apache 2.0 versions. That way we are not penalizing all users with the drawback.

Fix in TWikiAlphaRelease:

===================================================================
RCS file: /cvsroot/twiki/twiki/bin/setlib.cfg,v
retrieving revision 1.8
diff -r1.8 setlib.cfg
3c3
< # Copyright (C) 2002-2003 Peter Thoeny, peter@thoeny.com
---
> # Copyright (C) 2002-2004 Peter Thoeny, peter@thoeny.com
20a21,35
>
> # -------------- Only needed to work around an Apache 2.0 bug on Unix
> #
> #    If you are running TWiki on Apache 2.0 on Unix you might experience cgi
> #    scripts to hang forever. This is a known Apache 2.0 bug. A fix is
> #    available at http://issues.apache.org/bugzilla/show_bug.cgi?id=22030.
> #    It is recommended to patch your Apache installation.
> #
> #    As a workaround, uncomment one of the following two lines. (As a drawback,
> #    errors will not be reported anymore to the browser via CGI::Carp)
>
> # open(STDERR, ">>/dev/null");         # throw away cgi script errors, or
> # open(STDERR, ">>/tmp/error.log");    # redirect errors to a log file of choice
>

A warning for Apache 2.0 users is also added to TWikiInstallationGuide.

-- PeterThoeny - 27 Apr 2004

Sven asked on TWikiDevMailingList:

> did you test this?

As you know I usually test changes carefully before committing to CVS. In this case I applied a recommended fix but could not test it because I do not have the right environment. Help in testing is appreciated.

-- PeterThoeny - 28 Apr 2004

Peter, I have the "right" environment; if you can instruct me on how to get the version that should be tested from the repository I will take a look. Even better, as at work I need to go through a firewall which always causes problems, if you could send a zip with the release you want me to test....

-- ThomasWeigert - 28 Apr 2004

Thanks for the offer Thomas. To test the workaround, add

open(STDERR, ">>/tmp/error.log");

to your bin/setlib.cfg file before the $twikiLibPath setting. CVSget:bin/setlib.cfg is the latest version for reference.
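
For orientation, a minimal sketch of how the top of bin/setlib.cfg might look with the workaround in place (the log path and the surrounding settings are illustrative, not the exact shipped file):

# bin/setlib.cfg (sketch only; the log path and surrounding settings are illustrative)

# Workaround for the Apache 2.0 CGI hang: re-open STDERR to a plain file.
# Drawback: CGI::Carp can no longer report script errors to the browser.
open(STDERR, ">>/tmp/error.log");

# ... the existing settings follow, e.g.:
$twikiLibPath = "../lib";   # path to the TWiki lib directory

1;   # setlib.cfg must return a true value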

The real fix however is to patch your Apache installation.

-- PeterThoeny - 28 Apr 2004

The twiki version I have is Beijing. Is that what you want me to test?

  • Any version from Beijing to the latest Alpha is OK since there was no change in the setlib.cfg -- PeterThoeny - 28 Apr 2004

-- ThomasWeigert - 28 Apr 2004

This version of a null device should be cross-platform.

package Dev::Null;

# Create filehandles that go nowhere.

sub TIEHANDLE { bless \my $null }
sub PRINT {}
sub PRINTF {}
sub WRITE {}
sub READLINE {''}
sub READ {''}
sub GETC {''}

1;

Use it something like this:

use Benchmark qw(timeit);   # timeit() is from the Benchmark module

local *NULL;
tie *NULL, 'Dev::Null';
my $fh = select *NULL;  # i.e. dump STDOUT
my $t = timeit( 100, \&main );
select $fh;             # restore the previous default output handle
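
For completeness, a hedged, untested sketch of how the same tie could be pointed at STDERR, which is the handle the Apache workaround above is concerned with:

use Dev::Null;                       # assumes the package above is saved as Dev/Null.pm on @INC
tie *STDERR, 'Dev::Null';            # discard anything printed to STDERR
print STDERR "this goes nowhere\n";  # handled by Dev::Null::PRINT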

-- NicholasLee - 29 Apr 2004

Fixed broken verbatim tags; removed the BugResolved flag, as there seems to be some question about whether it is really fixed.

Wow, Nicholas drops in for a visit. Hi Nicholas! smile

-- MattWilkie - 30 Apr 2004

I have upgraded to the current version of Apache 2 and am happy to report that this bug does not occur any more. The environment I am running is Apache 2.0.52 (Unix) and Perl 5.8.0.

I will do some further testing (there appear to be some interactions between mod_perl and some plugins) but have set the BugResolved flag in the meantime.

-- ThomasWeigert - 24 Oct 2004

ApacheBug:22030, which causes this, was fixed in Apache 2.0.50, so I suggest we mandate that version of Apache 2 in general, with the workaround suggested for people stuck with earlier 2.0 versions.

-- RichardDonkin - 29 Oct 2004

ApacheSnipsOopsmore may have been caused by one of the patches above - it's best to upgrade to latest Apache 2 version (2.0.50 or higher, currently 2.0.52).

-- RichardDonkin - 17 Jan 2005

I'm running Apache 2.0.52, and this is still a problem. I'm still looking to see if any of the suggested patches work (I'm hopeful about STDOUT->blocking(0)).

-- AndyBakun - 16 Feb 2005

I'm running Apache 2.0.54 and Perl 5.8.4. Older versions of TWiki (07 May 2004 Beta) work fine, but the Cairo release hangs on large pages. Like many others who have reported this problem, I'm running on a Red Hat (WS 3.0) box. My problem seems to be the HangsSavingLargePages problem, also discussed in TimeOutSavingTWikiPreferences. It appears that the fix to oops works, but the question is: how did we get to oops in the first place? In my case, I edit a large page, it asks for my login, and when I hit preview, it hangs in oops. When it works, though, it never goes into oops! Digging into this a bit more, it appears that, when it fails, somehow $query->remote_user returns nothing, so TWiki thinks it has to validate, so it goes to oops.

-- DougClaar - 24 Jun 2005

I can reproduce this problem with Apache 2.0.49 on SuSE 9 by clicking on the relock link in bin/testenv. The problem is resolved with the setlib.cfg workaround. This strikes me as a MUCH simpler test case...

-- MartinRothbaum - 06 Oct 2005

Wow, this really sucks. Put in: use IO::Handle; STDOUT->blocking(0); to fix HangsSavingLargePages, and create the ApacheSnipsOopsmore problem. Take it out, get the HangsSavingLargePages problem. Catch-22!
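
For reference, a hedged sketch of where the STDOUT->blocking(0) workaround mentioned above would sit, e.g. near the top of a CGI script such as bin/view (the placement is illustrative):

use IO::Handle;          # gives filehandles the blocking() method
STDOUT->blocking(0);     # non-blocking writes avoid the save/preview hang,
                         # but truncated output can then snip the oops page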

For the record, I'm running Apache 2.0.54 on Red Hat AS4, with the latest Cairo release. I wonder if Dakar fixes this?

-- DougClaar - 06 Feb 2006

Doug - In answer to your question, I'm running Dakar on Apache 2.0.46 / Red Hat. I get the error below in my browser after a long hang. My guess is that this has the same root cause. I haven't tried the fix given yet, as I'm not clear on the implications of the ApacheSnipsOopsmore problem. I will look into this a bit more when I get time; for now, my assumption is that Dakar doesn't fix this.

Proxy Error

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /bin/save/TWiki/BlackListPlugin.

Reason: Error reading from remote server

Apache/2.0.46 (Red Hat) Server at www.the-data-mine.com Port 80

-- AndyPryke - 29 Mar 2006

Worse, the workaround in oops doesn't seem to work.

-- DougClaar - 12 Apr 2006

I think I've found the problem! I'm not sure yet how to fix it, though. If you go to view a page, it is a "POST .../twiki/bin/view". This means that apache wants to write the POST data to view's STDIN. But view is busy writing a Set-Cookie and the 4096 bytes, so the pipe is full. I think that modifying view to read the POST data first would solve this problem. I am not familiar enough with the code (yet) to propose a fix...

-- DougClaar - 17 Apr 2006

Are you sure "Apache .. write the POST data to view's STDIN"? As far as I understand the way Apache works, the POST request is fully read before the CGI is invoked, so by the time perl starts up (and the session cookie is written) the POST data should all be in CGI memory. In addition, the writing of the session cookie to the browser would occupy the out pipe (STDOUT). Even if Apache does use STDIN to communicate with the CGI process, view doesn't "read the POST data". It uses new CGI, which is done a long time before anything is written to STDOUT.

However switching off blocking IO certainly does seem to suggest some interaction with the comms pipes is causing the problem, so I'm maintaining an open mind. I'm very interested to hear the results of your experiments.......

-- CrawfordCurrie - 17 Apr 2006

Ya would think. But, this is what strace shows:

poll
accept -> 13
read 13 -> "POST /twiki/bin/save/..."
open .htaccess, read, close
open .../twiki/bin/view, read, close
pipe(14,15)
pipe(16,17)
pipe(18,19)
clone 
close 14,17,19                                  dup(14,0)
                                                dup(17,1)
                                                dup(19,2)
                                                exec(...twiki/bin/view)
                                                bunch-o-stuff
                                                open twiki/data/TWiki/WebPreferences.txt, read, close
write 15, "text=%23...",5540)                   write(1, "Set-Cookie...", 171)
write 15, "are%3A+%5C%0D...",1444 -> EAGAIN     write(1, "<!DOCTYPE html...", 4096)
poll(15, POLLOUT, 5 minutes)

5 minutes later...
read(13, "+%3Dmode%3D++...",8000) -> 6728       write(1, "ewBackground/preview2bg.gif);\n\t...")
...And on our merry way we go.

I might not understand the strace correctly, or I might have glossed over something in the "bunch-o-stuff" area, because that section is quite long, but the chunk above is where things end up. There is no read(0) in the child before the write(1) that hangs.
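
To illustrate the pattern in the strace, here is a hedged, stand-alone Perl sketch (not Apache or TWiki code) of the suspected deadlock: the parent keeps writing "POST data" to the child's stdin while the child keeps writing output without reading, so both pipe buffers fill and both processes block:

#!/usr/bin/perl
# Stand-alone sketch (not Apache or TWiki code) of the suspected deadlock.
# Both writes are well above the ~64KB kernel pipe buffer, so each side
# blocks waiting for the other to read. Running this simply hangs, which
# is the point; interrupt it with Ctrl-C.
use strict;
use warnings;

pipe( my $to_child_r,   my $to_child_w )   or die "pipe: $!";   # parent -> child's stdin
pipe( my $from_child_r, my $from_child_w ) or die "pipe: $!";   # child's stdout -> parent

my $pid = fork();
die "fork: $!" unless defined $pid;

if( $pid == 0 ) {
    # child: acts like the CGI script, writing its response before reading stdin
    close $to_child_w;
    close $from_child_r;
    print {$from_child_w} 'x' x 200_000;             # blocks once the pipe fills
    my $post_data = do { local $/; <$to_child_r> };  # never reached while blocked
    exit 0;
}

# parent: acts like the web server delivering the POST body
close $to_child_r;
close $from_child_w;
print {$to_child_w} 'p' x 200_000;                   # also blocks -> deadlock
close $to_child_w;
my $response = do { local $/; <$from_child_r> };
waitpid( $pid, 0 );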


I'm assuming that the left column is the parent Apache process, and the right column is the child process forked to run TWiki.

pipe(16,17) opens a pipe with read FD 16 and write FD 17. The child process dups FD 17 to 1, which the TWiki child process subsequently writes to, so we can deduce that it is STDOUT. By analogy FD 0 is STDIN in the child, which is duped from FD 14 in the parent. The parent writes the content of a POST to FD 15, but gets an EAGAIN. This is presumably because the parent process has selected non blocking IO, but the write would block. I assume the poll is the parent process backing off.

Now, why would the write block? Presumably because the child is not ready to read on STDIN. Why wouldn't the child be ready to read? Presumably because CGI thinks it has already read everything it needs to read from the parent, i.e. it thinks it has a complete POST, so it has given up reading. CGI::init is supposed to fully read the POST, so as long as it is well formed, there should be nothing left in STDIN. One way to confirm/deny this would be to fully read STDIN immediately after the new CGI call, and see what comes out.

I wish I knew what was happening in bunch-o-stuff, especially in respect of read(0, ...) calls. frown

-- CrawfordCurrie - 18 Apr 2006

I've reproduced the problem without involving TWiki, and I've submitted a bug against apache: http://issues.apache.org/bugzilla/show_bug.cgi?id=39342

Next step is to figure out if there's a workaround for TWiki...

-- DougClaar - 18 Apr 2006

In twiki/lib/UI.pm, I added a line to read STDIN. In my tests, it was empty, except for in the POST that is done for preview. Yet, CGI still had the POST parameters, which would seem to say that it had already read STDIN, and the page saved correctly. Pretty strange. Note, I'm not advocating making this change to TWiki...yet... smile

*** /var/www/html/twiki/lib/TWiki/UI.pm 2006-04-17 16:51:00.000000000 -0700
--- /var/www/html/twiki.dakar/lib/TWiki/UI.pm   2006-04-20 14:17:30.000000000 -0700
***************
*** 62,67 ****
--- 62,69 ----
      if( $ENV{'GATEWAY_INTERFACE'} ) {
          # script is called by browser
          $query = new CGI;
+       my @foo=<STDIN>;
+
      } else {
          # script is called by cron job or user
          $scripted = 1;

-- DougClaar - 20 Apr 2006

Doug, I found mention of a similar problem in a mailing list archive, and tracked it back to ApacheBug:12068

-- CrawfordCurrie - 21 Apr 2006

Ok, other than the apache bug, I've tracked this down to a TWiki/CGI.pm interaction. I haven't found a fix; consider this an update.

twiki/bin/view
 use TWiki::UI::View
 TWiki::UI::run( \&TWiki::UI::View::view )
   use TWiki;
     use TWiki::Attach
       use TWiki::Store
         use TWiki::Meta
           use TWiki::Merge
             use CGI
             ...
             my $conflictB=CGI::b('CONFLICT')
The explicit call to CGI::b calls CGI::new, and initializes CGI. Then, in TWiki::UI, the line:
$query = new CGI
is the "official" initialization of CGI. But the CGI module believes that it already has the POST values, so it skips the part where it reads STDIN. And in fact, @QUERY_PARAM has the values [text,originalrev,skin,cover,formtemplate,templatetopic,topicparent,newtopic,cmd,sig,action_preview,referer] which appear to be the things passed in the form. I still haven't figured out how that happens, since I don't see any read(0, in the strace at that point.
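
As an illustration of this CGI.pm behaviour, here is a hedged, stand-alone sketch (not TWiki code; the form data is made up):

#!/usr/bin/perl
# Stand-alone sketch (not TWiki code) of the CGI.pm behaviour described above;
# the form data is made up. A functional call such as CGI::b() creates the
# default query object and reads the POST body; a later "new CGI" reuses the
# cached parameters instead of reading STDIN a second time.
use strict;
use warnings;
use CGI ();

my $body = 'text=hello&skin=plain';
$ENV{REQUEST_METHOD} = 'POST';
$ENV{CONTENT_TYPE}   = 'application/x-www-form-urlencoded';
$ENV{CONTENT_LENGTH} = length $body;

open my $fake_stdin, '<', \$body or die $!;
*STDIN = $fake_stdin;                  # feed the "POST data" to CGI.pm

my $bold = CGI::b('CONFLICT');         # functional call; this is what reads STDIN
my $q    = new CGI;                    # the "official" object: no second read
my $text = $q->param('text');
print "$text\n";                       # prints "hello"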

Back to putting print statements into CGI.pm...

-- DougClaar - 25 Apr 2006

Ugh, that's horrible. The implication is that you can't call methods in CGI at any time during the BEGIN phase. That's a frightful restriction!

The more we look at CGI.pm, the more horrible it looks......

BTW, what rev of CGI.pm are you looking at? I am on 3.17 and don't see the problem.

-- CrawfordCurrie - 26 Apr 2006

I believe that this problem is also logged as http://issues.apache.org/bugzilla/show_bug.cgi?id=32744 which was also closed by the apache folks.

I may have spoken too soon on the CGI.pm thing. (I'm on 3.17, BTW). After I looked at the apache bug you pointed out, I had a thought to try changing from mod_cgi to mod_cgid, which uses sockets instead of pipes. Now I get the error mentioned in the apache bug 32744, but I don't hang for 5 minutes.

Anyway, don't hang CGI.pm yet!

sorry...

-- DougClaar - 26 Apr 2006

CGI.pm slurps in all of STDIN (well, it reads up to content-length bytes) the first time the init routine is invoked; thereafter it reuses that input, which is why there are no more reads on STDIN. It's working correctly.

-- DiabJerius - 27 Apr 2006

We realise that, but it is not working correctly for Doug. Read back through the dialog above.

-- CrawfordCurrie - 28 Apr 2006

There are several environment variables used by CGI.pm which determine when it will read from stdin. It would be very useful to see a dump of the environment variables in Doug's case. From my own experience in debugging this problem, the above behavior is consistent with how CGI.pm should operate with different REQUEST methods. For example, if a POST is redirected to a view (by using an ErrorDocument 401), the redirected view will not see a POST, and will not (and should not) read stdin. I'm seeing this cause a block on one platform, but not another. Sometimes authentication is "lost" between the edit and save operations, and this causes a hang as above. I can reproduce this at will on one platform but not another. I'll provide a summary of my results shortly.

-- DiabJerius - 29 Apr 2006

I've come up with a method of deterministically triggering a hang on a Fedora Core 3 server running apache 2.0.53. The symptoms seem to fit the pattern of previous reports.

Here are the details. A spanking brand new install of TWiki 4.0.2 on the above server was hanging when performing topic edits. Here's the log for a hung event:

| 28 Apr 2006 - 13:30 | DiabJerius | edit | TWiki.TablePlugin |  | 131.142.41.210 |
| 28 Apr 2006 - 13:30 | TWikiGuest | view | TWiki.TWikiRegistration |  Mozilla | 131.142.41.210 |
| 28 Apr 2006 - 13:33 | DiabJerius | save | TWiki.TablePlugin |  | 131.142.41.210 |

The key datum is the interposition of the view of TWiki.TWikiRegistration. Here's the pattern for non-hanging events.

| 28 Apr 2006 - 13:36 | DiabJerius | edit | TWiki.TablePlugin |  | 131.142.41.210 |
| 28 Apr 2006 - 13:36 | DiabJerius | save | TWiki.TablePlugin |  | 131.142.41.210 |

I've determined that the extra view of TWikiRegistration is due to a 401 redirect based on the ErrorDocument directive in bin/.htaccess. For some reason the browser (Firefox) isn't passing along the authentication information for the save. This is strange, as it does it for the initial edit. I've see the same problem with Galeon (also Gecko based). I haven't been able to predict when the browser will do this.

The interesting bit is why the hang occurs. The save is done via a POST, which gets redirected to view/TWiki/TWikiRegistration. The interesting environment variables for the view run look like this (this is for a different topic):

| Variable | Value |
| CONTENT_LENGTH | 25667 |
| CONTENT_TYPE | application/x-www-form-urlencoded |
| DOCUMENT_ROOT | /data/loss/www/default/htdocs |
| HTTP_COOKIE | TWIKISID=cd507d42c5e80cdaa0233ae63d22bf91 |
| HTTP_COOKIE2 | $Version="1" |
| HTTP_HOST | jeeves.cfa.harvard.edu |
| HTTP_REFERER | http://jeeves.cfa.harvard.edu/TestWiki/bin/edit/Sandbox/TestTopic0 |
| PATH_INFO | /TWiki/TWikiRegistration |
| PATH_TRANSLATED | /data/loss/www/default/htdocs/TWiki/TWikiRegistration |
| QUERY_STRING | |
| REDIRECT_REQUEST_METHOD | POST |
| REDIRECT_SCRIPT_URI | http://jeeves.cfa.harvard.edu/TestWiki/bin/save/Sandbox/TestTopic0 |
| REDIRECT_SCRIPT_URL | /TestWiki/bin/save/Sandbox/TestTopic0 |
| REDIRECT_STATUS | 401 |
| REDIRECT_URL | /TestWiki/bin/save/Sandbox/TestTopic0 |
| REQUEST_METHOD | GET |
| REQUEST_URI | /TestWiki/bin/save/Sandbox/TestTopic0 |
| SCRIPT_FILENAME | /data/loss/www/default/htdocs/TestWiki/bin/view |
| SCRIPT_NAME | /TestWiki/bin/view |
| SCRIPT_URI | http://jeeves.cfa.harvard.edu/TestWiki/bin/save/Sandbox/TestTopic0 |
| SCRIPT_URL | /TestWiki/bin/save/Sandbox/TestTopic0 |

The important thing to note is that REQUEST_METHOD is GET, so CGI.pm won't read STDIN (corroborated by tracing CGI.pm), which leaves Apache stuck with all of the POST data. This fits the pattern of the earlier analyses: Apache wants to write to view, but view's not listening, and vice versa. I haven't tweaked the data size to see if 4096 is a magic number, but I'm sure it will be.

To isolate the problem I set up a pristine apache and TWiki install on another host. I was able to duplicate the behavior using the POST program (part of the LWP package) and thus remove the browser dependency. Here's what happens with the POST:

% POST -S -C 'USER:PASSWORD' http://bzzzt/TestWiki/bin/save/Sandbox/TestTopic0 < long_form_data
POST http://bzzzt/TestWiki/bin/save/Sandbox/TestTopic0 --> 401 Authorization Required
POST http://bzzzt/TestWiki/bin/save/Sandbox/TestTopic0 --> 302 Moved

tcpdump indicates that the client resends the entire POST data when it receives the 302. Manually adding the authentication information to the header using LWP::UserAgent directly (or using wget which does this in one step, rather than the two done by POST) bypasses the 401 completely and works as expected.
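
A hedged sketch of the "LWP::UserAgent directly" variant mentioned above (the host, topic, credentials and file name are illustrative):

#!/usr/bin/perl
# Sketch of the "LWP::UserAgent directly" approach: the Authorization header
# is added up front, so the 401 -> ErrorDocument redirect never happens.
# Host, topic, credentials and file name are illustrative.
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $form = do { local $/; open my $fh, '<', 'long_form_data' or die $!; <$fh> };

my $req = HTTP::Request->new( POST => 'http://bzzzt/TestWiki/bin/save/Sandbox/TestTopic0' );
$req->authorization_basic( 'USER', 'PASSWORD' );
$req->content_type( 'application/x-www-form-urlencoded' );
$req->content( $form );

my $ua   = LWP::UserAgent->new;
my $resp = $ua->request( $req );
print $resp->status_line, "\n";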

I have been unable to duplicate this problem on a Debian Sarge box, which runs apache 2.0.54. Even with the failed authentication, everything works as expected. As I can duplicate the problem on two Fedora Core 3 boxes, both running 2.0.53, one with a complicated server setup, the other as simple as can be, I'm tending to believe that the problem may be solved in 2.0.54. As Fedora core 4 ships with 2.0.54 (and I need to upgrade the servers anyway), I'll be upgrading my test Fedora box to Core 4 shortly, and will see if the problem persists.

-- DiabJerius - 30 Apr 2006

I've applied a slightly modified version of Doug's patch to UI.pm; this seems to fix the problem on apache 2.0.53.

*** UI.pm.orig  Sat Apr  1 00:44:35 2006
--- UI.pm       Sun Apr 30 12:35:05 2006
***************
*** 62,67 ****
--- 62,83 ----
      if( $ENV{'GATEWAY_INTERFACE'} ) {
          # script is called by browser
          $query = new CGI;
+ 
+       # drain STDIN.  This may be necessary if the script is called
+       # due to a redirect and the original query was a POST. In this
+       # case the web server is waiting to write the POST data to
+       # this script's STDIN, but CGI.pm won't drain STDIN as it is
+       # seeing a GET because of the redirect, not a POST.  This script
+       # tries to write to STDOUT, which goes back to the web server,
+       # but the server isn't paying attention to that (as its waiting for
+       # the script to _read_, not _write_), and everything blocks.
+       # Some versions of apache seem to be more susceptible than others to
+       # this.
+       my $content_length = 
+         defined($ENV{'CONTENT_LENGTH'}) ? $ENV{'CONTENT_LENGTH'} : 0;
+       read(STDIN, my $buf, $content_length, 0 )
+         if $content_length;
+ 
      } else {
          # script is called by cron job or user
          $scripted = 1;

-- DiabJerius - 30 Apr 2006

Thank you! Thank you! Thank you! I kiss your feet! (yuck?!) smile I was poking around at this area, but I just couldn't quite connect the dots. Yea! (Do my happy TWiki dance...)

-- DougClaar - 02 May 2006

Diab, since Doug is busy kissing your feet, and I'm reluctant to kiss any other part of your anatomy, I'm forced to restrict myself to saying: well done, old chap! Your analysis above is incredibly helpful smile

-- CrawfordCurrie - 03 May 2006

Diab, thank you. This appears to have solved our hanging problem on TWiki 4.0.2, Apache 2, Debian Sarge (2.6). It was hanging randomly on topic saves and cancels.

-- RyanMarotz - 19 May 2006

Besides kissing each other's feet wink should Diab's code change go into 4.0.3?

I just checked and the code has not been implemented.

-- KennethLavrsen - 20 May 2006

I'm more comfortable with the latter than the former!

-- DiabJerius - 22 May 2006

I have tested the patch. Since I do not really see the issue, I mainly checked for damage, and I did not find any.

So I have checked the patch into both DEVELOP and TWiki4 branches on SVN.

-- KennethLavrsen - 27 May 2006

Please see DeathByRedirect and comment.

-- CrawfordCurrie - 28 May 2006

Diab's patch just fixed my 4.0.2 install under Apache/1.3.29 under OpenBSD.

-- GeoffThe - 06 Jun 2006

I can't believe this is still an issue on apache 2.2.2. But the patch works great smile

-- YanMinHong - 25 Jul 2006

See Bugs:Item2753. And see Support/NewHPitaniumInstallHangsInRegis

This problem is nasty. We are fighting an Apache bug with a workaround, and this workaround creates new problems.

We either need a better fix for this issue, or at least we need to make the workaround conditional. But what then is the condition?

A last resort is to enable/disable the workaround code in configure with an EXPERT setting.

-- KennethLavrsen - 12 Aug 2006

Related to the Apache 2.0 hang, there was a provision in LocalSite.cfg.txt to redirect STDERR to /dev/null or an error.log file. As this was done in LocalSite.cfg and read in BEGIN, you may have run into problems if you were using a perl accelerator - like speedy-cgi - that closes STDERR after each request: STDERR got closed and never reopened. Whatever other consequences that had, you certainly lost all error messages thereafter. The code has been moved into TWiki::UI::run so that STDERR is reopened at the beginning of each request. See Bugs:Item2363.
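
A hedged sketch of the idea (not the actual Bugs:Item2363 change; the log path is illustrative):

# Called at the start of every request (e.g. from TWiki::UI::run) so that a
# persistent perl accelerator which closed STDERR does not silently swallow
# later error output. The log path is illustrative.
sub reopenSTDERR {
    my $log = shift || '/tmp/twiki-error.log';
    # If the open fails there is nowhere sensible to complain, so just carry on.
    open( STDERR, '>>', $log );
}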

-- MichaelDaum - 16 Aug 2006

For a quick fix at the bottom of oops we added

 close(STDIN);

and that seemed to fix the problem.

-- JimRuzewski - 31 Aug 2006

I installed the Dakar version 4.0.4 on HP-UX 11.00 with Apache 2.0.53, and I had a terrible problem with saving - the save script always hung.

I read this discussion and removed the "fix" from UI.pm.

Now I use this configuration:

    if( $ENV{'GATEWAY_INTERFACE'} ) {
        # script is called by browser
        $query = new CGI;
    } else {
        # script is called by cron job or user
        $scripted = 1;
    }
I do not know why exactly, but now I have no problem with hanging! Why? Thanks: http://twiki.org/cgi-bin/view/Support/SeriousInstallationProblem

-- MartinVich - 17 Nov 2006
