Bug 29939 - http/tests/xmlhttprequest/basic-auth.html timed out on Leopard bot
Summary: http/tests/xmlhttprequest/basic-auth.html timed out on Leopard bot
Alias: None
Product: WebKit
Classification: Unclassified
Component: New Bugs
Version: 528+ (Nightly build)
Hardware: PC OS X 10.5
Importance: P2 Normal
Assignee: Nobody
Depends on:
Blocks: 51613 33292
Reported: 2009-09-30 15:52 PDT by Eric Seidel (no email)
Modified: 2011-04-13 11:55 PDT


Patch (1.00 KB, patch)
2010-01-21 17:20 PST, Eric Seidel (no email)
ap: review-

Description Eric Seidel (no email) 2009-09-30 15:52:38 PDT
http/tests/xmlhttprequest/basic-auth.html timed out on Leopard bot


I don't have frequency numbers for you yet.  But I'll add notes to this bug when I see it.

Not sure who to CC here, but I think ap has hacked in this code and might know.
Comment 1 Eric Seidel (no email) 2009-09-30 22:29:52 PDT
Another one this evening:
Comment 2 Eric Seidel (no email) 2010-01-09 19:38:21 PST
Just timed out on the Snow Leopard Release bot as well:

We seem to see a lot of trouble with these auth tests on all bots.  I suspect we may have a deeper problem here.
Comment 3 Eric Seidel (no email) 2010-01-21 17:16:14 PST
Timed out on Snow Leopard Release just now:
Comment 5 Eric Seidel (no email) 2010-01-21 17:20:00 PST
Created attachment 47163
Comment 7 Alexey Proskuryakov 2010-01-21 20:15:21 PST
Comment on attachment 47163

I don't want to skip this. We previously had authentication tests crash, and that helped catch a bad regression - no sense in losing regression testing for auth code.
Comment 8 Eric Seidel (no email) 2010-01-22 03:56:51 PST
The auth tests clearly are flakey, as demonstrated in the numerous bugs above.  What should we do if we don't skip them?
Comment 9 Alexey Proskuryakov 2010-01-22 08:35:52 PST
As long as there is no expectations mechanism that lets us keep testing for crashes and further regressions, all we can do is suffer the flakiness, assuming no one is going to try and fix it.
Comment 10 Eric Seidel (no email) 2010-01-22 12:15:10 PST
I agree that a test expectations mechanism is the right way to go.  Sadly, we don't have one yet.  Leaving the bots red until some future time when we do is silly.

I also think it's silly for one test to hold 11,000 tests hostage, which is exactly what flakey tests do.  They reduce the usefulness of the rest of the tests by making it harder for someone to know whether their patch is correct.  People stop trusting the tests and the bots.

By saying "but what if it catches a crasher later" you're arguing that we should exchange some certain current value for some unlikely future value.  The current value is green bots catching real regressions quickly, because people know that red means they broke stuff.  The unlikely future value is that this test would be the only one to catch some crasher.  We could put numbers on such an estimate, but I assure you the future value is not worth the current cost.

I agree skipping tests is less than ideal, but it's the only tool we currently have to keep the bots green.  That, or rollouts.  Since we can't roll out the change that broke this test, or realistically the one which added it, the best solution we have is to skip it, with plans to mark it flakey instead when we have the technology. :)
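[The attached patch (attachment 47163) is not reproduced on this page.  For context: WebKit's per-platform Skipped lists (e.g. LayoutTests/platform/mac/Skipped) are plain text files of test paths, with `#` introducing comments, so a skip of the kind discussed here would presumably look something like the following sketch — the exact comment wording is illustrative, not quoted from the patch:]

```
# Flakey: frequently times out on the Leopard and Snow Leopard bots.
# https://bugs.webkit.org/show_bug.cgi?id=29939
http/tests/xmlhttprequest/basic-auth.html
```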
Comment 12 Adam Barth 2010-12-27 00:47:55 PST
Eight months later and this is still failing very frequently on the bots.  Sadness.
Comment 13 Eric Seidel (no email) 2011-04-13 11:55:34 PDT
This must have been skipped or something.  We haven't seen it fail in 4 months.  Closing.