Bug 29939 (RESOLVED FIXED)
Summary: http/tests/xmlhttprequest/basic-auth.html timed out on Leopard bot
https://bugs.webkit.org/show_bug.cgi?id=29939
Eric Seidel (no email)
Reported 2009-09-30 15:52:38 PDT
http/tests/xmlhttprequest/basic-auth.html timed out on the Leopard bot: http://build.webkit.org/results/Leopard%20Intel%20Release%20(Tests)/r48942%20(5588)/results.html

I don't have frequency numbers for you yet, but I'll add notes to this bug as I see it. Not sure who to CC here, but I think ap has hacked on this code and might know.
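(For context: a test with this name presumably exercises XMLHttpRequest's HTTP Basic Auth path. A minimal sketch of that pattern follows; the URL and credentials are hypothetical, and the actual test's contents aren't reproduced in this bug.)

    var xhr = new XMLHttpRequest();
    // Credentials are passed via open(); the server challenges with a 401
    // and checks them. A synchronous request that never completes would
    // surface as exactly this kind of layout-test timeout.
    xhr.open("GET", "http://127.0.0.1:8000/xmlhttprequest/resources/basic-auth.php",
             false, "user", "pass");  // hypothetical URL and credentials
    xhr.send(null);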
Attachments
Patch (1.00 KB, patch), 2010-01-21 17:20 PST, Eric Seidel (no email), flags: ap: review-
Eric Seidel (no email)
Comment 1 2009-09-30 22:29:52 PDT
Eric Seidel (no email)
Comment 2 2010-01-09 19:38:21 PST
Just timed out on the Snow Leopard Release bot as well: http://build.webkit.org/results/SnowLeopard%20Intel%20Release%20(Tests)/r53047%20(4086)/results.html We seem to see a lot of trouble with these auth tests on all bots. I suspect we may have a deeper problem here.
Eric Seidel (no email)
Comment 3 2010-01-21 17:16:14 PST
Eric Seidel (no email)
Comment 5 2010-01-21 17:20:00 PST
Alexey Proskuryakov
Comment 7 2010-01-21 20:15:21 PST
Comment on attachment 47163: Patch

I don't want to skip this. We previously had authentication tests crash, and that helped catch a bad regression; there's no sense in losing regression testing for the auth code.
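(The attachment itself isn't reproduced above, but a skip patch of this kind conventionally amounts to one line added to the platform's Skipped list, e.g. LayoutTests/platform/mac/Skipped; a hypothetical sketch:)

    # Entries are plain test paths; a comment notes why the test is skipped.
    # Times out flakily: https://bugs.webkit.org/show_bug.cgi?id=29939
    http/tests/xmlhttprequest/basic-auth.html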
Eric Seidel (no email)
Comment 8 2010-01-22 03:56:51 PST
The auth tests are clearly flaky, as demonstrated in the numerous bugs above. What should we do if we don't skip them?
Alexey Proskuryakov
Comment 9 2010-01-22 08:35:52 PST
As long as there is no expectations mechanism that would let us keep testing for crashes and further regressions, all we can do is suffer the flakiness, assuming no one is going to try to fix it.
Eric Seidel (no email)
Comment 10 2010-01-22 12:15:10 PST
I agree that a test expectations mechanism is the right way to go; sadly, we don't have one yet. Leaving the bots red until some future time when we do is silly. It's also silly for one test to hold 11,000 tests hostage, which is exactly what flaky tests do: they reduce the usefulness of the rest of the tests by making it harder for someone to know whether their patch is correct, and people stop trusting the tests and the bots.

By saying "but what if it catches a crasher later" you're arguing that we should exchange some current value for some unlikely future value. The current value is green bots catching real regressions quickly, because people know that red means they broke stuff. The unlikely future value is that this test would be the only one to catch some crasher. We could put numbers on such an estimate, but I assure you the future value is not worth the current cost.

I agree skipping tests is less than ideal, but it's the only tool we currently have to keep the bots green, other than rollouts. Since we can't roll out the change that broke this test, or realistically the one which added it, the best solution we have is to skip it, with a plan to mark it flaky instead once we have the technology. :)
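(An expectations mechanism of the kind discussed here did later exist in WebKit's tooling. As a hedged, illustrative sketch of the idea, using syntax that postdates this discussion: an expectations entry can declare both Pass and Timeout acceptable, so the test keeps running, and can still catch crashes, without turning the bots red.)

    # Tolerate flaky timeouts while still failing on crashes or bad output.
    webkit.org/b/29939 http/tests/xmlhttprequest/basic-auth.html [ Pass Timeout ]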
Adam Barth
Comment 12 2010-12-27 00:47:55 PST
Eleven months later and this is still failing very frequently on the bots. Sadness.
Eric Seidel (no email)
Comment 13 2011-04-13 11:55:34 PDT
This must have been skipped or something. We haven't seen it fail in 4 months. Closing.