EWS bots do a clean-tree run when a test fails, in order to confirm whether the test failure is pre-existing or introduced by the patch. A lot of the time, test-failure information is already available in the results database (through build.webkit.org bots). We should try to get the clean-tree failure info from the results database using its API, and avoid doing clean-tree runs. This would save a lot of time on EWS bots.
How can we confirm that this will be sufficient? It feels like EWS-only failures are somewhat common.
I think that doing this in addition to retries is better than "instead" as the bug suggests. We can still retry with a clean tree if the failures are unknown.
(In reply to Alexey Proskuryakov from comment #1)
> How can we confirm that this will be sufficient? It feels like EWS-only
> failures are somewhat common.
> I think that doing this in addition to retries is better than "instead" as
> the bug suggests. We can still retry with a clean tree if the failures are
> unknown.
We could do the retry if the failures are unknown, yes. And we might start that way.
I think the retry is generally a bad idea, though. It means that in the case where an engineer did introduce a regression, we are going to take significantly longer to actually report that regression than in the successful case. Given the longer turnaround for layout tests, I think this pretty significantly reduces the usefulness of EWS. We're essentially delaying results in the exact circumstances where those results would be most impactful.
I suppose we could make the bubble orange on retry, but I feel like we might be training folks to ignore orange bubbles, given how flaky things are at the moment.
If we don't retry, then we will have a lot of false positives, training engineers to ignore EWS altogether, not just in the orange state.
Created attachment 383973
Comment on attachment 383973
This patch should have been uploaded to https://bugs.webkit.org/show_bug.cgi?id=204409.
(In reply to Alexey Proskuryakov from comment #3)
> If we don't retry, then we will have a lot of false positives, training
> engineers to ignore EWS altogether, not just in orange state.
I'm not entirely convinced that we need retries to get rid of false positives. There isn't a good reason that clean-tree runs in EWS should differ from trunk results at all, and if we're cross-referencing trunk results, we could even expand the set of commits we're looking at to find flaky tests.
This approach hinges on us quickly fixing things like https://bugs.webkit.org/show_bug.cgi?id=204409, but I really do think that things like that are bugs, not expected behavior.
One big reason is that EWS is different hardware. We definitely have quite a few tests that are only flaky or failing on EWS.
Seems like one thing we agree on is that, if the test(s) that fail in EWS are also marked as failing in the results database, then we can skip the retry/clean-tree run.
If a test failure is new (not visible in the results database), then we can continue with the existing retry/clean-tree-run logic.
This would still be a significant efficiency improvement, given that most of the time, test failures seen in EWS are also seen on build.webkit.org.
Correct. I think that we should use a fairly strict definition of "failing in the results database": something like the test having failed within the last 24 hours, and maybe even requiring the same OS/flavor.
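The decision logic discussed above could be sketched roughly as follows. This is only an illustration of the proposed policy, not the actual EWS implementation: the record format, field names, and the `is_known_failure`/`should_run_clean_tree` helpers are all hypothetical, and the real results database API may expose this data quite differently.

```python
from datetime import datetime, timedelta

def is_known_failure(test_name, config, recent_results, window_hours=24):
    """Return True if `test_name` failed within `window_hours` on the same
    OS/flavor, per the strict definition discussed above.

    `recent_results` is a hypothetical list of dicts like:
        {"test": str, "outcome": "PASS" | "FAIL",
         "os": str, "flavor": str, "timestamp": datetime}
    presumed to come from a results database query.
    """
    cutoff = datetime.utcnow() - timedelta(hours=window_hours)
    for result in recent_results:
        if (result["test"] == test_name
                and result["outcome"] == "FAIL"
                and result["os"] == config["os"]
                and result["flavor"] == config["flavor"]
                and result["timestamp"] >= cutoff):
            return True
    return False

def should_run_clean_tree(failed_tests, config, recent_results):
    # Fall back to the existing retry/clean-tree-run logic only if at
    # least one failure is new, i.e. not already visible in the
    # results database for a matching configuration.
    return any(not is_known_failure(test, config, recent_results)
               for test in failed_tests)
```

With this shape, an EWS run whose failures are all pre-existing would report results immediately, while genuinely new failures would still trigger the clean-tree retry.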