Created attachment 134907 [details]
screenshot of layout test dashboard

http://build.chromium.org/p/chromium.webkit/builders/Webkit%20Win7/builds/14713/steps/webkit_tests/logs/stdio corresponds to the random green box all the way to the right in the screenshot. The test wasn't run, but it shows up as a PASS on the test results dashboard. My guess is that summarize results doesn't correctly handle tests that weren't run.

Although, now that I think about it, if the tests don't complete, we just shouldn't upload the results to appengine at all. Getting data for half a run is not very useful. If we really wanted to upload the data, then I think we should treat tests that weren't run the same way we treat skipped tests.
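To make the suspected failure mode concrete, here is a minimal sketch; summarize_results is used here as a stand-in for whatever builds the uploaded summary, and all the names and result strings are assumptions for illustration, not the actual webkitpy code:

    # Hypothetical sketch: if the summary only has entries for tests
    # that actually produced a result, anything that was never run
    # either gets no entry (and the dashboard may default it to PASS)
    # or needs an explicit marker.
    def summarize_results(all_tests, results_by_test):
        summary = {}
        for test in all_tests:
            if test in results_by_test:
                summary[test] = results_by_test[test]
            else:
                # Never run; record it as skipped rather than letting
                # it fall through as an implicit PASS.
                summary[test] = 'SKIP'
        return summary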
Curiously, also in that screenshot is http://build.chromium.org/p/chromium.webkit/builders/Webkit%20Mac10.6/builds/14272 on the Mac 10.6 bot. That run completed, but it didn't run the fast/regions test and showed it as passing. Not sure what the issue is there, but it's definitely a bug (e.g. if we didn't run a test because it was skipped, we shouldn't show it as a PASS).
I don't actually know how we treat skipped tests ... I think treating tests that aren't run the same as SKIPs is certainly better than treating them as PASS. However, I'm wondering if we should have some other state to indicate 'skipped due to aborted run' or something.
(In reply to comment #2)
> I don't actually know how we treat skipped tests ... I think treating tests that aren't run the same as SKIPs is certainly better than treating them as PASS. However, I'm wondering if we should have some other state to indicate 'skipped due to aborted run' or something.

I thought about that. At first glance, it doesn't seem worth the extra complexity for such a rare occurrence.
The complexity I refer to is not the code so much as having yet another thing that people working with the dashboard need to understand.
(In reply to comment #3)
> (In reply to comment #2)
> > I don't actually know how we treat skipped tests ... I think treating tests that aren't run the same as SKIPs is certainly better than treating them as PASS. However, I'm wondering if we should have some other state to indicate 'skipped due to aborted run' or something.
>
> I thought about that. At first glance, it doesn't seem worth the extra complexity for such a rare occurrence ...
> The complexity I refer to is not the code so much as having yet another thing that people working with the dashboard need to understand.

I agree it's probably not worth the complexity in the code. Given that, the user-facing concern becomes moot. I'm not sure I agree with that part on its own, but I wouldn't argue much one way or the other.
(In reply to comment #1)
> Curiously, also in that screenshot is http://build.chromium.org/p/chromium.webkit/builders/Webkit%20Mac10.6/builds/14272 on the Mac 10.6 bot. That run completed, but it didn't run the fast/regions test and showed it as passing. Not sure what the issue is there, but it's definitely a bug (e.g. if we didn't run a test because it was skipped, we shouldn't show it as a PASS).

Never mind. This is the same issue. Instead of the buildbot harness killing it, NRWT is itself bailing early because of too many crashes.
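For anyone unfamiliar with that behavior, here is a hedged sketch of the kind of exit-early policy being described; the threshold, the run_single_test callable, and the result strings are all made up for illustration and are not NRWT's actual implementation:

    MAX_CRASHES = 20  # assumed limit for illustration, not NRWT's real value

    def run_tests(tests, run_single_test):
        # run_single_test is a caller-supplied callable mapping a test
        # name to a result string such as 'PASS' or 'CRASH'.
        crashes = 0
        results = {}
        for test in tests:
            result = run_single_test(test)
            results[test] = result
            if result == 'CRASH':
                crashes += 1
                if crashes > MAX_CRASHES:
                    # Bail out early: every test after this point is
                    # never run, and must not be reported as PASS
                    # downstream.
                    break
        return results

This is why a run can look "complete" to the buildbot harness while still leaving part of the test list unexecuted.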
Thinking about this more, I think we *should* upload the results, but treat all the tests we didn't run as skipped.
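Concretely, that could be as small as folding the unrun set into the skipped set before the summary is generated and uploaded; again, the names here are hypothetical, not the actual patch:

    def mark_unrun_as_skipped(all_tests, results_by_test, skipped):
        # Any test with no recorded result was never run; report it the
        # same way we report skipped tests instead of dropping the upload.
        unrun = set(all_tests) - set(results_by_test)
        return set(skipped) | unrun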
Created attachment 134977 [details]
Patch
Comment on attachment 134977 [details]
Patch

Looks reasonable.
Committed r112805: <http://trac.webkit.org/changeset/112805>
This doesn't appear to have actually fixed things. Reopening to take a look.
Created attachment 136606 [details]
Patch
Committed r113813: <http://trac.webkit.org/changeset/113813>