ASSIGNED 222964
LayoutTests/webgl/2.0.0/** do not use WebGL 2 context if the test supports WebGL and WebGL2
https://bugs.webkit.org/show_bug.cgi?id=222964
Kimmo Kinnunen
Reported 2021-03-09 00:39:02 PST
LayoutTests/webgl/2.0.0/** do not use WebGL 2 context if the test supports WebGL and WebGL2

The conformance test suite runner passes webglVersion=2 to the test; the layout test runner HTML files do not.
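For context, this is roughly the mechanism in question. The sketch below is illustrative only: the helper name and fallback behavior are assumptions for this example, not the actual Khronos or WebKit harness code. It shows how a test page can pick the context type from a webglVersion query parameter.

    // Illustrative sketch only: select the context type from the page URL.
    // The "webglVersion" parameter name is the one mentioned in this bug; the
    // helper itself is hypothetical, not the actual harness code.
    function createContextForTest(canvas: HTMLCanvasElement): WebGLRenderingContext | WebGL2RenderingContext | null {
        const params = new URLSearchParams(window.location.search);
        // Without webglVersion=2 in the query string, the test silently falls
        // back to WebGL 1, which is what the layout test runner files cause today.
        if (params.get("webglVersion") === "2") {
            return canvas.getContext("webgl2");
        }
        return canvas.getContext("webgl");
    }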
Attachments
Patch (10.79 MB, patch)
2021-03-11 10:46 PST, Kimmo Kinnunen
no flags
Patch (8.29 MB, patch)
2021-03-12 03:28 PST, Kimmo Kinnunen
ews-feeder: commit-queue-
Patch (8.30 MB, patch)
2021-03-12 10:55 PST, Kimmo Kinnunen
ews-feeder: commit-queue-
Patch (8.30 MB, patch)
2021-03-14 06:35 PDT, Kimmo Kinnunen
ews-feeder: commit-queue-
Patch (8.30 MB, patch)
2021-03-14 11:31 PDT, Kimmo Kinnunen
ews-feeder: commit-queue-
Patch (39.25 MB, patch)
2021-03-16 11:57 PDT, Kimmo Kinnunen
no flags
Patch (39.22 MB, patch)
2021-03-17 05:17 PDT, Kimmo Kinnunen
no flags
Kimmo Kinnunen
Comment 1 2021-03-11 10:46:35 PST
Kimmo Kinnunen
Comment 2 2021-03-12 03:28:11 PST
Kimmo Kinnunen
Comment 3 2021-03-12 10:55:30 PST
Kimmo Kinnunen
Comment 4 2021-03-14 06:35:41 PDT
Kimmo Kinnunen
Comment 5 2021-03-14 11:31:30 PDT
Radar WebKit Bug Importer
Comment 6 2021-03-16 01:39:54 PDT
Kimmo Kinnunen
Comment 7 2021-03-16 11:57:02 PDT
Kenneth Russell
Comment 8 2021-03-16 14:42:26 PDT
There were a couple of questions about tests that seemed deleted, like:

webgl/2.0.0/conformance/canvas/render-after-resize-test.html
webgl/2.0.0/conformance/context/constants-and-properties.html

I don't think these were removed from the upstream conformance snapshot; rather, during development of WebKit's WebGL 2.0 implementation, these tests were copied from the top-of-tree conformance suite into WebKit's snapshot, which was versioned "2.0.0".

Newly marked failures of tests like expando-loss.html are concerning. Do you know why those are happening? Have the tests been expanded upstream, for example?

Mega-patches like this are really hard to review. If you can provide some assurances that you understand all of the changes in test passes/failures, and/or file bugs about each of the new failures, that would make it easier to approve.
Kimmo Kinnunen
Comment 9 2021-03-17 05:17:44 PDT
Kimmo Kinnunen
Comment 10 2021-03-17 05:23:26 PDT
(In reply to Kenneth Russell from comment #8)
> There were a couple of questions about tests that seemed deleted, like:
>
> webgl/2.0.0/conformance/canvas/render-after-resize-test.html
> webgl/2.0.0/conformance/context/constants-and-properties.html
>
> I don't think these were removed from the upstream conformance
> snapshot; rather, during development of WebKit's WebGL 2.0
> implementation, these tests were copied from the top-of-tree
> conformance suite into WebKit's snapshot, which was versioned "2.0.0".

Changed the wording in the ChangeLog.

> Newly marked failures of tests like expando-loss.html are concerning.
> Do you know why those are happening? Have the tests been expanded
> upstream, for example?

The test was not run in WebGL 2; it was run in WebGL 1. Maybe I broke it, not knowing the tests don't test what they're supposed to test? Maybe it worked before (it works with release Safari).

Other reasons for differences in other cases:
- The test is skipped erroneously
- The test is missing
- The test is run in WebGL 1

> Mega-patches like this are really hard to review.

Yeah.. I don't know how else to do this, though.

> If you can provide
> some assurances that you understand all of the changes in test
> passes/failures, and/or file bugs about each of the new failures, that
> would make it easier to approve.

Filed bugs as dependencies of the webgl2conformance meta bug and referred to these in TestExpectations.
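For readers unfamiliar with the mechanism: a TestExpectations suppression of the kind referred to above generally takes the form sketched below. The bug number and test path here are placeholders for illustration, not the entries actually added in the patch.

    # Hypothetical example entry; bug number and test path are placeholders.
    webkit.org/b/XXXXXX webgl/2.0.0/conformance2/some-test.html [ Failure ]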
David Kilzer (:ddkilzer)
Comment 11 2021-03-17 11:20:19 PDT
(In reply to Kimmo Kinnunen from comment #10)
> (In reply to Kenneth Russell from comment #8)
> > Mega-patches like this are really hard to review.
>
> Yeah.. I don't know how else to do this, though.

Some ideas (without looking closely at the mega-patch):

- Only post code changes (plus a description of test changes).
- Figure out how to remove all the "removed file" patches and just provide a list of removed files.
Kenneth Russell
Comment 12 2021-03-17 15:21:33 PDT
Understood, Kimmo, that these large updates are difficult. Good suggestions, David.

Kimmo, please ping for explicit review if the EWS runs are clean, since I don't think emails are sent if they all pass, only if some fail. Thanks.
Kimmo Kinnunen
Comment 13 2021-03-18 00:03:04 PDT
(In reply to Kenneth Russell from comment #12)
> Understood, Kimmo, that these large updates are difficult. Good suggestions, David.
>
> Kimmo, please ping for explicit review if the EWS runs are clean, since I
> don't think emails are sent if they all pass, only if some fail. Thanks.

Yeah, no.. The idea was to review this and then get it landed out-of-band. The infrastructure fails in various places; the bots are of no help. Maybe I'll then spend even some more time to get the files in place in some other way..
Kimmo Kinnunen
Comment 14 2021-03-18 00:12:34 PDT
(In reply to David Kilzer (:ddkilzer) from comment #11)
> (In reply to Kimmo Kinnunen from comment #10)
> > (In reply to Kenneth Russell from comment #8)
> > > Mega-patches like this are really hard to review.
> >
> > Yeah.. I don't know how else to do this, though.
>
> Some ideas (without looking closely at the mega-patch):
>
> - Only post code changes (plus a description of test changes).

I don't understand this. Depending on the philosophical viewpoint, I think the options are:
a) all the changes are code changes (barring a few images and videos and such)
b) no changes are code changes (the changes are reviewed elsewhere and this is just a data update)

In this update, I don't try to do any code change of my own unique invention.

> - Figure out how to remove all the "removed file" patches and just provide a
> list of removed files.

Git provides options to prune the deleted content, but that is no use, since then the patch cannot apply and as such cannot land after a green EWS run.

None of these really help the task at hand. The objective is to a) review and b) land. The review seems to be equal to "bots == green" (which shouldn't need a reviewer to inspect and then rs, but I digress). So the only objective is to get the patch through the bots. So the suggestions should not be about how to massage the data, unless that affects the primary problem. I think the tooling in WebKit will break no matter what, and as such there is never going to be a massage that gets this done.

I don't think this is progressing. I'll try to figure out something else.
Alexey Proskuryakov
Comment 15 2021-03-18 10:38:37 PDT
I also don't understand what the blocker is. Doesn't this just make some tests function as intended, and align expectations with reality? This seems like a clear improvement.
Kenneth Russell
Comment 16 2021-03-18 13:44:31 PDT
(In reply to Alexey Proskuryakov from comment #15)
> I also don't understand what the blocker is. Doesn't this just make some
> tests function as intended, and align expectations with reality? This seems
> like a clear improvement.

It's not clear whether accidental new bugs in this test upgrade might have caused regressions in previously-passing tests. As Kimmo points out, bugs in the way some of the tests are currently run are either causing or masking failures.

The concern is that the deltas are too large to examine. When there's a huge influx of new failures all at once, it's impossible to triage them and understand whether they're real, caused by a test or configuration bug, etc.
Alexey Proskuryakov
Comment 17 2021-03-18 14:32:43 PDT
Is "previsously passing" a meaningful signal when we've been running the tests entirely incorrectly? And there is a separate WebGL 1 test suite that isn't modified here. I see this a lot more like importing a whole new test suite. > When there's a huge influx of new failures all at once, it's impossible to triage them and understand whether they're real, caused by a test or configuration bug, etc. Perhaps the disagreement is not as much about this change as about what will happen next. The change is valuable in that we now have a new test suite that we can defend against regressions, and that seems like an automatic r+. But if there is the intention to work through all the failures, that's even better, and asking to make this process easier is fair. My question is, is there actually anything of this kind that cannot be done after landing just as efficiently? Is any relevant information lost once this is landed?
Kenneth Russell
Comment 18 2021-03-18 16:18:46 PDT
(In reply to Alexey Proskuryakov from comment #17)
> My question is, is there actually anything of this kind that cannot be done
> after landing just as efficiently? Is any relevant information lost once
> this is landed?

A curated list of tests that were passing in the previous test suite and failing in the new suite would be valuable. Those should plausibly be investigated with the most urgency to understand whether they're real regressions. If there are bugs in how we're running the new test suite and those are the cause of the new failures, we should ideally track those down and fix them while landing the new test suite.

I fully trust Kimmo to follow up well on these new test failures, and plan to contribute to burning down the failure list.

Another significant concern is that the large number of new failures that are now suppressed will allow significant regressions to the core implementation until more of the tests are passing fully.
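One way to produce such a curated list is to diff the old and new failure lists. The sketch below is illustrative only and is not part of the patch; the input file names are assumptions made for the example.

    // Illustrative sketch: compute the "was passing, now failing" list by
    // diffing two plain-text files of failing test paths (one path per line).
    import { readFileSync } from "fs";

    function readSet(path: string): Set<string> {
        return new Set(
            readFileSync(path, "utf8")
                .split("\n")
                .map((line) => line.trim())
                .filter((line) => line.length > 0)
        );
    }

    // Assumed input files, not real artifacts from this bug.
    const oldFailures = readSet("old-suite-failures.txt");
    const newFailures = readSet("new-suite-failures.txt");

    // Tests that fail now but did not fail before: the first candidates to triage.
    const regressions = [...newFailures].filter((test) => !oldFailures.has(test));
    regressions.sort().forEach((test) => console.log(test));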
Kenneth Russell
Comment 19 2021-03-19 14:38:17 PDT
Just a note that it looks like there has been a significant recent regression in the video-related conformance tests in https://github.com/KhronosGroup/WebGL . It was discovered while pulling the latest version of the tests into Chromium: https://chromium-review.googlesource.com/2775887 . We're working on a fix, but you might want to hold off refreshing the conformance suite in WebKit until it lands, or manually choose an older commit to sync to.

It's for reasons like this that massive test suite upgrades which introduce lots of unreviewable new failures are undesirable.
David Kilzer (:ddkilzer)
Comment 20 2021-05-01 10:39:00 PDT
Is it safe to land this yet?
Kimmo Kinnunen
Comment 21 2021-05-03 05:47:58 PDT
It would be sort of safe; potentially some fixes and skips are needed. Landing it is pending a policy decision on when we want to do it...