Bug 223019
| Summary: | TestExpectations should be able to set an expectation based on GPU model | | |
| --- | --- | --- | --- |
| Product: | WebKit | Reporter: | Kimmo Kinnunen <kkinnunen> |
| Component: | Tools / Tests | Assignee: | Nobody <webkit-unassigned> |
| Status: | NEW | | |
| Severity: | Normal | CC: | ap, gsnedders, jbedard, webkit-bug-importer |
| Priority: | P2 | Keywords: | InRadar |
| Version: | Other | | |
| Hardware: | Unspecified | | |
| OS: | All | | |
| Bug Depends on: | | | |
| Bug Blocks: | 243818 | | |
Kimmo Kinnunen
TestExpectations should be able to set an expectation based on GPU model
The current practice is to list entries like the following:

```
# webkit.org/b/214763 regression on some hardware
webgl/2.0.0/conformance2/textures/misc/tex-unpack-params.html [ Pass Failure ]
```
Disabling a test due to "regression on some hardware" prevents meaningful progress toward consistent test results.
I'd like to be able to write, for example, the following:

```
[ Catalina GpuIntelI600 ] webkit.org/b/214763 webgl/2.0.0/conformance2/textures/misc/tex-unpack-params.html [ Failure ]
```
Kimmo Kinnunen
If I were to propose a patch that parsed output of, say, `system_profiler SPDisplaysDataType`, where would I put the code + the tests around webkitpy? Would this be welcome?
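For illustration, a minimal sketch of how such a probe might read the XML plist that `system_profiler SPDisplaysDataType -xml` prints. The helper name and its eventual location in webkitpy are hypothetical; `sppci_model` is, to my understanding, the key system_profiler uses for the GPU model string, but the schema should be verified against real output:

```python
import plistlib

# Hypothetical helper: extract GPU model names from the XML plist emitted by
# `system_profiler SPDisplaysDataType -xml` (one top-level dict per data type).
def parse_gpu_models(plist_bytes):
    data = plistlib.loads(plist_bytes)
    models = []
    for data_type in data:
        for item in data_type.get('_items', []):
            model = item.get('sppci_model')  # GPU model string, if present
            if model:
                models.append(model)
    return models

# Heavily trimmed sample output, for demonstration only:
SAMPLE = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><array><dict>
  <key>_dataType</key><string>SPDisplaysDataType</string>
  <key>_items</key>
  <array><dict>
    <key>sppci_model</key><string>Intel Iris Plus Graphics 640</string>
  </dict></array>
</dict></array></plist>
"""

print(parse_gpu_models(SAMPLE))  # ['Intel Iris Plus Graphics 640']
```

In a real port implementation, the bytes would come from running the `system_profiler` command as a subprocess rather than from an inline sample.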
Jonathan Bedard
This is something one would think is simple, but it's more involved and not very generalizable. The place to start is Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py.
The trouble with this change is that the way it really should work is that each test run generates a set of tags describing its run, and any string that is not PASS/FAIL/TIMEOUT/CRASH/SKIP/etc. is considered an attribute tag. That refactor is on my to-do list, but it's not high priority at the moment.
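A rough sketch of the tag scheme described above (the names are illustrative, not the actual webkitpy API): modifiers that are not result keywords become attribute tags, and an expectation line applies only when all of its tags appear in the set of tags describing the current run.

```python
# Result keywords; any other modifier in the bracketed list is treated as an
# attribute tag (port, OS version, GPU model, ...). Illustrative only.
RESULT_KEYWORDS = {'Pass', 'Failure', 'Timeout', 'Crash', 'Skip'}

def expectation_applies(modifiers, run_tags):
    """True if every non-result modifier is among the tags describing this run."""
    attribute_tags = {m for m in modifiers if m not in RESULT_KEYWORDS}
    return attribute_tags <= set(run_tags)

# A Catalina run tagged with the (hypothetical) GpuIntelI600 tag matches the
# proposed expectation line; a run on different hardware does not.
line = {'Catalina', 'GpuIntelI600', 'Failure'}
print(expectation_applies(line, {'Catalina', 'GpuIntelI600', 'x86_64'}))  # True
print(expectation_applies(line, {'Catalina', 'GpuAmd5500'}))              # False
```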
I might consider seeing if we can get better test coverage by, for example, running the affected tests only on Intel, iOS Simulator, or Apple Silicon, even though those differentiators are not, strictly speaking, accurate. It's a bit hacky, but may be a better use of engineering time than doing things the "right" way.
Sam Sneddon [:gsnedders]
How widespread do we think this is going to be?
Alexey Proskuryakov
Quite a few WebGL tests were found to have GPU specific results last year.
I think that this would be welcome, but it's not straightforward to design. Sometimes it's just "Intel vs. AMD", other times it's "all older Intel GPUs", other times it's "all newer Intel GPUs".
Kimmo Kinnunen
(In reply to Alexey Proskuryakov from comment #4)
> Quite a few WebGL tests were found to have GPU specific results last year.
Today, we have false positives nagging on every run because of expectations kept internal. It is broken almost every day, on almost every device except those that run the bots. I don't know how often the bots break, but judging from the TestExpectations, they have done plenty of that too.
> I think that this would be welcome, but it's not straightforward to design.
> Sometimes it's just "Intel vs. AMD", other times it's "all older Intel
> GPUs", other times it's "all newer Intel GPUs".
Sure, but I'm not interested in covering *all* possible ways this could be factored. Primarily, I'm interested in getting it factored in some way for starters.
For example, I don't see much practical value in a system like "I want to specify Intel vs. AMD". That doesn't help in practice, and rarely works, in my experience.
I'm interested in these working:
* "I want to specify this particular GPU model used in our bot system this moment in time"
* "I want to specify this particular GPU model I know 70% of the developers use"
This is relatively straightforward to get working. It has been implemented in other projects, such as Chromium and ANGLE, for ages.
Alexey Proskuryakov
Yes, it will be welcome. Expect some bikeshedding in review, of course!
Radar WebKit Bug Importer
<rdar://problem/75515886>