WebKit Bugzilla
Bug 180420: WebDriver: add support for test expectations
Status: RESOLVED FIXED
https://bugs.webkit.org/show_bug.cgi?id=180420
Carlos Garcia Campos
Reported
2017-12-05 04:46:55 PST
Make it possible to set expectations for WebDriver tests.
Attachments

Patch (36.44 KB, patch), 2017-12-05 04:59 PST, Carlos Garcia Campos, no flags
Try to fix webkitpy EWS (37.61 KB, patch), 2017-12-05 06:11 PST, Carlos Garcia Campos, no flags
New patch (45.29 KB, patch), 2017-12-14 05:36 PST, Carlos Garcia Campos, no flags
New patch (45.29 KB, patch), 2017-12-14 06:39 PST, Carlos Garcia Campos, no flags
Updated patch (36.97 KB, patch), 2018-01-15 02:51 PST, Carlos Garcia Campos, clopez: review+
Carlos Garcia Campos
Comment 1
2017-12-05 04:59:55 PST
Created
attachment 328445
[details]
Patch
Carlos Garcia Campos
Comment 2
2017-12-05 06:11:26 PST
Created
attachment 328446
[details]
Try to fix webkitpy EWS

We need to autoinstall py too, because pytest needs it.
Blaze Burg
Comment 3
2017-12-08 13:59:40 PST
Comment on
attachment 328446
[details]
Try to fix webkitpy EWS

Hi Carlos. In our internal test suite for safaridriver, I modified pytest-expect (by gsnedders) to read and write LayoutTest-style expectations files. This lets you mark a test as HANG, SKIP, or FAIL, and it applies the appropriate pytest annotation to each test case, so it works on a per-test basis. Would you be interested in taking that approach instead? I'm happy to send you a patch.
Carlos Garcia Campos
Comment 4
2017-12-09 02:25:03 PST
I didn't know about pytest-expect; I thought it was not possible to add test marks without actually updating the test code. I didn't want to use the layout tests expectations format, because I think it's too complex for WebDriver tests, so I added a format that is easy to write and read, and even easier to parse, with all the information in a single file (the chain of TestExpectations files in layout tests is a mess we don't need here at all). But if you already have code to reuse the layout tests format and the solution is better than mine, then I'm fine with adopting it. Note that this format also applies on a per-test basis, except for SKIP, which we should only use when something is not implemented and none of the tests will pass.
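(For illustration: the single-file expectations format described above is not reproduced in this bug, so the JSON schema below is purely hypothetical, a minimal sketch of one possible layout in which a file-level "expected" entry covers the whole-file SKIP case and per-test entries cover everything else.)

```python
import json

# Hypothetical single-file expectations layout; all paths, test names,
# and keys here are illustrative, not the schema the patch actually uses.
EXAMPLE = """
{
    "sessions/new_session/response.py": {
        "test_resp_capabilities": {"expected": "FAIL"}
    },
    "actions/key.py": {
        "expected": "SKIP"
    }
}
"""

def expectation_for(expectations, test_path, test_name):
    """Return the expected result for a single test, defaulting to PASS.

    A file-level "expected" entry applies to every test in the file;
    otherwise the per-test entry (if any) is consulted.
    """
    entry = expectations.get(test_path, {})
    if "expected" in entry:
        return entry["expected"]
    return entry.get(test_name, {}).get("expected", "PASS")

expectations = json.loads(EXAMPLE)
```

With this layout, `expectation_for(expectations, "actions/key.py", "test_anything")` returns "SKIP" for every test in the skipped file, while unknown tests default to "PASS".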
Carlos Garcia Campos
Comment 5
2017-12-13 23:06:57 PST
I've taken a look at pytest-expect and I have a few comments:

- It could be a good idea to mark tests individually on collection no matter what format we use for the expectations file. I personally prefer a simpler format than layout tests expectations, with all the expectations in a single file.

- I don't think we need to fork and modify any third-party plugin (pytest-expect in this case). Most of its code is actually for parsing and writing expectations in a file format we don't want. So we just need our own plugin that uses pytest_collection_modifyitems to mark tests on collection.

- However, it's not currently possible to use this approach for the w3c tests, because wptrunner records XPASS as PASSED and SKIPPED as ERROR ("In-test skip decorators are disallowed, please use WPT metadata to ignore tests."). So we would need to use our own recording plugins (or override the wptrunner one) and call pytest ourselves, like we do for the selenium tests.
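(For illustration: the pytest_collection_modifyitems approach proposed above can be sketched as a small conftest.py plugin. The expectations dicts below are stand-ins for whatever the real expectations-file loader would return; the names are invented.)

```python
import pytest

# Illustrative stand-ins for the parsed expectations file.
EXPECTED_FAILURES = {"test_get_title": "FAIL", "test_fullscreen": "TIMEOUT"}
EXPECTED_SKIPS = {"test_not_implemented"}

def marker_for(test_name):
    """Pure helper: decide which pytest marker (if any) a test needs."""
    if test_name in EXPECTED_FAILURES:
        # xfail (rather than skip) keeps the test running, so an
        # unexpected pass is reported as XPASS instead of being hidden.
        return pytest.mark.xfail(reason="expected %s" % EXPECTED_FAILURES[test_name])
    if test_name in EXPECTED_SKIPS:
        return pytest.mark.skip(reason="not implemented")
    return None

def pytest_collection_modifyitems(config, items):
    # Called once after collection; marks each collected test item.
    for item in items:
        marker = marker_for(item.name)
        if marker is not None:
            item.add_marker(marker)
```

Because the marks are applied at collection time, the test files themselves never need in-test decorators, which is exactly the property wptrunner's restriction makes necessary.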
Carlos Garcia Campos
Comment 6
2017-12-14 03:30:53 PST
I've realized that we will eventually have to use pytest directly for the w3c tests, to support timeout as a test result, so I've moved the selenium code to a common place to be used by the w3c tests as well. See
bug #180800
.
Carlos Garcia Campos
Comment 7
2017-12-14 05:36:13 PST
Created
attachment 329347
[details]
New patch

This now applies on top of the patch attached to
bug #180800
. It still uses the json file for the expectations, but marks tests instead of comparing the results.
Carlos Garcia Campos
Comment 8
2017-12-14 06:39:58 PST
Created
attachment 329351
[details]
New patch

Same patch for EWS; it should apply now.
Carlos Garcia Campos
Comment 9
2018-01-08 07:19:14 PST
Ping Brian :-)
Carlos Garcia Campos
Comment 10
2018-01-15 02:51:56 PST
Created
attachment 331326
[details]
Updated patch

Simplified, since we now share the test expectations class with the unit tests.
Carlos Alberto Lopez Perez
Comment 11
2018-01-16 03:37:36 PST
Comment on
attachment 331326
[details]
Updated patch

I'm not a fan of non-plain-text formats (JSON/XML/etc.) for files that are supposed to be edited directly by humans. It would be a good idea to add a pytest check that at least verifies that the expectations.json file is valid, so the script won't break when trying to load it.
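(For illustration: the validity check suggested above could be a small linter run as a unit test. The schema it enforces is hypothetical, matching the illustrative JSON layout sketched earlier, not necessarily the one the patch lands.)

```python
import json

# The set of results is taken from the expectations discussed in this bug.
ALLOWED_EXPECTATIONS = {"PASS", "FAIL", "TIMEOUT", "SKIP"}

def check_expectations(text):
    """Validate an expectations file, returning a list of error strings.

    An empty list means the file parsed and every "expected" value is known,
    so the test runner won't break when loading it.
    """
    try:
        data = json.loads(text)
    except ValueError as error:
        return ["not valid JSON: %s" % error]
    if not isinstance(data, dict):
        return ["top-level value must be an object"]
    errors = []
    for test, entry in data.items():
        expected = entry.get("expected") if isinstance(entry, dict) else None
        if expected is not None and expected not in ALLOWED_EXPECTATIONS:
            errors.append("%s: unknown expectation %r" % (test, expected))
    return errors
```

Wrapping this in a test (e.g. asserting `check_expectations(open(path).read()) == []`) would catch a malformed expectations.json at review time instead of at run time.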
Blaze Burg
Comment 12
2018-01-16 08:49:58 PST
(In reply to Carlos Garcia Campos from
comment #5
)
> I've taken a look at pytest-expect and I have a few comments:
>
> - It could be a good idea to mark tests individually on collection no matter what format we use for the expectations file. I personally prefer to use a simpler format than layout tests expectations, with all the expectations in a single file.
>
> - I don't think we need to fork and modify any third-party plugin, pytest-expect in this case. Most of the code is actually to parse and write expectations in a file format we don't want. So, we just need our own plugin using pytest_collection_modifyitems to mark tests on collection.
>
> - However, it's not currently possible to use this approach for w3c tests, because wptrunner records XPASS as PASSED and SKIPPED as ERROR ("In-test skip decorators are disallowed, please use WPT metadata to ignore tests."). So, we would need to use our own record plugins (or override the wptrunner one) and call pytest ourselves like we do for the selenium tests.
I agree on your points. I'm currently pursuing the last task myself, with mixed results so far. I'll let you know if I figure it all out!
Carlos Garcia Campos
Comment 13
2018-01-16 22:36:32 PST
(In reply to Brian Burg from
comment #12
)
> (In reply to Carlos Garcia Campos from
comment #5
)
> I agree on your points. I'm currently pursuing the last task myself, with mixed results so far. I'll let you know if I figure it all out!
We no longer use wpt pytestrunner since
r225902
, now we use our own, which allowed us to properly mark and handle expectations, including skipped subtests, and simplified the whole thing, because we also use it for the selenium tests.
Carlos Garcia Campos
Comment 14
2018-01-17 03:24:11 PST
Committed r227064: <https://trac.webkit.org/changeset/227064>
Radar WebKit Bug Importer
Comment 15
2018-01-17 03:31:11 PST
<rdar://problem/36579076>