Bug 180420 - WebDriver: add support for test expectations
Summary: WebDriver: add support for test expectations
Status: RESOLVED FIXED
Alias: None
Product: WebKit
Classification: Unclassified
Component: WebDriver
Version: WebKit Nightly Build
Hardware: Unspecified
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords: InRadar
Depends on: 180800
Blocks:
Reported: 2017-12-05 04:46 PST by Carlos Garcia Campos
Modified: 2018-01-17 03:31 PST
CC: 6 users

See Also:


Attachments
Patch (36.44 KB, patch)
2017-12-05 04:59 PST, Carlos Garcia Campos
no flags
Try to fix webkitpy EWS (37.61 KB, patch)
2017-12-05 06:11 PST, Carlos Garcia Campos
no flags
New patch (45.29 KB, patch)
2017-12-14 05:36 PST, Carlos Garcia Campos
no flags
New patch (45.29 KB, patch)
2017-12-14 06:39 PST, Carlos Garcia Campos
no flags
Updated patch (36.97 KB, patch)
2018-01-15 02:51 PST, Carlos Garcia Campos
clopez: review+

Description Carlos Garcia Campos 2017-12-05 04:46:55 PST
Make it possible to set expectations for WebDriver tests.
Comment 1 Carlos Garcia Campos 2017-12-05 04:59:55 PST
Created attachment 328445 [details]
Patch
Comment 2 Carlos Garcia Campos 2017-12-05 06:11:26 PST
Created attachment 328446 [details]
Try to fix webkitpy EWS

We need to autoinstall the py package too, because pytest depends on it.
Comment 3 BJ Burg 2017-12-08 13:59:40 PST
Comment on attachment 328446 [details]
Try to fix webkitpy EWS

Hi Carlos. In our internal test suite for safaridriver, I modified pytest-expect (by gsnedders) to read and write LayoutTest-style expectations files. This lets you mark a test as HANG, SKIP, or FAIL, and it applies the appropriate pytest annotation to each test case. So it works on a per-test basis. Would you be interested in taking that approach instead? I'm happy to send you a patch.
Comment 4 Carlos Garcia Campos 2017-12-09 02:25:03 PST
I didn't know about pytest-expect; I thought it was not possible to add test marks without actually updating the test code. I didn't want to use the layout tests expectations format, because I think it's too complex for WebDriver tests, so I added a format that is easy to write and read, and even easier to parse, with all the information in a single file (the chain of TestExpectations files in layout tests is a mess we don't need here at all). But if you already have code to reuse the layout tests format and the solution is better than mine, then I'm fine adopting it. Note that this format also applies on a per-test basis, except for skip, which we should only use when something is not implemented and none of the tests will pass.
Comment 5 Carlos Garcia Campos 2017-12-13 23:06:57 PST
I've taken a look at pytest-expect and I have a few comments:

 - It could be a good idea to mark tests individually on collection, regardless of the format we use for the expectations file. I personally prefer a simpler format than layout tests expectations, with all the expectations in a single file.

 - I don't think we need to fork and modify any third-party plugin, pytest-expect in this case. Most of the code is actually to parse and write expectations in a file format we don't want. So, we just need our own plugin using pytest_collection_modifyitems to mark tests on collection.

 - However, it's not currently possible to use this approach for w3c tests, because wptrunner records XPASS as PASSED and SKIPPED as ERROR ("In-test skip decorators are disallowed, please use WPT metadata to ignore tests."). So, we would need to use our own record plugins (or override the wptrunner one) and call pytest ourselves like we do for the selenium tests.
Comment 6 Carlos Garcia Campos 2017-12-14 03:30:53 PST
I've realized that we will have to use pytest directly for w3c tests eventually to support timeout as test result, so I've moved the selenium code to a common place to be used also by w3c tests. See bug #180800.
Comment 7 Carlos Garcia Campos 2017-12-14 05:36:13 PST
Created attachment 329347 [details]
New patch

This now applies on top of the patch attached to bug #180800. It still uses the JSON file for the expectations, but it marks tests instead of comparing the results.
Comment 8 Carlos Garcia Campos 2017-12-14 06:39:58 PST
Created attachment 329351 [details]
New patch

Same patch, rebased for EWS; it should apply now.
Comment 9 Carlos Garcia Campos 2018-01-08 07:19:14 PST
Ping Brian :-)
Comment 10 Carlos Garcia Campos 2018-01-15 02:51:56 PST
Created attachment 331326 [details]
Updated patch

Simplified since we now share the test expectations class with the unit tests.
Comment 11 Carlos Alberto Lopez Perez 2018-01-16 03:37:36 PST
Comment on attachment 331326 [details]
Updated patch

I'm not a fan of non-plain-text formats (JSON/XML/etc.) for files that are supposed to be edited directly by humans. It would be a good idea to add at least a test that checks that the expectations.json file is valid, so the script won't break when trying to load it.
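The sanity check suggested here could look something like the following. This is a hypothetical sketch: the function name, the path handling, and the requirement that the top-level value be a JSON object are assumptions, not the actual WebKit implementation.

```python
# Hypothetical sketch: fail early with a clear message if an
# expectations.json file is malformed, instead of crashing later.
import json


def load_expectations_file(path):
    try:
        with open(path) as fd:
            expectations = json.load(fd)
    except ValueError as error:
        # json.JSONDecodeError subclasses ValueError, so this catches
        # syntax errors from hand-edited files.
        raise ValueError("%s is not valid JSON: %s" % (path, error))
    if not isinstance(expectations, dict):
        raise ValueError("%s: top-level value must be a JSON object" % path)
    return expectations
```

A unit test calling this on a deliberately broken file would then catch typos in hand-edited expectations before the harness tries to use them.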
Comment 12 BJ Burg 2018-01-16 08:49:58 PST
(In reply to Carlos Garcia Campos from comment #5)
> I've taken a look at pytest-expect and I have a few comments:
> 
>  - It could be a good idea to mark tests individually on collection no
> matter what format we use for the expectations file. I personally prefer to
> use a simpler format than layout tests expectations, with all the
> expectations in a single file.
> 
>  - I don't think we need to fork and modify any third-party plugin,
> pytest-expect in this case. Most of the code is actually to parse and write
> expectations in a file format we don't want. So, we just need our own plugin
> using pytest_collection_modifyitems to mark tests on collection.
> 
>  - However, it's not currently possible to use this approach for w3c tests,
> because wptrunner records XPASS as PASSED and SKIPPED as ERROR ("In-test
> skip decorators are disallowed, please use WPT metadata to ignore tests.").
> So, we would need to use our own record plugins (or override the wptrunner
> one) and call pytest ourselves like we do for the selenium tests.

I agree on your points. I'm currently pursuing the last task myself, with mixed results so far. I'll let you know if I figure it all out!
Comment 13 Carlos Garcia Campos 2018-01-16 22:36:32 PST
(In reply to Brian Burg from comment #12)
> (In reply to Carlos Garcia Campos from comment #5)
> > [...]
> 
> I agree on your points. I'm currently pursuing the last task myself, with
> mixed results so far. I'll let you know if I figure it all out!

We no longer use the wpt pytest runner as of r225902; we now use our own, which allowed us to properly mark and handle expectations, including skipped subtests, and simplified the whole thing because we also use it for the selenium tests.
Comment 14 Carlos Garcia Campos 2018-01-17 03:24:11 PST
Committed r227064: <https://trac.webkit.org/changeset/227064>
Comment 15 Radar WebKit Bug Importer 2018-01-17 03:31:11 PST
<rdar://problem/36579076>