Bug 105003 - Whitelist a subset of tests to be run by run-perf-tests by default
Summary: Whitelist a subset of tests to be run by run-perf-tests by default
Status: NEW
Alias: None
Product: WebKit
Classification: Unclassified
Component: Tools / Tests
Version: 528+ (Nightly build)
Hardware: Unspecified
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords:
Depends on: 97510
Blocks: 77037
Reported: 2012-12-14 02:28 PST by Ryosuke Niwa
Modified: 2013-01-24 20:08 PST
CC List: 7 users

See Also:


Description Ryosuke Niwa 2012-12-14 02:28:03 PST
Right now, we run all tests in PerformanceTests by default, but this isn't really helpful because some tests generate results with very high variance, while others test such specific features of the browser that they aren't worth running as regression tests.

By whitelisting tests that are known to be good indicators of WebKit performance, we can start using these tests to see whether a patch causes a real regression, paving our way to eventually adding perf EWS bots.
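
A minimal sketch of what such a default whitelist might look like in run-perf-tests follows; the names DEFAULT_WHITELIST, should_run_by_default, and force_all, as well as the example test paths, are assumptions for illustration, not the actual webkitpy API:

import os

# Illustrative entries only; real candidates would come from the
# stability analysis discussed in the comments below.
DEFAULT_WHITELIST = frozenset([
    'Bindings/event-target-wrapper.html',
    'Parser/html5-full-render.html',
])

def should_run_by_default(test_path, force_all=False):
    # test_path is relative to the PerformanceTests directory.
    # force_all (e.g. via a hypothetical --all flag) bypasses the
    # whitelist so every test can still be run on demand.
    if force_all:
        return True
    return test_path.replace(os.sep, '/') in DEFAULT_WHITELIST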
Comment 1 Eric Seidel (no email) 2012-12-14 02:34:15 PST
Huzzah!
Comment 2 Zoltan Horvath 2012-12-14 10:12:37 PST
Sounds reasonable. It would be good to check the resulting code coverage after we have the whitelist.
Comment 3 Ryosuke Niwa 2012-12-18 01:21:48 PST
What should be the criteria for a test to be whitelisted?
Comment 4 Stephanie Lewis 2012-12-18 01:31:47 PST
When calculating a test's value, I usually look at reproducibility, coverage/sensitivity, external interest, and length of time to run/difficulty to set up.  If a test's results are not consistent, then tracking its progress creates a burden instead of being helpful.  If a test exercises some obscure technology or doesn't pick up major regressions in what it does test, it may not be valuable.  Tests that are run externally by the media are good to keep an eye on.  Length of time to run may not matter here, but if a test breaks a lot, it may also impose a burden.

I think a good first step would be figuring out which tests have less than a 2% difference over a significant number of runs of the same source and going from there.
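
As a sketch of that first step, one could check each test's repeated results with something like the following; the helper name, the sample numbers, and the interpretation of "difference" as relative spread, (max - min) / mean, are assumptions for illustration:

def is_stable(run_means, threshold=0.02):
    # True if repeated runs of the same test on identical source differ
    # by less than `threshold`, measured as (max - min) / mean.
    mean = sum(run_means) / len(run_means)
    if mean == 0:
        return False
    return (max(run_means) - min(run_means)) / mean < threshold

# Example: five runs of one test (e.g. ms per iteration); the spread
# here is about 0.9%, so the test would qualify.
print(is_stable([103.2, 104.1, 103.8, 104.0, 103.5]))  # True

A test would then need to pass this check over a significant number of runs before being whitelisted.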
Comment 5 Ryosuke Niwa 2012-12-18 14:56:30 PST
(In reply to comment #4)
> When calculating a test's value, I usually look at reproducibility, coverage/sensitivity, external interest, and length of time to run/difficulty to set up.  If a test's results are not consistent, then tracking its progress creates a burden instead of being helpful.  If a test exercises some obscure technology or doesn't pick up major regressions in what it does test, it may not be valuable.  Tests that are run externally by the media are good to keep an eye on.  Length of time to run may not matter here, but if a test breaks a lot, it may also impose a burden.

That sounds sensible... except that

> I think a good first step would be figuring out which tests have less than a 2% difference over a significant number of runs of the same source and going from there.

Almost all tests have more than 2% variance :( We should probably fix https://bugs.webkit.org/show_bug.cgi?id=97510 first, then.