Bug 105003

Summary: Whitelist a subset of tests to be run by run-perf-tests by default
Product: WebKit
Reporter: Ryosuke Niwa <rniwa>
Component: Tools / Tests
Assignee: Nobody <webkit-unassigned>
Status: NEW
Severity: Normal
CC: barraclough, eric, haraken, mjs, slewis, syoichi, zoltan
Priority: P2
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Bug Depends on: 97510
Bug Blocks: 77037

Description Ryosuke Niwa 2012-12-14 02:28:03 PST
Right now, we run all tests in PerformanceTests by default, but this isn't really helpful: some tests generate results with very high variance, while others test such specific browser features that they're not worth running as regression tests.

By whitelisting tests that are known to be good indicators of WebKit performance, we can start using them to see whether a patch causes a real regression, paving the way to eventually adding perf EWS bots.
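(For illustration only: a minimal sketch of how run-perf-tests could filter discovered tests against a whitelist file. The file format, the "TestWhitelist" name, and both helpers are hypothetical, not the actual webkitpy tooling.)

```python
# Hypothetical sketch: filter discovered perf tests against a whitelist file.
# The "TestWhitelist" file name and these helpers are illustrative only; the
# real run-perf-tests integration would live in Tools/Scripts/webkitpy.

import fnmatch


def load_whitelist(path):
    """Read one test path (or glob pattern) per line; '#' starts a comment."""
    patterns = []
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()
            if line:
                patterns.append(line)
    return patterns


def filter_tests(all_tests, patterns):
    """Keep only the tests matching at least one whitelist pattern."""
    return [t for t in all_tests
            if any(fnmatch.fnmatch(t, p) for p in patterns)]


if __name__ == '__main__':
    # Made-up test names and patterns, standing in for a TestWhitelist file.
    tests = ['Bindings/event-target-wrapper.html',
             'Parser/html5-full-render.html',
             'Layout/floats.html']
    patterns = ['Bindings/*', 'Parser/html5-full-render.html']
    print(filter_tests(tests, patterns))
```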
Comment 1 Eric Seidel (no email) 2012-12-14 02:34:15 PST
Huzzah!
Comment 2 Zoltan Horvath 2012-12-14 10:12:37 PST
Sounds reasonable. It would be good to check the code coverage we end up with once we have the whitelist.
Comment 3 Ryosuke Niwa 2012-12-18 01:21:48 PST
What should be the criteria for a test to be whitelisted?
Comment 4 Stephanie Lewis 2012-12-18 01:31:47 PST
When calculating a test's value I usually look at reproducibility, coverage/sensitivity, external interest, and length of time to run/difficulty to set up.  If a test's results are not consistent, then tracking its progress creates a burden instead of being helpful.  If a test exercises some obscure technology, or doesn't pick up major regressions in what it does test, it may not be valuable.  Tests that are run externally by the media are good to keep an eye on.  Length of time to run may not matter here, but if a test breaks a lot it may also impose a burden.

I think a good first step would be figuring out which tests have less than a 2% difference over a significant number of runs of the same source and go from there.
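(A minimal sketch of that first step, reading "difference" as relative standard deviation across repeated runs of the same source; the sample numbers and the 2% threshold handling are illustrative, not actual run-perf-tests output.)

```python
# Illustrative sketch: flag tests whose results spread by more than 2% across
# repeated runs of the same source. Sample values below are made up.

import math


def relative_stdev(values):
    """Sample standard deviation divided by the mean (coefficient of variation)."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return math.sqrt(variance) / mean


def stable_tests(results_by_test, threshold=0.02):
    """Return test names whose relative spread is within the threshold."""
    return [name for name, values in results_by_test.items()
            if relative_stdev(values) <= threshold]


if __name__ == '__main__':
    results = {
        'Parser/html5-full-render.html': [1802, 1814, 1798, 1810],  # ~0.4% spread
        'Layout/floats.html': [95, 104, 88, 101],                   # ~7% spread
    }
    print(stable_tests(results))  # only the Parser test passes the 2% bar
```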
Comment 5 Ryosuke Niwa 2012-12-18 14:56:30 PST
(In reply to comment #4)
> When calculating a test's value I usually look at reproducibility, coverage/sensitivity, external interest, and length of time to run/difficulty to set up.  If a test's results are not consistent, then tracking its progress creates a burden instead of being helpful.  If a test exercises some obscure technology, or doesn't pick up major regressions in what it does test, it may not be valuable.  Tests that are run externally by the media are good to keep an eye on.  Length of time to run may not matter here, but if a test breaks a lot it may also impose a burden.

That sounds sensible... except that

> I think a good first step would be figuring out which tests have less than a 2% difference over a significant number of runs of the same source and go from there.

Almost all tests have more than 2% variance :( We should probably fix https://bugs.webkit.org/show_bug.cgi?id=97510 first, then.