WebKit Bugzilla
NEW
105003
Whitelist a subset of tests to be run by run-perf-tests by default
https://bugs.webkit.org/show_bug.cgi?id=105003
Summary
Whitelist a subset of tests to be run by run-perf-tests by default
Ryosuke Niwa
Reported
2012-12-14 02:28:03 PST
Right now, we run all tests in PerformanceTests by default, but this isn't really helpful: some tests produce results with very high variance, while others exercise such a specific feature of the browser that they aren't worth running as regression tests. By whitelisting tests that are known to be good indicators of WebKit performance, we can start using them to tell whether a patch causes a real regression, paving the way to eventually adding perf EWS bots.
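(A minimal sketch of the whitelisting idea, not the actual run-perf-tests code; the whitelist file format and the helper names below are assumptions made purely for illustration.)

def load_whitelist(path):
    # Hypothetical format: one test path per line, '#' starts a comment.
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith('#')}

def filter_tests(all_tests, whitelist):
    # Keep only tests that are explicitly whitelisted.
    return [test for test in all_tests if test in whitelist]

# Usage: filter the discovered PerformanceTests against the whitelist.
tests = ['Bindings/event-target-wrapper.html', 'Parser/html5-full-render.html']
whitelist = {'Bindings/event-target-wrapper.html'}
print(filter_tests(tests, whitelist))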
Eric Seidel (no email)
Comment 1
2012-12-14 02:34:15 PST
Huzzah!
Zoltan Horvath
Comment 2
2012-12-14 10:12:37 PST
Sounds reasonable. It would be good to check the code coverage we actually exercise once we have the whitelist.
Ryosuke Niwa
Comment 3
2012-12-18 01:21:48 PST
What should be the criteria for a test to be whitelisted?
Stephanie Lewis
Comment 4
2012-12-18 01:31:47 PST
When calculating a test's value I usually look at reproducibility, coverage/sensitivity, external interest, and length of time to run/difficulty to set up. If a test's results are not consistent, then tracking its progress creates a burden rather than being helpful. If a test exercises some obscure technology, or doesn't pick up major regressions in what it does test, it may not be valuable. Tests that are run by the media externally are good to keep an eye on. Length of time to run may not matter here, but if a test breaks a lot it may also impose a burden. I think a good first step would be figuring out which tests have less than a 2% difference over a significant number of runs of the same source, and go from there.
Ryosuke Niwa
Comment 5
2012-12-18 14:56:30 PST
(In reply to comment #4)
> When calculating a test's value I usually look at reproducibility, coverage/sensitivity, external interest, and length of time to run/difficulty to set up. If a test's results are not consistent, then tracking its progress creates a burden rather than being helpful. If a test exercises some obscure technology, or doesn't pick up major regressions in what it does test, it may not be valuable. Tests that are run by the media externally are good to keep an eye on. Length of time to run may not matter here, but if a test breaks a lot it may also impose a burden.
That sounds sensible... except that
> I think a good first step would be figuring out which tests have less than a 2% difference over a significant number of runs of the same source and go from there.
Almost all tests have more than 2% variance :( We should probably fix https://bugs.webkit.org/show_bug.cgi?id=97510 first, then.
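(To make the 2% criterion concrete, here is a rough sketch; the function names and the coefficient-of-variation measure are assumptions for illustration, not how run-perf-tests or the bots actually compute variance.)

import statistics

def relative_spread(samples):
    # Standard deviation as a fraction of the mean (coefficient of variation).
    mean = statistics.mean(samples)
    return statistics.stdev(samples) / mean if mean else float('inf')

def is_stable(samples, threshold=0.02):
    # True if repeated runs of the same source vary by less than ~2%.
    return relative_spread(samples) < threshold

# Example: times in ms from repeated runs of one test on unchanged source.
runs = [103.2, 101.8, 104.1, 102.5, 103.0]
print(relative_spread(runs), is_stable(runs))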