Created attachment 165486 [details]
Two consecutive runs of perf. tests at r129387

floats_20_100 has really small in-run variance but a large between-run variance:
http://webkit-perf.appspot.com/graph.html#tests=[[477032,2001,3001]]&sel=1347931595661,1348536395661,408.9015180414373,486.55303319295246&displayrange=7&datatype=running

Bindings/scroll-top has the same problem:
http://webkit-perf.appspot.com/graph.html#tests=[[2932950,2001,3001],[2932950,2001,963028],[2932950,2001,32196]]&sel=1347932196853.843,1348536395661,24.128788890260637,166.97916767813942&displayrange=7&datatype=running

So does Dromaeo/jslib-attr-prototype:
http://webkit-perf.appspot.com/graph.html#tests=[[45011,2001,3001],[45011,2001,963028],[45011,2001,32196]]&sel=1347932196853.843,1348536395661&displayrange=7&datatype=running

All these tests result in false positives on the results page (see attachment).
I've considered the following two approaches to solve this problem:

1. Increase the number of samples we take in each test (JS code change).
2. Reduce the number of samples taken in each test by a factor of roughly 4, and run the same test in 4 different instances of DumpRenderTree.

I'm going to post a whole bunch of results pages now, but the results appear to indicate that we should take approach 2.
Created attachment 165487 [details] layout_20_100 (original)
Created attachment 165488 [details] layout_20_100 (approach 1)
Created attachment 165489 [details] layout_20_100 (approach 2)
Created attachment 165490 [details] scroll-top (original)
Created attachment 165491 [details] scroll-top (approach 1)
Created attachment 165492 [details] scroll-top (approach 2)
Created attachment 165493 [details] js-attr-prototype (original)
Created attachment 165494 [details] js-attr-prototype (approach 1)
Created attachment 165495 [details] js-attr-prototype (approach 2)
To elaborate on the two approaches, here is an example. Suppose we have a test that runs 20 iterations. Approach 1 increases the number of iterations to, say, 100. Approach 2 reduces the number of iterations to 5, but then runs the test 4 times, each time in a different instance of DumpRenderTree.
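For concreteness, here is a minimal sketch of the difference, assuming a hypothetical harness API; start_driver() and run_iteration() are made-up stand-ins, not the real webkitpy or runner.js functions.

# Minimal sketch only: start_driver() and run_iteration() are hypothetical
# stand-ins, not the actual harness API.

def approach_1(start_driver, run_iteration, iterations=100):
    # A single long-lived DumpRenderTree takes many more samples, but every
    # sample still shares that one process's state (heap growth, hash seeds, ...).
    driver = start_driver()
    return [run_iteration(driver) for _ in range(iterations)]

def approach_2(start_driver, run_iteration, iterations=5, process_count=4):
    # Fewer samples per process, taken in several fresh processes, so a single
    # unlucky process start-up cannot shift the whole run.
    samples = []
    for _ in range(process_count):
        driver = start_driver()
        samples.extend(run_iteration(driver) for _ in range(iterations))
    return samples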
General comment: In my experience, an average value is strongly affected by a couple of outliers. How about calculating the average value after discarding one or two outliers (i.e. discarding one or two largest values)? Or how about using a median instead of an average? What we are interested in is not the distribution of execution times but the execution time in "common cases". In that sense, in order to observe what we want to observe, it might make more sense to observe a median or an average that ignores outliers than observe a pure average of all values.
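For concreteness (just an illustration of the suggestion above, not a proposed patch), discarding the largest values or taking the median would look roughly like this:

import statistics

def trimmed_mean(samples, discard_largest=2):
    # Average after dropping the N largest values, per the suggestion above.
    kept = sorted(samples)[:-discard_largest] if discard_largest else list(samples)
    return statistics.mean(kept)

samples = [102, 99, 101, 100, 250, 103]    # one obvious outlier
statistics.mean(samples)                   # ~125.8, pulled up by the outlier
statistics.median(samples)                 # 101.5
trimmed_mean(samples)                      # 100.5 after discarding the two largest values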
(In reply to comment #12)
> General comment: In my experience, an average value is strongly affected by a couple of outliers. How about calculating the average value after discarding one or two outliers (i.e. discarding one or two largest values)? Or how about using a median instead of an average?

You can take a look at each graph (click on the test to show the graph, and then click on the graph to adjust the y-axis), but I don't think discarding one or two extrema or using a median would help here because some of these tests have bi-modal distributions.
Also, take a look at the graph at https://bug-97510-attachments.webkit.org/attachment.cgi?id=165488 (layout_20_100 with 100 iterations). There, the values are not only multi-modal, but the means and medians also end up centered at different values in different runs.
IMO you want to run enough iterations that issues in repeated code are found (e.g. the JavaScript heap growing each iteration and slowing things down), but running multiple instances does reduce variance, and it reduces the risk of an entire run being an outlier.
(In reply to comment #15)
> IMO you want to run enough iterations that issues in repeated code are found (e.g. the JavaScript heap growing each iteration and slowing things down), but running multiple instances does reduce variance, and it reduces the risk of an entire run being an outlier.

In V8 at least, there are some global variables that are initialized at startup and then used to compute hashes, etc., so just increasing the number of iterations doesn't help. See the results labeled "(approach 1)".
Created attachment 189027 [details] Work in progress
Created attachment 190681 [details] Work in progress 2
Created attachment 190829 [details] Patch
Created attachment 190831 [details] Fixed a harness test
Comment on attachment 190831 [details]
Fixed a harness test

View in context: https://bugs.webkit.org/attachment.cgi?id=190831&action=review

I can't wait to see the result on the bots.

> PerformanceTests/Dromaeo/resources/dromaeorunner.js:9
>     setup: function(testName) {
> -        PerfTestRunner.prepareToMeasureValuesAsync({iterationCount: 5, doNotMeasureMemoryUsage: true, doNotIgnoreInitialRun: true, unit: 'runs/s'});
> +        PerfTestRunner.prepareToMeasureValuesAsync({dromaeoIterationCount: 5, doNotMeasureMemoryUsage: true, doNotIgnoreInitialRun: true, unit: 'runs/s'});
>
>         var iframe = document.createElement("iframe");
> -        var url = DRT.baseURL + "?" + testName + '&numTests=' + PerfTestRunner.iterationCount();
> +        var url = DRT.baseURL + "?" + testName + '&numTests=' + 5;

How about introducing a variable instead of hard-coding the 5 twice? Something like:

var dromaeoIterationCount;
PerfTestRunner.prepareToMeasureValuesAsync({dromaeoIterationCount: dromaeoIterationCount, Foobar)
[...]
var url = DRT.baseURL + "?" + testName + '&numTests=' + dromaeoIterationCount;

> PerformanceTests/resources/runner.js:158
> +        iterationCount = test.dromaeoIterationCount || (window.testRunner ? 5 : 20);

Damn, JavaScript is ugly :-D

> Tools/Scripts/webkitpy/performance_tests/perftest.py:110
> +    def __init__(self, port, test_name, test_path, process_count=4):

process_count -> process_run_count or something like that?

> Tools/Scripts/webkitpy/performance_tests/perftest.py:134
> +        for _ in range(0, self._process_count):

xrange? Gosh I hate Python sometimes :)

> Tools/Scripts/webkitpy/performance_tests/perftest.py:138
> +            if not self._run_with_driver(driver, time_out_ms):
> +                return None

You may have 3 runs with results and one that failed?
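To illustrate the last review question above (this is only a hypothetical alternative, not what the patch does), the harness could keep the samples from the drivers that succeeded and only give up when every run fails:

# Hypothetical sketch, not the patch: run_with_driver() stands in for the real
# per-driver helper in perftest.py.

def run_all_drivers(run_with_driver, process_count=4, time_out_ms=60000):
    results = []
    for _ in range(process_count):
        result = run_with_driver(time_out_ms)   # per-driver samples, or None on failure
        if result is not None:
            results.append(result)
    return results or None                      # None only if every driver failed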
Created attachment 191091 [details] Updated per comment and introduced iteration groups
Comment on attachment 191091 [details]
Updated per comment and introduced iteration groups

View in context: https://bugs.webkit.org/attachment.cgi?id=191091&action=review

> Tools/Scripts/webkitpy/performance_tests/perftest.py:383
> -        for i in range(0, 20):
> +        for i in range(0, 6):

What does this do?
Comment on attachment 191091 [details]
Updated per comment and introduced iteration groups

I would have split the iteration-change half out into a separate patch. That would have reduced the size by half and made the two independent perf-results changes separate.
Comment on attachment 191091 [details]
Updated per comment and introduced iteration groups

View in context: https://bugs.webkit.org/attachment.cgi?id=191091&action=review

>> Tools/Scripts/webkitpy/performance_tests/perftest.py:383
>> +        for i in range(0, 6):
>
> What does this do?

It'll do 6 runs instead of 20 for a given driver. 20 should have been 21, since we ignore the first run. It was a bug :(
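To spell out the iteration-group idea (an illustrative sketch only; run_once() and the count are made-up stand-ins, and the warm-up handling is just one reading of "we ignore the first run"), each driver process performs one discarded warm-up run plus a small group of measured runs, e.g. 6 instead of 20:

# Illustrative only: not the patch's actual code.

def run_iteration_group(run_once, measured_runs=6):
    run_once()                                         # first run is warm-up; result discarded
    return [run_once() for _ in range(measured_runs)]  # measured runs for this driver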
Comment on attachment 191091 [details]
Updated per comment and introduced iteration groups

Clearing flags on attachment: 191091

Committed r144583: <http://trac.webkit.org/changeset/144583>
All reviewed patches have been landed. Closing bug.