Summary: Add performance tests for the getters/setters of HTMLElement attributes
Product: WebKit
Reporter: Kentaro Hara <haraken>
Component: Tools / Tests
Assignee: Kentaro Hara <haraken>
Status: RESOLVED WONTFIX
Severity: Normal
CC: abarth, rniwa
Priority: P2
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Bug Depends on:
Bug Blocks: 79208
Attachments:
Description
Kentaro Hara 2012-02-22 05:29:19 PST

Created attachment 128186 [details]
Patch

Created attachment 128194 [details]
Patch

Created attachment 128196 [details]
Patch
Comment:
Same comment as bug 79231. I don't think we want to add hundreds of these tests. We need to make sure the performance tests we're adding are useful on their own. For example, a test that takes 100ms to run ends up consuming at least 2s in total, since we take 20 samples by default. If we add 30 of those, we increase the bot cycle time by a minute. Also note that the Chromium Mac Perf bots are Mac minis and will be significantly slower than your Linux machines.

Comment (in reply to comment #4):
> Same comment as the bug 79231. I don't think we want to add hundreds of these tests. We need to make sure performance tests we're adding are useful on their own.

OK, let me reduce the number of tests.

> e.g. a test that takes 100ms to run would end up consuming at least 2s in total since we get 20 samples by default. If we're adding 30 of those, then we'll be increasing the bot cycle time by a minute.

I also want to reduce the running time of each test while keeping the results "reliable". Do you have a rough criterion relating the median and the standard deviation that would guarantee "reliability"?

Comment:
The objectives of these tests are as follows:
- catch performance regressions in the WebCore implementation
- catch performance regressions in the DOM bindings
- compare the performance of the JSC bindings and the V8 bindings (my primary focus; currently the V8 bindings are slower than the JSC bindings, and I want to improve that)

By the way, what is a good way to describe multiple tests in one HTML file? It seems wasteful to load one HTML file just for a ~100ms test.

Comment (in reply to comment #6):
> The objectives of these tests are as follows:
> - catch the performance regression in WebCore implementation
> - catch the performance regression in DOM bindings
> - compare the performance between JSC bindings and V8 bindings (My primary focus is here. Currently V8 bindings are slower than JSC bindings, and I want to improve it.)

Do we really need to add new tests for these? I suspect the DOM and Dromaeo tests would catch these regressions.

Comment (in reply to comment #8):
> Do we really need to add new tests for these? I suspect DOM and Dromaeo tests would catch these regressions.

rniwa: Dromaeo lacks coverage and simplicity. Dromaeo's dom-attr.html tests only HTMLElement.id, and Dromaeo's dom-traverse.html tests only a small subset of Node attributes. In addition, dom-traverse.html traverses a large HTML document, which can be affected by cache behavior (at the JavaScript level or the CPU level) and thus makes it difficult to capture the _pure_ performance of the DOM bindings and the WebCore implementation. In fact, performance issues appear to exist in various DOM attributes (https://docs.google.com/a/google.com/spreadsheet/ccc?key=0ArGPzKNdEGeQdEttZTAxWXJwMVdWeGxjbDgxMExYSEE#gid=0). Thus I would like to add a micro-benchmark that _directly_ captures the _pure_ performance of each DOM attribute getter/setter.

That being said, I do agree that we do not need micro-benchmarks for all DOM attributes. We can select the DOM attributes that are practically important (i.e. attributes on HTMLElement, Element and Node) and that have different call paths from one another (i.e. if we have a test for div.scrollTop, we do not need a test for div.scrollLeft).
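The "median vs. stddev" reliability question raised above can be sketched numerically. The following is a minimal sketch, not the actual harness criterion: it assumes a simple coefficient-of-variation rule (stddev within some fraction of the median), and the `isReliable` name and 5% threshold are hypothetical.

```javascript
// Sketch: a hypothetical reliability check for perf-test samples.
// Assumption: a run is "reliable" when the sample stddev is within
// a small fraction (here 5%) of the median. This threshold is
// illustrative, not the bots' real criterion.
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function stddev(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((acc, x) => acc + (x - mean) ** 2, 0) / samples.length;
  return Math.sqrt(variance);
}

function isReliable(samples, threshold = 0.05) {
  return stddev(samples) / median(samples) <= threshold;
}

console.log(isReliable([100, 101, 99, 100, 102])); // tight samples → true
console.log(isReliable([100, 150, 80, 120, 60]));  // noisy samples → false
```

With a rule like this, a test could stop sampling early once the spread is small enough, instead of always taking all 20 samples.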
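As a rough illustration of the kind of micro-benchmark discussed here, below is a minimal getter-timing sketch. It is not WebKit's actual PerformanceTests harness: a plain object stands in for the DOM element so the sketch can run outside a browser, and `benchmarkGetter`/`sample` are hypothetical names.

```javascript
// Sketch: time repeated reads of one attribute getter.
// Assumption: `fakeDiv` is a plain object standing in for a DOM
// element; a real test would read from an actual div in an HTML file.
function benchmarkGetter(target, property, iterations) {
  let sink = null; // keep each read live so the loop is not optimized away
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    sink = target[property];
  }
  const elapsed = Date.now() - start; // milliseconds
  return { elapsed, sink };
}

// Collect several samples, mirroring the harness's repeated runs.
function sample(target, property, iterations, runs) {
  const samples = [];
  for (let i = 0; i < runs; i++) {
    samples.push(benchmarkGetter(target, property, iterations).elapsed);
  }
  return samples;
}

const fakeDiv = { id: 'test', scrollTop: 0 };
console.log(sample(fakeDiv, 'id', 1000000, 5));
```

Several such loops (one per attribute) could live in a single HTML file, which would address the concern about loading one file per ~100ms test.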