Bug 79235 - Add performance tests for the getters/setters of Node attributes
Summary: Add performance tests for the getters/setters of Node attributes
Status: RESOLVED WONTFIX
Alias: None
Product: WebKit
Classification: Unclassified
Component: Tools / Tests
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Importance: P2 Normal
Assignee: Kentaro Hara
URL:
Keywords:
Depends on:
Blocks: 79208
Reported: 2012-02-22 06:15 PST by Kentaro Hara
Modified: 2012-02-23 01:19 PST
CC List: 3 users

See Also:


Attachments
Patch (17.27 KB, patch)
2012-02-22 06:21 PST, Kentaro Hara
no flags

Description Kentaro Hara 2012-02-22 06:15:53 PST
We can add micro-benchmark performance tests for the getters/setters of Node attributes.
Comment 1 Kentaro Hara 2012-02-22 06:21:18 PST
Created attachment 128201 [details]
Patch
Comment 2 Hajime Morrita 2012-02-22 17:13:33 PST
I'm a bit worried about having these too-micro benchmarks
because they could prevent code enhancement in an unreasonable way -
degrading the speed of non-hot attribute accesses.

How about making these tests into a single test and
providing the individual scores in an FYI manner instead?
For example, we could make them visible when you run the test in a browser or DRT,
but make the perfbot use only the accumulated value.
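
A minimal sketch of this scheme, in Python for illustration only; the real test would be JavaScript run in a browser or DRT, and every name below is hypothetical:

import time

def measure_ms(fn, iterations=1000000):
    # Time many calls to fn and return the elapsed milliseconds.
    start = time.time()
    for _ in range(iterations):
        fn()
    return (time.time() - start) * 1000

def run_combined_test(subtests):
    # Print each attribute's score as an informational "FYI" line;
    # only the accumulated total is meant to be tracked by the perfbot.
    total = 0.0
    for name, fn in subtests:
        elapsed = measure_ms(fn)
        total += elapsed
        print("FYI: %s %.2f ms" % (name, elapsed))
    print("Total: %.2f ms" % total)

With this shape, someone running the test locally still sees per-attribute numbers, while the bot consumes a single value.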
Comment 3 Ryosuke Niwa 2012-02-22 17:16:20 PST
(In reply to comment #2)
> I'm a bit worried about having these too-micro benchmarks
> because they could prevent code enhancement in an unreasonable way -
> degrading the speed of non-hot attribute accesses.

I have the same concern as well.

> How about making these tests into a single test and
> providing the individual scores in an FYI manner instead?
> For example, we could make them visible when you run the test in a browser or DRT,
> but make the perfbot use only the accumulated value.

That sounds reasonable. Presumably, we just need to add a new regex to the list of lines to ignore in performance_tests.py.
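
A sketch of what that could look like; the list name, the pre-existing pattern, and the "FYI" prefix are all assumptions rather than the actual contents of performance_tests.py:

import re

# Hypothetical list of patterns for output lines the bot should skip.
LINES_TO_IGNORE = [
    re.compile(r'^Running \d+ times$'),  # assumed pre-existing entry
    re.compile(r'^FYI: '),               # new: skip informational scores
]

def should_ignore(line):
    # True if the line matches any ignore pattern.
    return any(pattern.match(line) for pattern in LINES_TO_IGNORE)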
Comment 4 Kentaro Hara 2012-02-22 17:36:39 PST
(In reply to comment #3)
> > How about making these tests into a single test and
> > providing the individual scores in an FYI manner instead?
> > For example, we could make them visible when you run the test in a browser or DRT,
> > but make the perfbot use only the accumulated value.
> 
> That sounds reasonable. Presumably, we just need to add a new regex to the list of lines to ignore in performance_tests.py.

Sounds good. I'll take that approach.

> > I'm a bit worried about having these too-micro benchmarks
> > because they could prevent code enhancement in an unreasonable way -
> > degrading the speed of non-hot attribute accesses.

Personally, I do not think so (as I mentioned in https://bugs.webkit.org/show_bug.cgi?id=79228#c9m). Micro-benchmarks are useful as long as they are intended to test hot call-paths and they provide "reliable" results. As far as I can see from the current DOM getter/setter performance, it seems that we have "enhanced" code without knowing whether the change affects a hot call-path or a non-hot call-path. That is not a real code enhancement. Having micro-benchmarks detect performance degradation would avoid that. Once we learn that a detected degradation is not on a hot call-path, we can just ignore it or remove the test.

That being said, I am not sure how well these micro-benchmarks will work in practice. So for now I am happy to add them as "FYI", as morrita@ suggested.


rniwa: Would you tell me what kind of output would be helpful for the bot?
Comment 5 Ryosuke Niwa 2012-02-22 17:47:23 PST
> Personally, I do not think so (as I mentioned in https://bugs.webkit.org/show_bug.cgi?id=79228#c9m). Micro-benchmarks are useful as long as they are intended to test hot call-paths and they provide "reliable" results.

Not necessarily. Since adding new perf. tests will increase cycle time, it'll widen the regression window when we have a regression. And detecting performance regressions that don't affect the user experience isn't useful.

> As far as I can see from the current DOM getter/setter performance, it seems that we have "enhanced" code without knowing whether the change affects a hot call-path or a non-hot call-path. That is not a real code enhancement. Having micro-benchmarks detect performance degradation would avoid that. Once we learn that a detected degradation is not on a hot call-path, we can just ignore it or remove the test.

I do agree that having a perf. test is good, but we need to be careful not to overwhelm ourselves with data. For example, slowing down the setter for Document.title by 100% might be okay because people rarely reset the title. On the other hand, slowing down Element.style by even 5% would have quite disastrous consequences.

> rniwa: Would you tell me what kind of output would be helpful for the bot?

The bot doesn't really care what's in stdout. All you need to do is modify _process_parser_test_result in perftestsrunner.py to ignore those "FYI" lines. We should probably add a generic method on PerfTestRunner (runner.js) that outputs information in a certain format.
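
A standalone approximation of that filtering step, assuming an output format where each result line is "key value" and informational lines start with "FYI:"; the real _process_parser_test_result is structured differently:

import re

FYI_LINE = re.compile(r'^FYI: ')  # hypothetical marker for informational lines

def process_test_output(output):
    # Parse "key value" result lines, skipping FYI lines so that only
    # the accumulated score is reported to the bot.
    results = {}
    for line in output.splitlines():
        if FYI_LINE.match(line):
            continue
        key, _, value = line.partition(' ')
        if key and value:
            results[key] = value
    return results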