Bug 79235

Summary: Add performance tests for the getters/setters of Node attributes
Product: WebKit
Component: Tools / Tests
Status: RESOLVED WONTFIX
Severity: Normal
Priority: P2
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Reporter: Kentaro Hara <haraken>
Assignee: Kentaro Hara <haraken>
CC: abarth, morrita, rniwa
Bug Depends on:
Bug Blocks: 79208

Attachments:
Patch (flags: none)

Description Kentaro Hara 2012-02-22 06:15:53 PST
We can add micro-benchmark performance tests for the getters/setters of Node attributes.
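For illustration, the measurement loop for such a test would look roughly like this (a sketch in Python just to show the shape; the actual tests would be HTML/JS run through the WebKit perf harness, and all names here, such as FakeNode and runs_per_second, are made up):

    import time

    class FakeNode(object):
        """Stand-in for a DOM node; node_value mimics an attribute like Node.nodeValue."""
        def __init__(self):
            self.node_value = "initial"

    def runs_per_second(func, iterations=1000000, warmup=10000):
        # Warm up first so one-time setup costs do not skew the measurement.
        for _ in range(warmup):
            func()
        start = time.time()
        for _ in range(iterations):
            func()
        return iterations / (time.time() - start)

    node = FakeNode()
    print("nodeValue getter: %d runs/sec" % runs_per_second(lambda: node.node_value))
    print("nodeValue setter: %d runs/sec" %
          runs_per_second(lambda: setattr(node, "node_value", "x")))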
Comment 1 Kentaro Hara 2012-02-22 06:21:18 PST
Created attachment 128201
Patch
Comment 2 Hajime Morrita 2012-02-22 17:13:33 PST
I'm a bit worried about having these overly fine-grained micro-benchmarks,
because they could get in the way of reasonable code improvements by
flagging slowdowns on non-hot attribute accesses.

How about combining these tests into a single test and
reporting the individual scores in an FYI manner instead?
For example, we could make the individual scores visible when you run the test
in a browser or DRT, but have the perf bot use only the accumulated value.
Comment 3 Ryosuke Niwa 2012-02-22 17:16:20 PST
(In reply to comment #2)
> I'm a bit worried about having these overly fine-grained micro-benchmarks,
> because they could get in the way of reasonable code improvements by
> flagging slowdowns on non-hot attribute accesses.

I have the same concern.

> How about combining these tests into a single test and
> reporting the individual scores in an FYI manner instead?
> For example, we could make the individual scores visible when you run the test
> in a browser or DRT, but have the perf bot use only the accumulated value.

That sounds reasonable. Presumably, we just need to add a new regex to the list of lines to ignore in performance_tests.py.
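Something like this, roughly (hypothetical; I haven't checked what the ignore list in performance_tests.py is actually called, and both sample patterns below are made up):

    import re

    # Hypothetical sketch: patterns for stdout lines the results parser
    # should skip. The perf bot would then never treat a per-attribute
    # "FYI" score as a result of its own.
    IGNORED_LINE_PATTERNS = [
        re.compile(r'^Running \d+ times$'),  # assumed existing entry
        re.compile(r'^FYI: '),               # new: individual FYI scores
    ]

    def should_ignore_line(line):
        return any(pattern.match(line) for pattern in IGNORED_LINE_PATTERNS)

    assert should_ignore_line('FYI: firstChild getter: 123456 runs/sec')
    assert not should_ignore_line('some accumulated-score line')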
Comment 4 Kentaro Hara 2012-02-22 17:36:39 PST
(In reply to comment #3)
> > How about combining these tests into a single test and
> > reporting the individual scores in an FYI manner instead?
> > For example, we could make the individual scores visible when you run the test
> > in a browser or DRT, but have the perf bot use only the accumulated value.
> 
> That sounds reasonable. Presumably, we just need to add a new regex to the list of lines to ignore in performance_tests.py.

Sounds good. I'll take that approach.

> > I'm a bit worried about having these overly fine-grained micro-benchmarks,
> > because they could get in the way of reasonable code improvements by
> > flagging slowdowns on non-hot attribute accesses.

Personally I do not think so (as I mentioned in https://bugs.webkit.org/show_bug.cgi?id=79228#c9). Micro-benchmarks would be useful as long as they are intended to test hot call paths and they provide "reliable" results. Looking at the current DOM getter/setter performance, it seems that we have "enhanced" code without knowing whether the change affects a hot call path or a non-hot one. That is not a real enhancement. Having micro-benchmarks detect performance degradation would avoid that; once we learn that a detected degradation is not on a hot call path, we can just ignore it or remove the test.

That said, I am not sure how well these micro-benchmarks will work in practice, so for now I am happy to add them as "FYI", as morrita@ suggested.


rniwa: Would you tell me what kind of output would be helpful for the bot?
Comment 5 Ryosuke Niwa 2012-02-22 17:47:23 PST
> Personally I do not think so (as I mentioned in https://bugs.webkit.org/show_bug.cgi?id=79228#c9). Micro-benchmarks would be useful as long as they are intended to test hot call paths and they provide "reliable" results.

Not necessarily. Since adding new perf. tests will increase cycle time, it'll widen the regression window when we have a regression. And detecting performance regressions that don't affect user experience isn't useful.

> Looking at the current DOM getter/setter performance, it seems that we have "enhanced" code without knowing whether the change affects a hot call path or a non-hot one. That is not a real enhancement. Having micro-benchmarks detect performance degradation would avoid that; once we learn that a detected degradation is not on a hot call path, we can just ignore it or remove the test.

I do agree that having a perf. test is good, but we need to be careful not to overwhelm ourselves with data. For example, slowing down the setter for Document.title by 100% might be okay because people rarely reset the title; on the other hand, slowing down Element.style by even 5% would have quite disastrous consequences.

> rniwa: Would you tell me what kind of output would be helpful for the bot?

The bot doesn't really care what's in stdout. All you need to do is modify _process_parser_test_result in perftestsrunner.py to ignore those "FYI" lines. We should probably also add a generic method to PerfTestRunner (runner.js) that outputs information in a certain format.
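As a sketch (hypothetical; the real _process_parser_test_result does more than this, and the function name and line formats below are made up):

    def process_output_lines(lines):
        # Keep only the lines that carry scores for the bot; drop the
        # per-attribute "FYI" lines that runner.js would print for humans.
        results = []
        for line in lines:
            if line.startswith('FYI:'):
                continue  # informational only
            results.append(line)
        return results

    sample = [
        'FYI: firstChild getter: 123456 runs/sec',  # made-up FYI line
        'avg 1234.5 runs/s',                        # made-up total line
    ]
    print(process_output_lines(sample))  # -> ['avg 1234.5 runs/s']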