Summary: Add an option to show raw numbers of run-benchmark results.
Product: WebKit
Reporter: dewei_zhu
Component: New Bugs
Assignee: Nobody <webkit-unassigned>
Status: NEW
Severity: Normal
CC: buildbot, dewei_zhu, glenn, mjs, rniwa
Priority: P2
Version: WebKit Nightly Build
Hardware: Unspecified
OS: Unspecified
Attachments: Patch (attachment 323127), Patch (attachment 323159)
Description
dewei_zhu
2017-10-08 01:16:14 PDT
Created attachment 323127 [details]
Patch
Comment on attachment 323127 [details] Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=323127&action=review

> Tools/ChangeLog:8
> +        Showing result for a iteration right after iteration would help user to get more details about each iteration.

I don't think this is what Maciej is asking. He's asking for an option to show the list of values for each iteration at the end.

Comment on attachment 323127 [details] Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=323127&action=review

>> Tools/ChangeLog:8
>> +        Showing result for a iteration right after iteration would help user to get more details about each iteration.
>
> I don't think this is what Maciej is asking. He's asking for an option to show the list of values for each iteration at the end.

Would it still be useful when a test fails after some iterations? This would at least provide some information, and it is what the page load test does.

But I don't think that's what people are asking for. Perhaps we could consider this as a separate feature, but I don't think it addresses any of the complaints Maciej raised.

OK, I'm working on a patch that shows raw numbers for each iteration. I think we need to do that before the result gets merged.

(In reply to dewei_zhu from comment #5)
> OK, I'm working on a patch that shows raw numbers for each iteration. I
> think we need to do that before the result gets merged.

No, quite the opposite. We have to do this after the results are aggregated. There's a code path for --show-results where we accumulate values; that's the code which should be taught how to show individual iteration values.

For the merge function, I mean BenchmarkRunner._merge. This function merges the lists of numbers from each iteration into a single list that contains all the numbers, with similar logic for dictionaries. BenchmarkResults._flatten_list does a similar thing for lists.

However, after the merge we lose the information about which iteration a value came from. That's why I think we should do it before all the numbers get 'merged' if we want to show the raw values with iteration info. Maybe not showing iteration info for raw values is OK?

(In reply to dewei_zhu from comment #7)
> For the merge function, I mean BenchmarkRunner._merge. This function will
> merge lists of numbers from each iteration to a single list that contains
> all the numbers and similar logic for dictionary.
> BenchmarkResults._flatten_list does similar thing to list.
>
> However, after merge, we lose the information about which iteration a value
> comes from. That's why I think we should do it before all the numbers get
> 'merged' if we want to show the raw values with iteration info. May be not
> showing iteration info for raw value is ok?

Yeah, we don't have to show the original raw values. In fact, for most tests, the raw values of top-level results aren't even reported. So all you have to do is to modify _format_values, called by _format_tests in BenchmarkResults, and show the value for each iteration.

Created attachment 323159 [details]
Patch
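To illustrate the merge behavior discussed above: a minimal sketch of how per-iteration results get flattened into one list, losing the iteration boundaries. This is an illustration only, not the actual BenchmarkRunner._merge implementation; the merge helper and the sample data below are made up.

```python
# Simplified sketch of the merge behavior described above: per-iteration
# lists of values are concatenated into one flat list, so the information
# about which iteration produced which value is lost.
def merge(a, b):
    """Merge two partial results: concatenate lists, recurse into dicts."""
    if isinstance(a, list) and isinstance(b, list):
        return a + b
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for key, value in b.items():
            merged[key] = merge(merged[key], value) if key in merged else value
        return merged
    raise TypeError('cannot merge %r and %r' % (type(a), type(b)))


iteration_1 = {'Suite': {'Test': {'values': [10, 12]}}}
iteration_2 = {'Suite': {'Test': {'values': [11, 13]}}}
merged = merge(iteration_1, iteration_2)
# merged['Suite']['Test']['values'] is now [10, 12, 11, 13] -- the
# iteration boundaries are gone, which is the problem raised above.
```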
Comment on attachment 323159 [details] Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=323159&action=review

We need unit tests for this.

> Tools/ChangeLog:9
> +        Adding '--show-raw-values' option allows showing raw numbers for run-benchmark results.

Why don't we make this the default option?

> Tools/Scripts/webkitpy/benchmark_runner/benchmark_results.py:81
> +        if is_a_list_of_int:
> +            formated_value_str = ', '.join(map(str, values))
> +        else:
> +            formated_value_str = ', '.join(map('{:.3f}'.format, values))

I guess we don't want to show decimal points if they're whole numbers?

Comment on attachment 323159 [details] Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=323159&action=review

>> Tools/ChangeLog:9
>> +        Adding '--show-raw-values' option allows showing raw numbers for run-benchmark results.
>
> Why don't we make this the default option?

Showing the raw numbers may produce very long lines in the output, and I'm not sure all users want to see that. Also, using '--read-results-json' together with '--show-raw-values' can 'replay' the numbers.

>> Tools/Scripts/webkitpy/benchmark_runner/benchmark_results.py:81
>> +            formated_value_str = ', '.join(map('{:.3f}'.format, values))
>
> I guess we don't want to show decimal points if they're whole numbers?

OK, I can add a check which uses value.is_integer() to determine how to format the number.

I don't think this relates to anything I was asking for.

The script already has a lot of options, so I am not sure we should add more.

The JSON output already contains values for each iteration. The problem is that it only includes subtest scores, not the total score. I'm interested in having totals in there to use as input to other analysis scripts.

This sounds like a change to the output JSON format. It's more like a conversion operation, and we may want a separate script to do it.
In order to make it compatible with other analysis scripts, we first need to know the input format of those scripts.

(In reply to Maciej Stachowiak from comment #13)
> The JSON output already contains values for each iteration. The problem is
> that it only includes subtest scores, not the total score. I'm interested in
> having totals in there to use as input to other analysis scripts.

That would be a big change to the way run-benchmark works. We can add an option to do that, but it obviously can't be turned on by default.

To be clear, I'm not asking anyone to change the JSON output, at least not at this time. Tomorrow I will file bugs or send email about some possibly useful improvements to run-benchmark.
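For reference, the integer-aware formatting suggested in the review could look roughly like the sketch below. format_values is a hypothetical stand-in for the _format_values helper in BenchmarkResults, written under the assumption that it receives a flat list of numbers; it uses float.is_integer() per the review suggestion so whole numbers print without a trailing '.000'.

```python
def format_values(values):
    """Format a list of raw values, omitting decimal points for whole numbers."""
    def format_one(value):
        # Per the review suggestion, use float.is_integer() so that a value
        # like 12.0 is shown as '12' rather than '12.000'.
        if isinstance(value, int) or (isinstance(value, float) and value.is_integer()):
            return str(int(value))
        return '{:.3f}'.format(value)
    return ', '.join(format_one(v) for v in values)


print(format_values([12.0, 3.14159, 7]))  # -> 12, 3.142, 7
```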