WebKit Bugzilla
Attachment 341978 Details for Bug 186293: run-testmem should have a dry run option and an option to parse the stdout of executing the dry run
Description: patch
Filename: b-backup.diff
MIME Type: text/plain
Creator: Saam Barati
Created: 2018-06-05 11:08:07 PDT
Size: 8.51 KB
Flags: patch, obsolete
Index: Tools/ChangeLog
===================================================================
--- Tools/ChangeLog	(revision 232513)
+++ Tools/ChangeLog	(working copy)
@@ -1,3 +1,20 @@
+2018-06-05  Saam Barati  <sbarati@apple.com>
+
+        run-testmem should have a dry run option and an option to parse the stdout of executing the dry run
+        https://bugs.webkit.org/show_bug.cgi?id=186293
+
+        Reviewed by NOBODY (OOPS!).
+
+        This makes it easier to execute run-testmem on a device that doesn't have ruby
+        on it. run-testmem now has a --dry option that will output commands to execute
+        in bash. To run on a device that doesn't have ruby, just put its output into
+        a bash script, and copy the PerformanceTests/testmem directory onto that device,
+        and execute the bash script. Running that bash script will give you raw output.
+        Make a file with that output and pass it into run-testmem using the --parse option.
+        run-testmem will parse the raw output and compute the benchmark score.
+
+        * Scripts/run-testmem:
+
 2018-06-04  Frederic Wang  <fwang@igalia.com>
 
         import-w3c-tests should rely on <meta name="flags"> to detect CSS manual tests
Index: Tools/Scripts/run-testmem
===================================================================
--- Tools/Scripts/run-testmem	(revision 232486)
+++ Tools/Scripts/run-testmem	(working copy)
@@ -31,10 +31,13 @@ require 'getoptlong'
 
 def usage
     puts "run-testmem [options]"
-    puts "--build-dir (-b)   pass in a path to your build directory, e.g, WebKitBuild/Release"
-    puts "--verbose (-v)     print more information as the benchmark runs"
-    puts "--count (-c)       number of outer iterations to run the benchmark for"
-    puts "--help (-h)        print this message"
+    puts "--build-dir (-b)   Pass in a path to your build directory, e.g, WebKitBuild/Release"
+    puts "--verbose (-v)     Print more information as the benchmark runs"
+    puts "--count (-c)       Number of outer iterations to run the benchmark for"
+    puts "--dry (-d)         Print shell output that can be run as a bash script on a different device. When using this option, provide the --script-path and --build-dir options"
+    puts "--script-path (-s) The path to the directory where you expect the testmem tests to live. Use this when doing a dry run with --dry"
+    puts "--parse (-p)       After executing the dry run, capture its stdout and write it to a file. Pass the path to that file for this option and run-testmem will compute the results of the benchmark run"
+    puts "--help (-h)        Print this message"
 end
 
 THIS_SCRIPT_PATH = Pathname.new(__FILE__).realpath
@@ -43,10 +46,16 @@ SCRIPTS_PATH = THIS_SCRIPT_PATH.dirname
 $buildDir = nil
 $verbose = false
 $outerIterations = 3
+$dryRun = false
+$scriptPath = nil
+$parsePath = nil
 
 GetoptLong.new(["--build-dir", "-b", GetoptLong::REQUIRED_ARGUMENT],
                ["--verbose", "-v", GetoptLong::NO_ARGUMENT],
                ["--count", "-c", GetoptLong::REQUIRED_ARGUMENT],
+               ["--dry", "-d", GetoptLong::NO_ARGUMENT],
+               ["--script-path", "-s", GetoptLong::REQUIRED_ARGUMENT],
+               ["--parse", "-p", GetoptLong::REQUIRED_ARGUMENT],
                ["--help", "-h", GetoptLong::NO_ARGUMENT],
               ).each {
     | opt, arg |
@@ -61,12 +70,23 @@ GetoptLong.new(["--build-dir", "-b", Get
             puts "--count must be > 0"
             exit 1
         end
+    when "--dry"
+        $dryRun = true
+    when "--script-path"
+        $scriptPath = arg
+    when "--parse"
+        $parsePath = arg
     when "--help"
        usage
        exit 1
    end
}
 
+if $scriptPath && !$dryRun
+    puts "--script-path is only supported when you are doing a --dry run"
+    exit 1
+end
+
 def getBuildDirectory
     if $buildDir != nil
         return $buildDir
@@ -89,7 +109,7 @@ end
 
 def getTestmemPath
     path = Pathname.new(getBuildDirectory).join("testmem").to_s
-    if !File.exists?(path)
+    if !File.exists?(path) && !$dryRun
         puts "Error: no testmem binary found in <build>/Release"
         exit 1
     end
@@ -115,20 +135,30 @@ def getTests
         | filename |
         next unless filename =~ /\.js$/
         filePath = dirPath.join(filename).to_s
+        filePath = Pathname.new($scriptPath).join(filename).to_s if $scriptPath
         files.push([filePath, iterationCount(filePath)])
     }
 
     files.sort_by { | (path) | File.basename(path) }
 end
 
-def geomean(arr)
-    score = arr.inject(1.0, :*)
-    score ** (1.0 / arr.length)
-end
+def processRunOutput(stdout, path)
+    time, peakFootprint, footprintAtEnd = stdout.split("\n")
+    raise unless time.slice!("time:")
+    raise unless peakFootprint.slice!("peak footprint:")
+    raise unless footprintAtEnd.slice!("footprint at end:")
+    time = time.to_f
+    peakFootprint = peakFootprint.to_f
+    footprintAtEnd = footprintAtEnd.to_f
 
-def mean(arr)
-    sum = arr.inject(0.0, :+)
-    sum / arr.length
+    if $verbose
+        puts path
+        puts "time: #{time}"
+        puts "peak footprint: #{peakFootprint/1024/1024} MB"
+        puts "end footprint: #{footprintAtEnd/1024/1024} MB\n"
+    end
+
+    {"time"=>time, "peak"=>peakFootprint, "end"=>footprintAtEnd}
 end
 
 def runTest(path, iters)
@@ -138,6 +168,16 @@ def runTest(path, iters)
         "JSC_useJIT" => "false",
         "JSC_useRegExpJIT" => "false",
     }
+
+    if $dryRun
+        environment.each { | key, value |
+            command = "#{key}=#{value} #{command}"
+        }
+        puts "echo \"#{command}\""
+        puts command
+        return
+    end
+
     stdout, stderr, exitCode = Open3.capture3(environment, command)
 
     if $verbose
@@ -152,39 +192,20 @@ def runTest(path, iters)
         exit 1
     end
 
-    time, peakFootprint, footprintAtEnd = stdout.split("\n")
-    raise unless time.slice!("time:")
-    raise unless peakFootprint.slice!("peak footprint:")
-    raise unless footprintAtEnd.slice!("footprint at end:")
-    time = time.to_f
-    peakFootprint = peakFootprint.to_f
-    footprintAtEnd = footprintAtEnd.to_f
-
-    if $verbose
-        puts path
-        puts "time: #{time}"
-        puts "peak footprint: #{peakFootprint/1024/1024} MB"
-        puts "end footprint: #{footprintAtEnd/1024/1024} MB\n"
-    end
-
-    {"time"=>time, "peak"=>peakFootprint, "end"=>footprintAtEnd}
+    processRunOutput(stdout, path)
 end
 
-def run
-    tests = getTests
-    scores = {}
-    tests.each { | (path) | scores[path] = [] }
-    count = $outerIterations
-    (0..(count-1)).each { | currentIter |
-        tests.each { | (path, iters) |
-            statusToPrint = "iteration #{currentIter + 1}: #{File.basename(path, ".js")}"
-            print "#{statusToPrint}\r"
-            scores[path].push(runTest(path, iters))
-            print "#{" ".rjust(statusToPrint.length)}\r"
+def geomean(arr)
+    score = arr.inject(1.0, :*)
+    score ** (1.0 / arr.length)
+end
 
-        }
-    }
+def mean(arr)
+    sum = arr.inject(0.0, :+)
+    sum / arr.length
+end
 
+def processScores(scores)
     peakScore = []
     endScore = []
     timeScore = []
@@ -216,4 +237,55 @@ def run
     puts JSON.pretty_generate(scores) if $verbose
 end
 
-run
+def run
+    tests = getTests
+    scores = {}
+    tests.each { | (path) | scores[path] = [] }
+    count = $outerIterations
+
+    if $dryRun
+        (0..(count-1)).each { | currentIter |
+            tests.each { | (path, iters) |
+                runTest(path, iters)
+            }
+        }
+        return
+    end
+
+    (0..(count-1)).each { | currentIter |
+        tests.each { | (path, iters) |
+            statusToPrint = "iteration #{currentIter + 1}: #{File.basename(path, ".js")}"
+            print "#{statusToPrint}\r"
+            scores[path].push(runTest(path, iters))
+            print "#{" ".rjust(statusToPrint.length)}\r"
+        }
+    }
+
+    processScores(scores)
+end
+
+def parseResultOfDryRun(path)
+    contents = IO.read(path).split("\n")
+    if !contents.length || contents.length % 4 != 0
+        puts "Bad input, expect multiple of 4 number of lines from output of running the result of --dry"
+        exit 1
+    end
+
+    scores = {}
+    i = 0
+    while i < contents.length
+        path = contents[i + 0].split(" ")[-2]
+        scores[path] = [] if !scores[path]
+        stdout = [contents[i + 1], contents[i + 2], contents[i + 3]].join("\n")
+        scores[path].push(processRunOutput(stdout, path))
+        i += 4
+    end
+
+    processScores(scores)
+end
+
+if $parsePath
+    parseResultOfDryRun($parsePath)
+else
+    run
+end