[JSC] implement runtime for async functions
Created attachment 292267 [details] Patch
Attachment 292267 [details] did not pass style-queue:

ERROR: Source/JavaScriptCore/ChangeLog:28:  Line contains tab character.  [whitespace/tab] [5]
Total errors found: 1 in 42 files

If any of these errors are false positives, please file a bug against check-webkit-style.
Created attachment 292346 [details] Patch
Comment on attachment 292346 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292346&action=review

> Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp:3426
> +    case SourceParseMode::AsyncFunctionMode:
> +    case SourceParseMode::AsyncMethodMode:
> +    case SourceParseMode::AsyncArrowFunctionMode: {

Can you write a helper function to share some code with SourceParseMode::GeneratorWrapperFunctionMode? I see some duplication here.
Comment on attachment 292346 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292346&action=review

Great! Saam is now looking deeply, so just small nits.

> Source/JavaScriptCore/builtins/AsyncFunctionPrototype.js:33
> +        throw new @TypeError("Async function illegally resumed");

Let's use `@throwTypeError("");`

> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:573
> +        emitMove(m_generatorRegister, &m_calleeRegister);

Is this correct? Even in AsyncArrowFunctionMode, we still need to create a generator with emitCreateThis.

> Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp:3479
> +    generator.emitReturn(result.get());

It seems we have a chance to use a tail call here.
Comment on attachment 292346 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292346&action=review

> Source/JavaScriptCore/ChangeLog:32
> +        In this bug, async arrow functions are only partially implemented, as super bindings in async functions
> +        do not yet work correctly.

Is there a bug open for this? What doesn't work about super?

> Source/JavaScriptCore/bytecode/UnlinkedFunctionExecutable.h:140
> +        return SourceParseModeSet(
> +            SourceParseMode::ArrowFunctionMode,
> +            SourceParseMode::AsyncArrowFunctionMode,
> +            SourceParseMode::AsyncArrowFunctionBodyMode).contains(parseMode());

Even though this function is simple, since you also repeat it above, I'd recommend just creating a helper function that both implementations call, like:

    bool isArrowFunction(SourceParseMode) { ... }

maybe defined in the file where SourceParseMode is defined.

> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:562
> +    if (functionNode->usesThis() || codeBlock->usesEval()) {

I'm not sure this condition covers all the use cases of |this|. If you look under the default case, you'll see that the normal functions check for inner arrow functions that use |this| or |super|. Maybe it's worth just writing a helper function that you can call here, in the GeneratorWrapperFunctionMode, and in the normal function mode.

> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:659
> +    TryData* tryFormalParametersData = nullptr;
> +    if (isAsyncFunctionWrapperParseMode(parseMode) && !isSimpleParameterList) {

Should normal generator functions also be doing this?

> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:591
> +    m_functionConstructor.set(vm, this, (FunctionConstructor*)functionConstructor);

You need a visitChildren rule for this.

> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:639
> +    m_asyncFunctionStructure.initLater(

And I think you need a visitChildren rule for this too?

> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:644
> +        init.setConstructor(PropertyName(nullptr), AsyncFunctionConstructor::create(init.vm, AsyncFunctionConstructor::createStructure(init.vm, init.global, init.global->m_functionConstructor.get()), asyncFunctionPrototype));

Style nit: Please use the version that doesn't require a PropertyName as the first parameter.

> Source/JavaScriptCore/runtime/JSGlobalObject.h:540
> +        return const_cast<const LazyClassStructure&>(const_cast<JSGlobalObject*>(this)->lazyAsyncFunctionStructure());

I think it'd be cleaner to just return m_asyncFunctionStructure here.
Comment on attachment 292346 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292346&action=review

>> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:573
>> +        emitMove(m_generatorRegister, &m_calleeRegister);
> 
> Is this correct? Even in AsyncArrowFunctionMode, we still need to create a generator with emitCreateThis.

Or, I think you can just create a plain object instead. Generator uses emitCreateThis to set up the prototype correctly, but that is not necessary for async functions because an async function does not expose the internal generator.
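Yusuke's point can be checked from the language side: the generator driving an async function is purely internal, so its prototype is never observable by script. A minimal sketch of what callers can (and cannot) see:

```javascript
async function f() { return 42; }

const result = f();

// The only object an async function exposes is a Promise...
console.log(result instanceof Promise); // true

// ...with no generator interface: no .next/.throw/.return methods.
console.log(typeof result.next);  // "undefined"
console.log(typeof result.throw); // "undefined"
```

Since nothing in user code can reach the internal generator, its prototype chain need not match what emitCreateThis would produce.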
Comment on attachment 292346 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292346&action=review

>> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:562
>> +    if (functionNode->usesThis() || codeBlock->usesEval()) {
> 
> I'm not sure this condition covers all the use cases of |this|. If you look under the default case, you'll see that the normal functions check for inner arrow functions that use |this| or |super|. Maybe it's worth just writing a helper function that you can call here, in the GeneratorWrapperFunctionMode, and in the normal function mode.

Yusuke Suzuki: I had originally copy/pasted this from the generator wrapper code. WDYT? Is there test coverage for deeply nested arrows inside generator bodies which use |this|/eval?

>> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:573
>> +        emitMove(m_generatorRegister, &m_calleeRegister);
> 
> Is this correct? Even in AsyncArrowFunctionMode, we still need to create a generator with emitCreateThis.

You're right. I think this results in storing generator fields on the wrong object, which is potentially observable. I'm not sure exactly how to test whether it is observable, though.
Comment on attachment 292346 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292346&action=review

> JSTests/stress/async-await-basic.js:8
> +    if (msg === void 0)
> +        msg = "";
> +    else
> +        msg = " for " + msg;

Nit: You could just make this a default parameter value.

> JSTests/stress/async-await-mozilla.js:8
> +function shouldBe(expected, actual, msg) {

Nit: Since you use these helpers in many of your tests, you could create a file in some async-helpers directory like JSTests/stress/async-helpers, and then load the helpers at the top of these files.

> JSTests/stress/async_arrow_functions_lexical_arguments_binding.js:1
> +// This test requires ENABLE_ES2017_ASYNCFUNCTION_SYNTAX to be enabled at build time.

Nit: I think we tend to use dashes instead of underscores for test file names.

> JSTests/stress/async_arrow_functions_lexical_arguments_binding.js:47
> +shouldBeAsync("[1,2,3]", () => (function() { return (async () => JSON.stringify([...arguments]))(); })(1, 2, 3));
> +shouldBeAsync("[4,5,6]", () => (function() { return (async () => { return JSON.stringify([...await arguments]) })(); })(4, 5, 6));

Can you also add a test that checks that these are the same binding?

> JSTests/stress/async_arrow_functions_lexical_new.target_binding.js:1
> +// This test requires ENABLE_ES2017_ASYNCFUNCTION_SYNTAX to be enabled at build time.

Nit: I think we tend to use dashes instead of underscores for test file names.

> JSTests/stress/async_arrow_functions_lexical_super_binding.js:1
> +// This test requires ENABLE_ES2017_ASYNCFUNCTION_SYNTAX to be enabled at build time.

Nit: I think we tend to use dashes instead of underscores for test file names.

> JSTests/stress/async_arrow_functions_lexical_this_binding.js:1
> +// This test requires ENABLE_ES2017_ASYNCFUNCTION_SYNTAX to be enabled at build time.

Nit: I think we tend to use dashes instead of underscores for test file names.
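Both test nits above can be sketched in a few lines. The helper names below (`formatMessage`, `sameBindingProbe`) are hypothetical; the original helper bodies are not shown in the review, so this is a sketch of the suggested shape, not the patch's actual code:

```javascript
// Default-parameter form of the void-0/else dance the nit points at.
function formatMessage(msg = "") {
  return msg === "" ? "" : " for " + msg;
}

// An async arrow captures the enclosing function's `arguments` binding
// lexically, so a "same binding" test can mutate the object and observe
// the change through the arrow.
function sameBindingProbe() {
  const probe = async () => arguments[0];
  arguments[0] = "mutated"; // visible through the shared binding
  return probe();           // resolves to "mutated", not "original"
}
```

Calling `sameBindingProbe("original")` resolving to `"mutated"` demonstrates that the arrow sees the very same `arguments` object, not a copy.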
Patch LGTM; however, I'll let you address some of our comments before I r+.
Comment on attachment 292346 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292346&action=review

>>> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:562
>>> +    if (functionNode->usesThis() || codeBlock->usesEval()) {
>> 
>> I'm not sure this condition covers all the use cases of |this|. If you look under the default case, you'll see that the normal functions check for inner arrow functions that use |this| or |super|. Maybe it's worth just writing a helper function that you can call here, in the GeneratorWrapperFunctionMode, and in the normal function mode.
> 
> Yusuke Suzuki: I had originally copy/pasted this from the generator wrapper code. WDYT? Is there test coverage for deeply nested arrows inside generator bodies which use |this|/eval?

I think GeneratorWrapperFunctionMode currently always emits to_this and passes it to the generator, right?
Created attachment 292386 [details] First round of comments
Created attachment 292387 [details] First round of comments + delete that commented out op_to_this
Created attachment 292390 [details] All those other fixes + a rebase
Comment on attachment 292346 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292346&action=review

>> Source/JavaScriptCore/ChangeLog:32
>> +        do not yet work correctly.
> 
> Is there a bug open for this? What doesn't work about super?

As an example from the test:

```
Exception: TypeError: undefined is not an object (evaluating 'super.baseClassValue')
JSTests/stress/async-arrow-functions-lexical-super-binding.js:38:32
asyncFunctionResume@[native code]
shouldBeAsync@JSTests/stress/async-arrow-functions-lexical-super-binding.js:17:8
global code@JSTests/stress/async-arrow-functions-lexical-super-binding.js:46:14
```

This happens even though the lexical arguments, new.target, and this tests pass. I believe the issue is that the inner arrow function features aren't being passed back to the wrapper function, or aren't used correctly, or loading super in the inner arrow function is just implemented incorrectly. If I can't figure out the cause before landing, I will file a bug and add it to the FIXME.

>> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:639
>> +    m_asyncFunctionStructure.initLater(
> 
> And I think you need a visit children for this too?

Done for both additions.

>> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:644
>> +        init.setConstructor(PropertyName(nullptr), AsyncFunctionConstructor::create(init.vm, AsyncFunctionConstructor::createStructure(init.vm, init.global, init.global->m_functionConstructor.get()), asyncFunctionPrototype));
> 
> Style nit: Please use the version that doesn't require a PropertyName as the first parameter.

That version will take the function name from the function and install it as a property on the global object. We don't want that. Similarly, the GeneratorFunction constructor is also not installed on the global object.

>> Source/JavaScriptCore/runtime/JSGlobalObject.h:540
>> +        return const_cast<const LazyClassStructure&>(const_cast<JSGlobalObject*>(this)->lazyAsyncFunctionStructure());
> 
> I think it'd be cleaner to just return m_asyncFunctionStructure here

I agree. I assume there was some reason for not doing that in other similar code in this file, but I'll try it.

>> JSTests/stress/async-await-mozilla.js:8
>> +function shouldBe(expected, actual, msg) {
> 
> Nit: Since you use these helpers in many of your tests, you could create a file in some async-helpers directory like JSTests/stress/async-helpers, and then load the helpers at the top of these files.

Since these tests are all skipped from the test runner, it's simpler to run them with the helpers installed in each test file, because you're running them manually. Maybe something like that could wait until the feature is properly upstream? Note: there's also a lot of precedent; many of the tests in JSTests/stress define their own assertion helpers.
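The super failure quoted above can be reduced to a standalone case. This is a minimal sketch modeled on the failing test's error message; the class and method names are assumed, not taken from the actual test file:

```javascript
class BaseClass {
  baseClassValue() { return "BaseClassValue"; }
}

class ChildClass extends BaseClass {
  // The async arrow must capture the enclosing method's |super| binding
  // lexically. The reported TypeError ("undefined is not an object") means
  // that home-object binding was lost inside the async arrow.
  asyncSuperProp() {
    return async () => super.baseClassValue();
  }
}

new ChildClass().asyncSuperProp()().then((v) => console.log(v)); // "BaseClassValue"
```

A correct implementation resolves the promise with `"BaseClassValue"`; the broken one throws while evaluating `super.baseClassValue`.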
Created attachment 292424 [details] fix the FunctionConstructor.cpp unreachable code
Comment on attachment 292424 [details]
fix the FunctionConstructor.cpp unreachable code

View in context: https://bugs.webkit.org/attachment.cgi?id=292424&action=review

> Source/JavaScriptCore/builtins/AsyncFunctionPrototype.js:33
> +        throw new @TypeError("");

Ah, no. Sorry for my misleading comment. There is a special builtin, @throwTypeError. So instead of using `throw new @TypeError()`, use `@throwTypeError("with your error message")`.

> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:560
> +

And, is there any need to emit to_this here? In the generator wrapper function mode code, we always emit to_this to set up |this|.

> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:561
> +    emitMove(m_generatorRegister, &m_calleeRegister);

I think it's not necessary. m_generatorRegister will be filled by emitNewObject.
Comment on attachment 292424 [details]
fix the FunctionConstructor.cpp unreachable code

View in context: https://bugs.webkit.org/attachment.cgi?id=292424&action=review

>> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:560
>> +
> 
> And, is there any need to emit to_this here? In the generator wrapper function mode code, we always emit to_this to set up the |this|.

In my test, this seems to perform ToObject() on the receiver, which breaks when Function.prototype.call or apply is used with an undefined receiver in strict mode. I'm not sure why that doesn't break generators in the same way, but AFAIK there is no need for it.

>> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:561
>> +    emitMove(m_generatorRegister, &m_calleeRegister);
> 
> I think it's not necessary. m_generatorRegister will be filled by emitNewObject.

OK.
Comment on attachment 292424 [details]
fix the FunctionConstructor.cpp unreachable code

View in context: https://bugs.webkit.org/attachment.cgi?id=292424&action=review

>>> Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp:560
>>> +
>> 
>> And, is there any need to emit to_this here? In the generator wrapper function mode code, we always emit to_this to set up the |this|.
> 
> In my test, this seems to perform ToObject() on the receiver, which breaks when Function.prototype.call or apply is used with an undefined receiver in strict mode. I'm not sure why that doesn't break generators in the same way, but AFAIK there is no need for it.

Actually, I'm wrong; I will re-add this.
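The to_this question corresponds to an observable language behavior: a sloppy-mode function must coerce an undefined receiver with ToObject (yielding the global object), while a strict-mode function must see it untouched. A small sketch, using the AsyncFunction constructor to guarantee a sloppy-mode function regardless of the surrounding code's strictness:

```javascript
// Functions created via the (Async)Function constructor are always sloppy,
// so |this| is coerced: an undefined receiver becomes the global object.
const AsyncFunction = Object.getPrototypeOf(async function () {}).constructor;
const sloppy = new AsyncFunction("return this;");

async function strictFn() { "use strict"; return this; }

sloppy.call(undefined).then((t) => console.log(t === globalThis)); // true

// Strict code must see the receiver unchanged; unconditionally running
// ToObject on |this| would wrongly produce an object here.
strictFn.call(undefined).then((t) => console.log(t === undefined)); // true
```

This is why to_this is still needed (for the sloppy case) even though it initially looked like it only broke the strict case.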
Created attachment 292466 [details] Re-add op_to_this
Created attachment 292470 [details] fix async-arrow-functions-lexical-arguments-binding.js
Comment on attachment 292470 [details]
fix async-arrow-functions-lexical-arguments-binding.js

View in context: https://bugs.webkit.org/attachment.cgi?id=292470&action=review

> Source/JavaScriptCore/builtins/AsyncFunctionPrototype.js:33
> +        throw new @TypeError("");

Ah, no. There is a special builtin intrinsic, @throwTypeError. So instead of using `throw new @TypeError()`, use `@throwTypeError("with your error message")`.
Created attachment 292752 [details]
Patch

Most everything is working now. There is room for improvement: the same issue as generators re: always performing op_to_this, async arrow functions always storing the home object on the function body function, and some misc failures like lazily reporting errors for an invalid LHS (which for await could be a web compat problem in the future!), plus various issues with Function.prototype.toString(). But apart from those, which are not entirely specific to async functions, everything is passing.
Created attachment 292754 [details] Patch
Comment on attachment 292754 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292754&action=review

Nice! I can set r+ once you address Saam's comments :)

> Source/JavaScriptCore/bytecode/BytecodeList.json:109
> +    { "name" : "op_new_async_func_exp", "length" : 4 },

Could you file a bug about implementing these opcodes in the DFG?

> Source/JavaScriptCore/parser/Parser.h:1234
> +    }

Can you add a comment here? It is a bit tricky.

> Source/JavaScriptCore/runtime/FunctionConstructor.cpp:157
> +    UNREACHABLE_FOR_PLATFORM();

Use ASSERT_NOT_REACHED().

> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:1120
> +    visitor.append(&thisObject->m_functionPrototype);

This should be `m_functionConstructor`.
Comment on attachment 292754 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292754&action=review

>> Source/JavaScriptCore/bytecode/BytecodeList.json:109
>> +    { "name" : "op_new_async_func_exp", "length" : 4 },
> 
> Could you file a bug implementing these opcodes in DFG?

I have a prototype implementation for the optimizing backends based on https://trac.webkit.org/changeset/194216. Should I attach that as part of this bug, or create a new bug for it? It's passing variations of all the tests added for the generator version.

>> Source/JavaScriptCore/parser/Parser.h:1234
>> +    }
> 
> Can you add some comment here? It is a bit tricky.

Let's address that in https://bugs.webkit.org/show_bug.cgi?id=163930, which should be easy to land before this.

>> Source/JavaScriptCore/runtime/FunctionConstructor.cpp:157
>> +    UNREACHABLE_FOR_PLATFORM();
> 
> Use ASSERT_NOT_REACHED().

Done.

>> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:1120
>> +    visitor.append(&thisObject->m_functionPrototype);
> 
> This should be `m_functionConstructor`.

Oops, yeah. Done.
Comment on attachment 292754 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=292754&action=review

>>> Source/JavaScriptCore/bytecode/BytecodeList.json:109
>>> +    { "name" : "op_new_async_func_exp", "length" : 4 },
>> 
>> Could you file a bug implementing these opcodes in DFG?
> 
> I have a prototype implementation for the optimizing backends based on https://trac.webkit.org/changeset/194216. Should I attach that as part of this bug, or create a new bug for it? It's passing variations of all the tests added for the generator version.

I like the separate one :)

>>> Source/JavaScriptCore/parser/Parser.h:1234
>>> +    }
>> 
>> Can you add some comment here? It is a bit tricky.
> 
> Let's address that in https://bugs.webkit.org/show_bug.cgi?id=163930, which should be easy to land before this.

Nice.
Created attachment 292961 [details] Couple comments addressed
Created attachment 293097 [details]
Patch

Rebase and update changelogs.
some Octane runs look pretty neutral (I expect there is a very slight parsing hit, but very little). Multiple runs yield different variations of winners/losers, so it looks neutral to me. I'll do some more perf testing, but I don't think this should show the same regressions as last time.

```
                 With patch:    Without patch:

Octane Score:    36347          36816

Richards:        41254          42666
Deltablue:       50997          50765
Crypto:          39711          38458
Raytrace:        96643          96051
EarleyBoyer:     57098          57083
Regexp:          5450           5231
Splay:           22228          23103
SplayLatency:    21059          22475
NavierStokes:    28703          29562
pdf.js:          25237          24480
Mandreel:        38559          39258
MandreelLatency: 25565          26517
GB Emulator:     85638          88147
CodeLoad:        22789          23316
Box2DWeb:        54762          57669
zlib:            61569          59881
Typescript:      61037          63215
```
Comment on attachment 293097 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=293097&action=review

r=me with nits. Once you validate the performance, you can land your patch by just applying the changes locally (and using Tools/Scripts/webkit-patch land). :)

> Source/JavaScriptCore/ChangeLog:36
> +        not, for the same reason as #1.

Could you open two bugs for these issues and paste the URLs into this ChangeLog? We usually open bugs and paste URLs for FIXME and TODO things.
Some select benchmarks seem to indicate that while the runtime changes are mostly neutral, parsing has regressed a bit when building with --asyncfunction-syntax. The largest difference is between the "Flag" and "NoFlag" variations, as the "PatchNoFlag" vs. "NoPatchNoFlag" results are very close to identical. I have some ideas which might improve the parser a bit, relating to how different bits are represented in Scopes (particularly all of the different function types, many of which are treated as equivalent). I think I can deal with those separately, because without the flag enabled, the regression isn't visible.

```
Collected 50 samples per benchmark/VM, with 10 VM invocations per benchmark.
Emitted a call to gc() between sample measurements.
Used 1 benchmark iteration per VM invocation for warm-up.
Used the jsc-specific preciseTime() function to get microsecond-level timing.
Reporting benchmark execution times with 95% confidence intervals in milliseconds.

             NoPatchNoFlag          PatchNoFlag            PatchAndFlag           PatchAndFlag v. NoPatchNoFlag
closure      0.51644+-0.00439     ? 0.51708+-0.00403         0.51160+-0.00405
jquery       6.92018+-0.06363     ? 6.93134+-0.04141       ? 7.01164+-0.06870       ? might be 1.0132x slower
<geometric>  1.88996+-0.01085     ? 1.89278+-0.00754       ? 1.89352+-0.01179       ? might be 1.0019x slower

Collected 50 samples per benchmark/VM, with 10 VM invocations per benchmark.
Emitted a call to gc() between sample measurements.
Used 1 benchmark iteration per VM invocation for warm-up.
Used the jsc-specific preciseTime() function to get microsecond-level timing.
Reporting benchmark execution times with 95% confidence intervals in milliseconds.

             NoPatchNoFlag          PatchNoFlag            PatchAndFlag           PatchAndFlag v. NoPatchNoFlag
richards x2  0.08420+-0.00059       0.08349+-0.00049      ? 0.08364+-0.00051
splay x2     0.32718+-0.00135     ! 0.33182+-0.00313      ? 0.33193+-0.00192       ! definitely 1.0145x slower
gbemu x2     23.83091+-0.85187      23.80783+-0.83061       23.74875+-0.82743
<geometric>  0.86772+-0.01036     ? 0.86909+-0.01051        0.86904+-0.01043       ? might be 1.0015x slower
```
A more complete set of Octane benchmark runs, just for the record:

```
Collected 50 samples per benchmark/VM, with 10 VM invocations per benchmark.
Emitted a call to gc() between sample measurements.
Used 1 benchmark iteration per VM invocation for warm-up.
Used the jsc-specific preciseTime() function to get microsecond-level timing.
Reporting benchmark execution times with 95% confidence intervals in milliseconds.

                  NoPatchNoFlag           PatchNoFlag             PatchAndFlag            PatchAndFlag v. NoPatchNoFlag
encrypt           0.15988+-0.00085      ? 0.16156+-0.00084      ? 0.16317+-0.00099        ! definitely 1.0206x slower
decrypt           2.77154+-0.01438      ! 2.81940+-0.02208      ^ 2.77487+-0.01817        ?
deltablue x2      0.12881+-0.00111      ? 0.13376+-0.00456        0.13324+-0.00186        ! definitely 1.0344x slower
earley            0.25353+-0.00204      ! 0.25949+-0.00198      ^ 0.25449+-0.00169        ?
boyer             4.23287+-0.02680      ! 4.32207+-0.02431        4.31110+-0.03326        ! definitely 1.0185x slower
navier-stokes x2  5.05927+-0.03385      ? 5.06362+-0.03607        5.05819+-0.03039
raytrace x2       0.70197+-0.00383        0.69629+-0.00408      ? 0.69891+-0.00617
richards x2       0.08549+-0.00074      ? 0.08562+-0.00060        0.08520+-0.00059
splay x2          0.33274+-0.00229      ! 0.33720+-0.00190        0.33662+-0.00271        ? might be 1.0117x slower
regexp x2         17.80696+-0.33081     ? 17.81317+-0.26333     ? 17.90212+-0.32226       ?
pdfjs x2          36.85407+-0.61248     ? 37.35442+-0.62312       37.14721+-0.55147       ?
mandreel x2       38.89018+-0.65043       38.74223+-0.66118     ? 38.76411+-0.64468
gbemu x2          24.22962+-0.92197       24.05966+-0.86813     ? 24.15709+-0.86114
closure           0.52217+-0.00462      ? 0.52625+-0.00620        0.51911+-0.00486
jquery            6.94829+-0.05423      ? 6.95970+-0.05021      ? 6.99213+-0.04834        ?
box2d x2          8.26966+-0.16941      ? 8.34642+-0.17403      ? 8.38814+-0.18631        ? might be 1.0143x slower
zlib x2           235.10652+-17.01248     232.86443+-16.51304   ? 237.63515+-16.85768     ? might be 1.0108x slower
typescript x2     409.22673+-30.05127   ? 412.86264+-29.61373   ? 413.84910+-29.59527     ? might be 1.0113x slower
<geometric>       4.57190+-0.06679      ? 4.60048+-0.06416      ? 4.60367+-0.06531        ? might be 1.0069x slower
```

I'm skeptical that the typescript and zlib differences are as substantial as they look, so I still think this points to mostly parser regressions when built with the flag.
Committed r208052: <http://trac.webkit.org/changeset/208052>
Reopening to attach new patch.
Created attachment 293170 [details] broken patch meant for https://bugs.webkit.org/show_bug.cgi?id=164037
(In reply to comment #30)
> some Octane runs look pretty neutral, (I expect there is a very slight
> parsing hit, but very little). Multiple runs yield different variations of
> winners/losers, so it looks neutral to me. I'll do some more perf testing,
> but I don't think this should show the same regressions as last time
> 
> ```
>                  With patch:    Without patch:
> 
> Octane Score:    36347          36816
> 
> Richards:        41254          42666
> Deltablue:       50997          50765
> Crypto:          39711          38458
> Raytrace:        96643          96051
> EarleyBoyer:     57098          57083
> Regexp:          5450           5231
> Splay:           22228          23103
> SplayLatency:    21059          22475
> NavierStokes:    28703          29562
> pdf.js:          25237          24480
> Mandreel:        38559          39258
> MandreelLatency: 25565          26517
> GB Emulator:     85638          88147
> CodeLoad:        22789          23316
> Box2DWeb:        54762          57669
> zlib:            61569          59881
> Typescript:      61037          63215
> ```

This indicates a 1.2% regression. If that's real, then it's a serious issue. You should do more runs, maybe running Octane via run-jsc-benchmarks. It doesn't give a genuine Octane score, but the execution time geometric mean is very well correlated to 1/Octane_score.
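The relationship Filip describes can be illustrated directly: Octane's score is proportional to the reciprocal of execution time, so the geometric mean of per-benchmark times moves inversely with the score. A small illustrative sketch (the sample times are made up, not real benchmark data):

```javascript
// Geometric mean of per-benchmark execution times.
const geomean = (xs) =>
  Math.exp(xs.reduce((sum, x) => sum + Math.log(x), 0) / xs.length);

// If every benchmark gets uniformly 2% slower, the geometric mean of the
// times also rises by exactly 2%, which is why it tracks 1/Octane_score.
const baseline = [0.52, 6.92, 23.83]; // hypothetical times in ms
const slower = baseline.map((t) => t * 1.02);

console.log(geomean(slower) / geomean(baseline)); // ≈ 1.02
```

This is also why a uniform regression that is invisible in any single noisy subscore still shows up clearly in the geometric mean.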
(In reply to comment #37)
> This indicates a 1.2% regression. If that's real, then it's a serious
> issue. You should do more runs, maybe running Octane via
> run-jsc-benchmarks. It doesn't give a genuine Octane score, but the
> execution time geometric mean is very well correlated to 1/Octane_score.

I've been giving this a lot of runs, and my read of them is that the regressions come from enabling the flag (which only affects parsing). Startup and GC time should be close to neutral for the rest of the package, and that's roughly what the other benchmark runs I've posted are showing: the biggest differences occur when the parsing of async functions is enabled. AFAICT, landing this should be close to neutral so long as the parser flag is not enabled. I will see if we can improve parse-time at all, separately.
(In reply to comment #37)
> This indicates a 1.2% regression. If that's real, then it's a serious
> issue. You should do more runs, maybe running Octane via
> run-jsc-benchmarks. It doesn't give a genuine Octane score, but the
> execution time geometric mean is very well correlated to 1/Octane_score.

It seems that the validated results reported by Caitlin are neutral. And the performance tracking agrees with the validated results :)

https://arewefastyet.com/#machine=29&view=breakdown&suite=octane