Steps to Reproduce:
1. Load http://bokeh.pydata.org/en/dev/docs/gallery/unemployment.html
2. Move mouse over the chart.

Eventually, you'll crash:

```
(lldb) bt
* thread #14: tid = 0x71d1aa, 0x00007fff8b8485ac JavaScriptCore`JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit(unsigned int) + 620, name = 'DFG Worklist Worker Thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x18)
  * frame #0: 0x00007fff8b8485ac JavaScriptCore`JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit(unsigned int) + 620
    frame #1: 0x00007fff8b83b52b JavaScriptCore`JSC::DFG::ByteCodeParser::handleCall(int, JSC::DFG::NodeType, JSC::CallMode, unsigned int, JSC::DFG::Node*, int, int, JSC::CallLinkStatus) + 379
    frame #2: 0x00007fff8b83acad JavaScriptCore`JSC::DFG::ByteCodeParser::handleCall(int, JSC::DFG::NodeType, JSC::CallMode, unsigned int, int, int, int) + 429
    frame #3: 0x00007fff8b4d0d5a JavaScriptCore`JSC::DFG::ByteCodeParser::parseBlock(unsigned int) + 6682
    frame #4: 0x00007fff8b4cf0cb JavaScriptCore`JSC::DFG::ByteCodeParser::parseCodeBlock() + 1243
    frame #5: 0x00007fff8b84ee2a JavaScriptCore`void JSC::DFG::ByteCodeParser::inlineCall<JSC::DFG::ByteCodeParser::handleInlining(JSC::DFG::Node*, int, JSC::CallLinkStatus const&, int, JSC::VirtualRegister, JSC::VirtualRegister, unsigned int, int, unsigned int, JSC::DFG::NodeType, JSC::InlineCallFrame::Kind, unsigned long long)::$_0>(JSC::DFG::Node*, int, JSC::CallVariant, int, int, unsigned int, JSC::InlineCallFrame::Kind, JSC::DFG::ByteCodeParser::CallerLinkability, JSC::DFG::ByteCodeParser::handleInlining(JSC::DFG::Node*, int, JSC::CallLinkStatus const&, int, JSC::VirtualRegister, JSC::VirtualRegister, unsigned int, int, unsigned int, JSC::DFG::NodeType, JSC::InlineCallFrame::Kind, unsigned long long)::$_0 const&) + 2042
    frame #6: 0x00007fff8b83e3b9 JavaScriptCore`JSC::DFG::ByteCodeParser::handleInlining(JSC::DFG::Node*, int, JSC::CallLinkStatus const&, int, JSC::VirtualRegister, JSC::VirtualRegister, unsigned int, int, unsigned int, JSC::DFG::NodeType, JSC::InlineCallFrame::Kind, unsigned long long) + 10873
    frame #7: 0x00007fff8b83b78b JavaScriptCore`JSC::DFG::ByteCodeParser::handleCall(int, JSC::DFG::NodeType, JSC::InlineCallFrame::Kind, unsigned int, JSC::DFG::Node*, int, int, JSC::CallLinkStatus, unsigned long long) + 315
    frame #8: 0x00007fff8b8458d8 JavaScriptCore`JSC::DFG::ByteCodeParser::handlePutById(JSC::DFG::Node*, unsigned int, JSC::DFG::Node*, JSC::PutByIdStatus const&, bool) + 4120
    frame #9: 0x00007fff8b4d08b9 JavaScriptCore`JSC::DFG::ByteCodeParser::parseBlock(unsigned int) + 5497
    frame #10: 0x00007fff8b4cf0cb JavaScriptCore`JSC::DFG::ByteCodeParser::parseCodeBlock() + 1243
    frame #11: 0x00007fff8b4cea57 JavaScriptCore`JSC::DFG::ByteCodeParser::parse() + 263
    frame #12: 0x00007fff8b848322 JavaScriptCore`JSC::DFG::parse(JSC::DFG::Graph&) + 402
    frame #13: 0x00007fff8b985300 JavaScriptCore`JSC::DFG::Plan::compileInThreadImpl(JSC::DFG::LongLivedState&) + 272
    frame #14: 0x00007fff8b984c4b JavaScriptCore`JSC::DFG::Plan::compileInThread(JSC::DFG::LongLivedState&, JSC::DFG::ThreadData*) + 603
    frame #15: 0x00007fff8ba2a696 JavaScriptCore`JSC::DFG::Worklist::runThread(JSC::DFG::ThreadData*) + 998
    frame #16: 0x00007fff8b434332 JavaScriptCore`WTF::threadEntryPoint(void*) + 178
    frame #17: 0x00007fff8b43425f JavaScriptCore`WTF::wtfThreadEntryPoint(void*) + 15
    frame #18: 0x00007fff9e816aab libsystem_pthread.dylib`_pthread_body + 180
    frame #19: 0x00007fff9e8169f7 libsystem_pthread.dylib`_pthread_start + 286
    frame #20: 0x00007fff9e8161fd libsystem_pthread.dylib`thread_start + 13
```
I'm getting different crash traces every time.
(In reply to comment #1)
> I'm getting different crash traces every time.

Nevermind, disabling concurrent JIT produces the same crash.
Looks like it might be tail call related. It doesn't crash when disabling tail calls.
I've got a reduced repro:

```
let o = {
    set foo(x) {
        'use strict';
        return bar();
    }
};
function bar() { return 20; }

for (let i = 0; i < 100000; i++)
    o.foo = 20;
```

The problem is we blindly assume that walking up the call chain to find the first non-inlined callee will leave us at a value-producing bytecode operation. This is not true in the case of a setter.
Part of me wants to say the solution should simply be that CodeBlock::valueProfilePredictionForBytecodeOffset behaves well if it doesn't find a value profile. Right now, it assumes it finds a non-null profile.
Created attachment 293664 [details] patch
Comment on attachment 293664 [details] patch r=me
Comment on attachment 293664 [details]
patch

View in context: https://bugs.webkit.org/attachment.cgi?id=293664&action=review

> Source/JavaScriptCore/bytecode/CodeBlock.h:419
> +        return SpecFullTop;

Top means that you have a value profile and this value profile saw values of every possible type. Please use Bottom, which means that you have not seen any values.
(In reply to comment #8)
> Comment on attachment 293664 [details]
> patch
> 
> View in context:
> https://bugs.webkit.org/attachment.cgi?id=293664&action=review
> 
> > Source/JavaScriptCore/bytecode/CodeBlock.h:419
> > +        return SpecFullTop;
> 
> Top means that you have a value profile and this value profile saw values of
> every possible type.
> 
> Please use Bottom, which means that you have not seen any values.

Interesting. /Bottom/SpecNone/
(In reply to comment #9)
> (In reply to comment #8)
> > Comment on attachment 293664 [details]
> > patch
> > 
> > View in context:
> > https://bugs.webkit.org/attachment.cgi?id=293664&action=review
> > 
> > > Source/JavaScriptCore/bytecode/CodeBlock.h:419
> > > +        return SpecFullTop;
> > 
> > Top means that you have a value profile and this value profile saw values of
> > every possible type.
> > 
> > Please use Bottom, which means that you have not seen any values.
> 
> Interesting. /Bottom/SpecNone/

Right! When you ask for a predicted type, you're asking what types we have seen. So, anytime we return Top for the predicted type, that's just wrong.

Also, SpecFullTop is the wrong Top to use anyway - that includes bytecode-only types like Empty and DFG-only types like Int52. No value profile would see that. So, probably SpecHeapTop would be the thing that you would want if Top was right, which it isn't.
I just want to make sure y'all understand how important this is. The DFG uses predicted types to bootstrap forward flow. Top is like a deadly virus for forward flow: because flow fixpoints merge types, anything that Top flows into will also become Top. So, even one dose of Top has a chance to infect everything with Top.

We don't usually see Top everywhere because programmers don't program that way. Even very polymorphic variables have some bound, and that bound is almost never Top.

When a Node predicts its outcome to be Top, this means that every transitive user of that Node will have to deactivate all of their optimizations.

An inline cache, even if it knew from its own profiling that it always took a fast path, will now become pessimistic because it will think it has to handle non-cell cases.

All math will go slow. All math will use snippets instead of fast code.

Basically all of our other optimizations will deactivate because FixupPhase will be forced to use Untyped for all transitive uses of Top.

So, our profiling code should never pretend to have seen a type if there is no evidence that it has actually seen it. Type prediction should not be sound in the traditional way, where in cases of doubt you overestimate the set of types. Type prediction should be sound in the *opposite* way: if in doubt it must underestimate. It's not a proof that these are all the types you will ever see. Quite the opposite: it's a statement about what types we have definitely seen in the past. It's OK to say that you don't think you have seen some type, if you have doubt about whether or not you have seen it. If you say that you have seen it, then you better be able to prove that the program really has seen that type at least once. So, it's never correct to say that we have definitely seen Top in the past. It's always correct to say that we have definitely seen Bottom (i.e. SpecEmpty) in the past.
(In reply to comment #11)
> I just want to make sure y'all understand how important this is. The DFG
> uses predicted types to bootstrap forward flow. Top is like a deadly virus
> for forward flow: because flow fixpoints merge types, anything that Top
> flows into will also become Top. So, even one dose of Top has a chance to
> infect everything with Top.
> 
> We don't usually see Top everywhere because programmers don't program that
> way. Even very polymorphic variables have some bound, and that bound is
> almost never Top.
> 
> When a Node predicts its outcome to be Top, this means that every transitive
> user of that Node will have to deactivate all of their optimizations.
> 
> An inline cache, even if it knew from its own profiling that it always took
> a fast path, will now become pessimistic because it will think it has to
> handle non-cell cases.
> 
> All math will go slow. All math will use snippets instead of fast code.
> 
> Basically all of our other optimizations will deactivate because FixupPhase
> will be forced to use Untyped for all transitive uses of Top.
> 
> So, our profiling code should never pretend to have seen a type if there is
> no evidence that it has actually seen it. Type prediction should not be
> sound in the traditional way, where in cases of doubt you overestimate the
> set of types. Type prediction should be sound in the *opposite* way: if in
> doubt it must underestimate. It's not a proof that these are all the types
> you will ever see. Quite the opposite: it's a statement about what types we
> have definitely seen in the past. It's OK to say that you don't think you
> have seen some type, if you have doubt about whether or not you have seen
> it. If you say that you have seen it, then you better be able to prove that
> the program really has seen that type at least once. So, it's never correct
> to say that we have definitely seen Top in the past. It's always correct to
> say that we have definitely seen Bottom (i.e. SpecEmpty) in the past.

I feel like I have seen this mistake happen many times. Maybe we should add this comment (or something like it) in SpeculatedType.h
Just FYI I applied a fix to Bokeh, and will be pushing a new dev build soon. The original link in the "Steps to Reproduce" will be updated with a working version. It doesn't seem like you'd need the original anymore since there is a minimal test case to repro in a comment, but if the original file is needed for any reason please let me know and I can provide it.
Also: INCREDIBLE THANKS for helping to sort this out.
Created attachment 293690 [details] patch New patch that uses Bottom both in the CodeBlock function and in that function's caller inside the DFG.
Comment on attachment 293690 [details] patch r=me
Comment on attachment 293690 [details] patch Tests are breaking because of this change.
Created attachment 293707 [details] patch A patch that now passes JSC tests.
Comment on attachment 293707 [details]
patch

View in context: https://bugs.webkit.org/attachment.cgi?id=293707&action=review

r=me

> Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp:896
>             prediction = profiledBlock->valueProfilePredictionForBytecodeOffset(locker, bytecodeIndex);
> +            if (prediction != SpecNone)
> +                return prediction;
> +            return SpecFullTop;

The old code does not convert SpecNone to SpecFullTop here. I think you can just return the prediction here.
Created attachment 293722 [details] patch for landing
Comment on attachment 293722 [details] patch for landing Clearing flags on attachment: 293722 Committed r208326: <http://trac.webkit.org/changeset/208326>
All reviewed patches have been landed. Closing bug.
Committed r208341: <http://trac.webkit.org/changeset/208341>
(In reply to comment #23) > Committed r208341: <http://trac.webkit.org/changeset/208341> My bad. git rebase messed up the changelog so my landing used the wrong bug... :(
*** Bug 164327 has been marked as a duplicate of this bug. ***
<rdar://problem/29054076>