Currently, the DFG Fixup Phase also checks for a bad exit before it fixes up some node operands to be Untyped. We should not have to do this. If the baseline profiling data tells us to expect Untyped operands, then we should trust it and just do so.
Created attachment 268791 [details]
proposed patch.

Let's run some benchmark numbers first.
Created attachment 268810 [details]
x86_64 benchmark result.

Created attachment 268811 [details]
x86 benchmark result.
The 64-bit x86_64 bench results seem to be a wash, but the 32-bit x86 bench results have some notable (and repeatable) differences:

                                           base32                    new32
   JSRegress:
      int-or-other-div-then-get-by-val    11.5779+-1.3119  ^        8.8859+-0.7558  ^  definitely 1.3029x faster
      int-or-other-mul-then-get-by-val     4.9188+-0.6033  !        7.3859+-0.2823  !  definitely 1.5016x slower
      string-repeat-arith                106.8525+-3.1640  ^       86.5845+-3.4652  ^  definitely 1.2341x faster

The 2 speed-ups don't manifest on 64-bit. The int-or-other-mul-then-get-by-val slow-down, however, does seem to be repeatable and manifests on both 64-bit and 32-bit on re-test. The fact that it didn't show up in the x86_64 bench results could be due to noise wiping it out. At minimum, I should investigate why int-or-other-mul-then-get-by-val is slowing down so much.
After rebasing to ToT r196092, which uses B3, x86_64 is now showing regressions with this patch:

                                           base64                    new64
   JSRegress:
      arguments-out-of-bounds             11.0405+-0.3241  !       13.8526+-0.4265  !  definitely 1.2547x slower
      int-or-other-mul-then-get-by-val     4.1402+-0.0788  !        6.7275+-0.5579  !  definitely 1.6249x slower
      int-or-other-sub-then-get-by-val     5.0074+-0.1249  !        7.3338+-0.2195  !  definitely 1.4646x slower
      int-or-other-sub                     3.8862+-0.1220  !        4.9586+-0.0986  !  definitely 1.2759x slower
      string-out-of-bounds                10.8937+-0.3605  !       13.6555+-0.4237  !  definitely 1.2535x slower