WebKit Bugzilla
Attachment 340584 Details for Bug 185730: [JSC] Use AssemblyHelpers' type checking functions as much as possible
Description: Patch
Filename: bug-185730-20180518002641.patch (text/plain), 105.75 KB
Creator: Yusuke Suzuki
Created: 2018-05-17 08:26:42 PDT
Flags: patch, obsolete
>Subversion Revision: 231891 >diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog >index ff66a34cc9ff5b46380b0e06eebdc5e2eef3bffd..0d296418eba2c72b4f45d3fd1c4a00a05850f7f7 100644 >--- a/Source/JavaScriptCore/ChangeLog >+++ b/Source/JavaScriptCore/ChangeLog >@@ -1,3 +1,163 @@ >+2018-05-17 Yusuke Suzuki <utatane.tea@gmail.com> >+ >+ [JSC] Use AssemblyHelpers' type checking functions as much as possible >+ https://bugs.webkit.org/show_bug.cgi?id=185730 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ Use AssemblyHelpers' type checking functions as much as possible. They hide the complex >+ bit and register operations behind JSValue's type tagging, which makes it easy to tweak the >+ type tagging representation later, since the code is collected in AssemblyHelpers. The named >+ functions are also more readable than raw branching operations. >+ >+ We also remove unnecessary branching functions from JIT / JSInterfaceJIT; some of them >+ duplicate AssemblyHelpers' versions. >+ >+ We add several new type checking functions to AssemblyHelpers. Moreover, we add branchIfXXX(GPRReg) >+ functions even for the 32bit environment, where they take the tag register. These semantics are >+ aligned with the existing branchIfCell / branchIfNotCell. 
>+ >+ * bytecode/AccessCase.cpp: >+ (JSC::AccessCase::generateWithGuard): >+ * dfg/DFGSpeculativeJIT.cpp: >+ (JSC::DFG::SpeculativeJIT::compileValueToInt32): >+ (JSC::DFG::SpeculativeJIT::compileDoubleRep): >+ (JSC::DFG::SpeculativeJIT::compileInstanceOfForObject): >+ (JSC::DFG::SpeculativeJIT::compileSpread): >+ (JSC::DFG::SpeculativeJIT::speculateNumber): >+ (JSC::DFG::SpeculativeJIT::speculateMisc): >+ (JSC::DFG::SpeculativeJIT::compileExtractValueFromWeakMapGet): >+ (JSC::DFG::SpeculativeJIT::compileGetPrototypeOf): >+ (JSC::DFG::SpeculativeJIT::compileHasIndexedProperty): >+ * dfg/DFGSpeculativeJIT32_64.cpp: >+ (JSC::DFG::SpeculativeJIT::emitCall): >+ (JSC::DFG::SpeculativeJIT::fillSpeculateInt32Internal): >+ (JSC::DFG::SpeculativeJIT::fillSpeculateBoolean): >+ (JSC::DFG::SpeculativeJIT::compile): >+ * dfg/DFGSpeculativeJIT64.cpp: >+ (JSC::DFG::SpeculativeJIT::nonSpeculativePeepholeStrictEq): >+ (JSC::DFG::SpeculativeJIT::nonSpeculativeNonPeepholeStrictEq): >+ (JSC::DFG::SpeculativeJIT::emitCall): >+ (JSC::DFG::SpeculativeJIT::fillSpeculateInt32Internal): >+ (JSC::DFG::SpeculativeJIT::compile): >+ (JSC::DFG::SpeculativeJIT::convertAnyInt): >+ * ftl/FTLLowerDFGToB3.cpp: >+ (JSC::FTL::DFG::LowerDFGToB3::compileAssertNotEmpty): >+ * jit/AssemblyHelpers.h: >+ (JSC::AssemblyHelpers::branchIfInt32): >+ (JSC::AssemblyHelpers::branchIfNotInt32): >+ (JSC::AssemblyHelpers::branchIfNumber): >+ (JSC::AssemblyHelpers::branchIfNotNumber): >+ (JSC::AssemblyHelpers::branchIfBoolean): >+ (JSC::AssemblyHelpers::branchIfNotBoolean): >+ (JSC::AssemblyHelpers::branchIfEmpty): >+ (JSC::AssemblyHelpers::branchIfNotEmpty): >+ (JSC::AssemblyHelpers::branchIfUndefined): >+ (JSC::AssemblyHelpers::branchIfNotUndefined): >+ (JSC::AssemblyHelpers::branchIfNull): >+ (JSC::AssemblyHelpers::branchIfNotNull): >+ * jit/JIT.h: >+ * jit/JITArithmetic.cpp: >+ (JSC::JIT::emit_compareAndJump): >+ (JSC::JIT::emit_compareAndJumpSlow): >+ * jit/JITArithmetic32_64.cpp: >+ 
(JSC::JIT::emit_compareAndJump): >+ (JSC::JIT::emit_op_unsigned): >+ (JSC::JIT::emit_op_inc): >+ (JSC::JIT::emit_op_dec): >+ (JSC::JIT::emitBinaryDoubleOp): >+ (JSC::JIT::emit_op_mod): >+ * jit/JITCall.cpp: >+ (JSC::JIT::compileCallEval): >+ (JSC::JIT::compileOpCall): >+ * jit/JITCall32_64.cpp: >+ (JSC::JIT::compileCallEval): >+ (JSC::JIT::compileOpCall): >+ * jit/JITInlines.h: >+ (JSC::JIT::emitJumpSlowCaseIfNotJSCell): >+ (JSC::JIT::emitJumpIfBothJSCells): >+ (JSC::JIT::emitJumpSlowCaseIfJSCell): >+ (JSC::JIT::emitJumpIfNotInt): >+ (JSC::JIT::emitJumpSlowCaseIfNotInt): >+ (JSC::JIT::emitJumpSlowCaseIfNotNumber): >+ (JSC::JIT::emitJumpIfCellObject): Deleted. >+ (JSC::JIT::emitJumpIfCellNotObject): Deleted. >+ (JSC::JIT::emitJumpIfJSCell): Deleted. >+ (JSC::JIT::emitJumpIfInt): Deleted. >+ * jit/JITOpcodes.cpp: >+ (JSC::JIT::emit_op_instanceof): >+ (JSC::JIT::emit_op_is_undefined): >+ (JSC::JIT::emit_op_is_cell_with_type): >+ (JSC::JIT::emit_op_is_object): >+ (JSC::JIT::emit_op_to_primitive): >+ (JSC::JIT::emit_op_jeq_null): >+ (JSC::JIT::emit_op_jneq_null): >+ (JSC::JIT::compileOpStrictEq): >+ (JSC::JIT::compileOpStrictEqJump): >+ (JSC::JIT::emit_op_to_number): >+ (JSC::JIT::emit_op_to_string): >+ (JSC::JIT::emit_op_to_object): >+ (JSC::JIT::emit_op_eq_null): >+ (JSC::JIT::emit_op_neq_null): >+ (JSC::JIT::emit_op_check_tdz): >+ (JSC::JIT::emitNewFuncExprCommon): >+ (JSC::JIT::emit_op_profile_type): >+ * jit/JITOpcodes32_64.cpp: >+ (JSC::JIT::emit_op_instanceof): >+ (JSC::JIT::emit_op_is_undefined): >+ (JSC::JIT::emit_op_is_cell_with_type): >+ (JSC::JIT::emit_op_is_object): >+ (JSC::JIT::emit_op_to_primitive): >+ (JSC::JIT::emit_op_not): >+ (JSC::JIT::emit_op_jeq_null): >+ (JSC::JIT::emit_op_jneq_null): >+ (JSC::JIT::emit_op_jneq_ptr): >+ (JSC::JIT::emit_op_eq): >+ (JSC::JIT::emit_op_jeq): >+ (JSC::JIT::emit_op_neq): >+ (JSC::JIT::emit_op_jneq): >+ (JSC::JIT::compileOpStrictEq): >+ (JSC::JIT::compileOpStrictEqJump): >+ (JSC::JIT::emit_op_eq_null): >+ 
(JSC::JIT::emit_op_neq_null): >+ (JSC::JIT::emit_op_to_number): >+ (JSC::JIT::emit_op_to_string): >+ (JSC::JIT::emit_op_to_object): >+ (JSC::JIT::emit_op_to_this): >+ (JSC::JIT::emit_op_check_tdz): >+ (JSC::JIT::emit_op_profile_type): >+ * jit/JITPropertyAccess.cpp: >+ (JSC::JIT::emit_op_get_by_val): >+ (JSC::JIT::emitGetByValWithCachedId): >+ (JSC::JIT::emitGenericContiguousPutByVal): >+ (JSC::JIT::emitPutByValWithCachedId): >+ (JSC::JIT::emit_op_get_from_scope): >+ (JSC::JIT::emit_op_put_to_scope): >+ (JSC::JIT::emitWriteBarrier): >+ (JSC::JIT::emitIntTypedArrayPutByVal): >+ (JSC::JIT::emitFloatTypedArrayPutByVal): >+ * jit/JITPropertyAccess32_64.cpp: >+ (JSC::JIT::emit_op_get_by_val): >+ (JSC::JIT::emitContiguousLoad): >+ (JSC::JIT::emitArrayStorageLoad): >+ (JSC::JIT::emitGetByValWithCachedId): >+ (JSC::JIT::emitGenericContiguousPutByVal): >+ (JSC::JIT::emitPutByValWithCachedId): >+ (JSC::JIT::emit_op_get_from_scope): >+ (JSC::JIT::emit_op_put_to_scope): >+ * jit/JSInterfaceJIT.h: >+ (JSC::JSInterfaceJIT::emitLoadJSCell): >+ (JSC::JSInterfaceJIT::emitLoadInt32): >+ (JSC::JSInterfaceJIT::emitLoadDouble): >+ (JSC::JSInterfaceJIT::emitJumpIfNumber): Deleted. >+ (JSC::JSInterfaceJIT::emitJumpIfNotNumber): Deleted. >+ (JSC::JSInterfaceJIT::emitJumpIfNotType): Deleted. >+ * jit/Repatch.cpp: >+ (JSC::linkPolymorphicCall): >+ * jit/ThunkGenerators.cpp: >+ (JSC::virtualThunkFor): >+ (JSC::absThunkGenerator): >+ > 2018-05-16 Saam Barati <sbarati@apple.com> > > UnlinkedFunctionExecutable doesn't need a parent source override field since it's only used for default class constructors >diff --git a/Source/JavaScriptCore/bytecode/AccessCase.cpp b/Source/JavaScriptCore/bytecode/AccessCase.cpp >index 2cb8376f9797169bbd2a16b13f9e6df5d8756bee..49c8a1624113b36e99c55b350cb6650b58649606 100644 >--- a/Source/JavaScriptCore/bytecode/AccessCase.cpp >+++ b/Source/JavaScriptCore/bytecode/AccessCase.cpp >@@ -448,7 +448,7 @@ void AccessCase::generateWithGuard( > // has the property. 
> #if USE(JSVALUE64) > jit.load64(MacroAssembler::Address(baseForAccessGPR, offsetRelativeToBase(knownPolyProtoOffset)), baseForAccessGPR); >- fallThrough.append(jit.branch64(CCallHelpers::NotEqual, baseForAccessGPR, CCallHelpers::TrustedImm64(ValueNull))); >+ fallThrough.append(jit.branchIfNotNull(baseForAccessGPR)); > #else > jit.load32(MacroAssembler::Address(baseForAccessGPR, offsetRelativeToBase(knownPolyProtoOffset) + PayloadOffset), baseForAccessGPR); > fallThrough.append(jit.branchTestPtr(CCallHelpers::NonZero, baseForAccessGPR)); >@@ -463,7 +463,7 @@ void AccessCase::generateWithGuard( > RELEASE_ASSERT(structure->isObject()); // Primitives must have a stored prototype. We use prototypeForLookup for them. > #if USE(JSVALUE64) > jit.load64(MacroAssembler::Address(baseForAccessGPR, offsetRelativeToBase(knownPolyProtoOffset)), baseForAccessGPR); >- fallThrough.append(jit.branch64(CCallHelpers::Equal, baseForAccessGPR, CCallHelpers::TrustedImm64(ValueNull))); >+ fallThrough.append(jit.branchIfNull(baseForAccessGPR)); > #else > jit.load32(MacroAssembler::Address(baseForAccessGPR, offsetRelativeToBase(knownPolyProtoOffset) + PayloadOffset), baseForAccessGPR); > fallThrough.append(jit.branchTestPtr(CCallHelpers::Zero, baseForAccessGPR)); >diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp >index b901318663edacb2c65de8f61f9ed0cce7f1af38..a2ca344b5dabb6aa347dbffdb51807c3b7d0472f 100644 >--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp >+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp >@@ -2375,16 +2375,15 @@ void SpeculativeJIT::compileValueToInt32(Node* node) > FPRTemporary tempFpr(this); > FPRReg fpr = tempFpr.fpr(); > >- JITCompiler::Jump isInteger = m_jit.branch64(MacroAssembler::AboveOrEqual, gpr, GPRInfo::tagTypeNumberRegister); >+ JITCompiler::Jump isInteger = m_jit.branchIfInt32(gpr); > JITCompiler::JumpList converted; > > if (node->child1().useKind() == NumberUse) { > DFG_TYPE_CHECK( > 
JSValueRegs(gpr), node->child1(), SpecBytecodeNumber, >- m_jit.branchTest64( >- MacroAssembler::Zero, gpr, GPRInfo::tagTypeNumberRegister)); >+ m_jit.branchIfNotNumber(gpr)); > } else { >- JITCompiler::Jump isNumber = m_jit.branchTest64(MacroAssembler::NonZero, gpr, GPRInfo::tagTypeNumberRegister); >+ JITCompiler::Jump isNumber = m_jit.branchIfNumber(gpr); > > DFG_TYPE_CHECK( > JSValueRegs(gpr), node->child1(), ~SpecCellCheck, m_jit.branchIfCell(JSValueRegs(gpr))); >@@ -2429,7 +2428,7 @@ void SpeculativeJIT::compileValueToInt32(Node* node) > FPRReg fpr = tempFpr.fpr(); > FPRTemporary scratch(this); > >- JITCompiler::Jump isInteger = m_jit.branch32(MacroAssembler::Equal, tagGPR, TrustedImm32(JSValue::Int32Tag)); >+ JITCompiler::Jump isInteger = m_jit.branchIfInt32(tagGPR); > > if (node->child1().useKind() == NumberUse) { > DFG_TYPE_CHECK( >@@ -2445,7 +2444,7 @@ void SpeculativeJIT::compileValueToInt32(Node* node) > m_jit.branchIfCell(op1.jsValueRegs())); > > // It's not a cell: so true turns into 1 and all else turns into 0. 
>- JITCompiler::Jump isBoolean = m_jit.branch32(JITCompiler::Equal, tagGPR, TrustedImm32(JSValue::BooleanTag)); >+ JITCompiler::Jump isBoolean = m_jit.branchIfBoolean(tagGPR, InvalidGPRReg); > m_jit.move(TrustedImm32(0), resultGpr); > converted.append(m_jit.jump()); > >@@ -2602,17 +2601,16 @@ void SpeculativeJIT::compileDoubleRep(Node* node) > FPRReg resultFPR = result.fpr(); > JITCompiler::JumpList done; > >- JITCompiler::Jump isInteger = m_jit.branch64( >- MacroAssembler::AboveOrEqual, op1GPR, GPRInfo::tagTypeNumberRegister); >+ JITCompiler::Jump isInteger = m_jit.branchIfInt32(op1GPR); > > if (node->child1().useKind() == NotCellUse) { >- JITCompiler::Jump isNumber = m_jit.branchTest64(MacroAssembler::NonZero, op1GPR, GPRInfo::tagTypeNumberRegister); >- JITCompiler::Jump isUndefined = m_jit.branch64(JITCompiler::Equal, op1GPR, TrustedImm64(ValueUndefined)); >+ JITCompiler::Jump isNumber = m_jit.branchIfNumber(op1GPR); >+ JITCompiler::Jump isUndefined = m_jit.branchIfUndefined(op1GPR); > > static const double zero = 0; > m_jit.loadDouble(TrustedImmPtr(&zero), resultFPR); > >- JITCompiler::Jump isNull = m_jit.branch64(JITCompiler::Equal, op1GPR, TrustedImm64(ValueNull)); >+ JITCompiler::Jump isNull = m_jit.branchIfNull(op1GPR); > done.append(isNull); > > DFG_TYPE_CHECK(JSValueRegs(op1GPR), node->child1(), ~SpecCellCheck, >@@ -2633,7 +2631,7 @@ void SpeculativeJIT::compileDoubleRep(Node* node) > } else if (needsTypeCheck(node->child1(), SpecBytecodeNumber)) { > typeCheck( > JSValueRegs(op1GPR), node->child1(), SpecBytecodeNumber, >- m_jit.branchTest64(MacroAssembler::Zero, op1GPR, GPRInfo::tagTypeNumberRegister)); >+ m_jit.branchIfNotNumber(op1GPR)); > } > > unboxDouble(op1GPR, tempGPR, resultFPR); >@@ -2651,20 +2649,19 @@ void SpeculativeJIT::compileDoubleRep(Node* node) > FPRReg resultFPR = result.fpr(); > JITCompiler::JumpList done; > >- JITCompiler::Jump isInteger = m_jit.branch32( >- MacroAssembler::Equal, op1TagGPR, TrustedImm32(JSValue::Int32Tag)); >+ 
JITCompiler::Jump isInteger = m_jit.branchIfInt32(op1TagGPR); > > if (node->child1().useKind() == NotCellUse) { > JITCompiler::Jump isNumber = m_jit.branch32(JITCompiler::Below, op1TagGPR, JITCompiler::TrustedImm32(JSValue::LowestTag + 1)); >- JITCompiler::Jump isUndefined = m_jit.branch32(JITCompiler::Equal, op1TagGPR, TrustedImm32(JSValue::UndefinedTag)); >+ JITCompiler::Jump isUndefined = m_jit.branchIfUndefined(op1TagGPR); > > static const double zero = 0; > m_jit.loadDouble(TrustedImmPtr(&zero), resultFPR); > >- JITCompiler::Jump isNull = m_jit.branch32(JITCompiler::Equal, op1TagGPR, TrustedImm32(JSValue::NullTag)); >+ JITCompiler::Jump isNull = m_jit.branchIfNull(op1TagGPR); > done.append(isNull); > >- DFG_TYPE_CHECK(JSValueRegs(op1TagGPR, op1PayloadGPR), node->child1(), ~SpecCell, m_jit.branch32(JITCompiler::NotEqual, op1TagGPR, TrustedImm32(JSValue::BooleanTag))); >+ DFG_TYPE_CHECK(JSValueRegs(op1TagGPR, op1PayloadGPR), node->child1(), ~SpecCell, m_jit.branchIfNotBoolean(op1TagGPR, InvalidGPRReg)); > > JITCompiler::Jump isFalse = m_jit.branchTest32(JITCompiler::Zero, op1PayloadGPR, TrustedImm32(1)); > static const double one = 1; >@@ -2679,6 +2676,7 @@ void SpeculativeJIT::compileDoubleRep(Node* node) > > isNumber.link(&m_jit); > } else if (needsTypeCheck(node->child1(), SpecBytecodeNumber)) { >+ // This check fails with Int32Tag, but it is OK since Int32 case is already excluded. 
> typeCheck( > JSValueRegs(op1TagGPR, op1PayloadGPR), node->child1(), SpecBytecodeNumber, > m_jit.branch32(MacroAssembler::AboveOrEqual, op1TagGPR, TrustedImm32(JSValue::LowestTag))); >@@ -3334,14 +3332,14 @@ void SpeculativeJIT::compileInstanceOfForObject(Node*, GPRReg valueReg, GPRReg p > m_jit.emitLoadStructure(*m_jit.vm(), scratchReg, scratch3Reg, scratch2Reg); > #if USE(JSVALUE64) > m_jit.load64(MacroAssembler::Address(scratch3Reg, Structure::prototypeOffset()), scratch3Reg); >- auto hasMonoProto = m_jit.branchTest64(JITCompiler::NonZero, scratch3Reg); >+ auto hasMonoProto = m_jit.branchIfNotEmpty(scratch3Reg); > m_jit.load64(JITCompiler::Address(scratchReg, offsetRelativeToBase(knownPolyProtoOffset)), scratch3Reg); > hasMonoProto.link(&m_jit); > m_jit.move(scratch3Reg, scratchReg); > #else > m_jit.load32(MacroAssembler::Address(scratch3Reg, Structure::prototypeOffset() + TagOffset), scratch2Reg); > m_jit.load32(MacroAssembler::Address(scratch3Reg, Structure::prototypeOffset() + PayloadOffset), scratch3Reg); >- auto hasMonoProto = m_jit.branch32(CCallHelpers::NotEqual, scratch2Reg, TrustedImm32(JSValue::EmptyValueTag)); >+ auto hasMonoProto = m_jit.branchIfNotEmpty(scratch2Reg); > m_jit.load32(JITCompiler::Address(scratchReg, offsetRelativeToBase(knownPolyProtoOffset) + PayloadOffset), scratch3Reg); > hasMonoProto.link(&m_jit); > m_jit.move(scratch3Reg, scratchReg); >@@ -7660,7 +7658,7 @@ void SpeculativeJIT::compileSpread(Node* node) > auto loopStart = m_jit.label(); > m_jit.sub32(TrustedImm32(1), lengthGPR); > m_jit.load64(MacroAssembler::BaseIndex(scratch1GPR, lengthGPR, MacroAssembler::TimesEight), scratch2GPR); >- auto notEmpty = m_jit.branchTest64(MacroAssembler::NonZero, scratch2GPR); >+ auto notEmpty = m_jit.branchIfNotEmpty(scratch2GPR); > m_jit.move(TrustedImm64(JSValue::encode(jsUndefined())), scratch2GPR); > notEmpty.link(&m_jit); > m_jit.store64(scratch2GPR, MacroAssembler::BaseIndex(resultGPR, lengthGPR, MacroAssembler::TimesEight, 
JSFixedArray::offsetOfData())); >@@ -9699,12 +9697,13 @@ void SpeculativeJIT::speculateNumber(Edge edge) > GPRReg gpr = value.gpr(); > typeCheck( > JSValueRegs(gpr), edge, SpecBytecodeNumber, >- m_jit.branchTest64(MacroAssembler::Zero, gpr, GPRInfo::tagTypeNumberRegister)); >+ m_jit.branchIfNotNumber(gpr)); > #else >+ static_assert(JSValue::Int32Tag >= JSValue::LowestTag, "Int32Tag is included in >= JSValue::LowestTag range."); > GPRReg tagGPR = value.tagGPR(); > DFG_TYPE_CHECK( > value.jsValueRegs(), edge, ~SpecInt32Only, >- m_jit.branch32(MacroAssembler::Equal, tagGPR, TrustedImm32(JSValue::Int32Tag))); >+ m_jit.branchIfInt32(tagGPR)); > DFG_TYPE_CHECK( > value.jsValueRegs(), edge, SpecBytecodeNumber, > m_jit.branch32(MacroAssembler::AboveOrEqual, tagGPR, TrustedImm32(JSValue::LowestTag))); >@@ -10193,9 +10192,10 @@ void SpeculativeJIT::speculateMisc(Edge edge, JSValueRegs regs) > regs, edge, SpecMisc, > m_jit.branch64(MacroAssembler::Above, regs.gpr(), MacroAssembler::TrustedImm64(TagBitTypeOther | TagBitBool | TagBitUndefined))); > #else >+ static_assert(JSValue::Int32Tag >= JSValue::UndefinedTag, "Int32Tag is included in >= JSValue::UndefinedTag range."); > DFG_TYPE_CHECK( > regs, edge, ~SpecInt32Only, >- m_jit.branch32(MacroAssembler::Equal, regs.tagGPR(), MacroAssembler::TrustedImm32(JSValue::Int32Tag))); >+ m_jit.branchIfInt32(regs.tagGPR())); > DFG_TYPE_CHECK( > regs, edge, SpecMisc, > m_jit.branch32(MacroAssembler::Below, regs.tagGPR(), MacroAssembler::TrustedImm32(JSValue::UndefinedTag))); >@@ -11705,7 +11705,7 @@ void SpeculativeJIT::compileExtractValueFromWeakMapGet(Node* node) > m_jit.moveValue(jsUndefined(), resultRegs); > done.link(&m_jit); > #else >- auto isEmpty = m_jit.branch32(JITCompiler::Equal, valueRegs.tagGPR(), TrustedImm32(JSValue::EmptyValueTag)); >+ auto isEmpty = m_jit.branchIfEmpty(valueRegs.tagGPR()); > m_jit.moveValueRegs(valueRegs, resultRegs); > auto done = m_jit.jump(); > >@@ -12604,14 +12604,14 @@ void 
SpeculativeJIT::compileGetPrototypeOf(Node* node) > > #if USE(JSVALUE64) > m_jit.load64(MacroAssembler::Address(tempGPR, Structure::prototypeOffset()), tempGPR); >- auto hasMonoProto = m_jit.branchTest64(JITCompiler::NonZero, tempGPR); >+ auto hasMonoProto = m_jit.branchIfNotEmpty(tempGPR); > m_jit.load64(JITCompiler::Address(objectGPR, offsetRelativeToBase(knownPolyProtoOffset)), tempGPR); > hasMonoProto.link(&m_jit); > jsValueResult(tempGPR, node); > #else > m_jit.load32(MacroAssembler::Address(tempGPR, Structure::prototypeOffset() + TagOffset), temp2GPR); > m_jit.load32(MacroAssembler::Address(tempGPR, Structure::prototypeOffset() + PayloadOffset), tempGPR); >- auto hasMonoProto = m_jit.branch32(CCallHelpers::NotEqual, temp2GPR, TrustedImm32(JSValue::EmptyValueTag)); >+ auto hasMonoProto = m_jit.branchIfNotEmpty(temp2GPR); > m_jit.load32(JITCompiler::Address(objectGPR, offsetRelativeToBase(knownPolyProtoOffset) + TagOffset), temp2GPR); > m_jit.load32(JITCompiler::Address(objectGPR, offsetRelativeToBase(knownPolyProtoOffset) + PayloadOffset), tempGPR); > hasMonoProto.link(&m_jit); >@@ -12796,10 +12796,10 @@ void SpeculativeJIT::compileHasIndexedProperty(Node* node) > > #if USE(JSVALUE64) > m_jit.load64(MacroAssembler::BaseIndex(storageGPR, indexGPR, MacroAssembler::TimesEight), scratchGPR); >- slowCases.append(m_jit.branchTest64(MacroAssembler::Zero, scratchGPR)); >+ slowCases.append(m_jit.branchIfEmpty(scratchGPR)); > #else > m_jit.load32(MacroAssembler::BaseIndex(storageGPR, indexGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)), scratchGPR); >- slowCases.append(m_jit.branch32(MacroAssembler::Equal, scratchGPR, TrustedImm32(JSValue::EmptyValueTag))); >+ slowCases.append(m_jit.branchIfEmpty(scratchGPR)); > #endif > m_jit.move(TrustedImm32(1), resultGPR); > break; >@@ -12838,10 +12838,10 @@ void SpeculativeJIT::compileHasIndexedProperty(Node* node) > > #if USE(JSVALUE64) > m_jit.load64(MacroAssembler::BaseIndex(storageGPR, indexGPR, 
MacroAssembler::TimesEight, ArrayStorage::vectorOffset()), scratchGPR); >- slowCases.append(m_jit.branchTest64(MacroAssembler::Zero, scratchGPR)); >+ slowCases.append(m_jit.branchIfEmpty(scratchGPR)); > #else > m_jit.load32(MacroAssembler::BaseIndex(storageGPR, indexGPR, MacroAssembler::TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), scratchGPR); >- slowCases.append(m_jit.branch32(MacroAssembler::Equal, scratchGPR, TrustedImm32(JSValue::EmptyValueTag))); >+ slowCases.append(m_jit.branchIfEmpty(scratchGPR)); > #endif > m_jit.move(TrustedImm32(1), resultGPR); > break; >diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp >index c9fc1570733e7799a663cbe7fd47323d679a3d3c..9f131d914b1e5002b5466c174a7878e6a67534e2 100644 >--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp >+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp >@@ -809,7 +809,7 @@ void SpeculativeJIT::emitCall(Node* node) > prepareForExternalCall(); > m_jit.appendCall(operationCallEval); > m_jit.exceptionCheck(); >- JITCompiler::Jump done = m_jit.branch32(JITCompiler::NotEqual, GPRInfo::returnValueGPR2, TrustedImm32(JSValue::EmptyValueTag)); >+ JITCompiler::Jump done = m_jit.branchIfNotEmpty(GPRInfo::returnValueGPR2); > > // This is the part where we meant to make a normal call. Oops. 
> m_jit.addPtr(TrustedImm32(requiredBytes), JITCompiler::stackPointerRegister); >@@ -987,7 +987,7 @@ GPRReg SpeculativeJIT::fillSpeculateInt32Internal(Edge edge, DataFormat& returnF > m_gprs.lock(tagGPR); > m_gprs.lock(payloadGPR); > if (type & ~SpecInt32Only) >- speculationCheck(BadType, JSValueRegs(tagGPR, payloadGPR), edge, m_jit.branch32(MacroAssembler::NotEqual, tagGPR, TrustedImm32(JSValue::Int32Tag))); >+ speculationCheck(BadType, JSValueRegs(tagGPR, payloadGPR), edge, m_jit.branchIfNotInt32(tagGPR)); > m_gprs.unlock(tagGPR); > m_gprs.release(tagGPR); > m_gprs.release(payloadGPR); >@@ -1197,7 +1197,7 @@ GPRReg SpeculativeJIT::fillSpeculateBoolean(Edge edge) > m_gprs.lock(tagGPR); > m_gprs.lock(payloadGPR); > if (type & ~SpecBoolean) >- speculationCheck(BadType, JSValueRegs(tagGPR, payloadGPR), edge, m_jit.branch32(MacroAssembler::NotEqual, tagGPR, TrustedImm32(JSValue::BooleanTag))); >+ speculationCheck(BadType, JSValueRegs(tagGPR, payloadGPR), edge, m_jit.branchIfNotBoolean(tagGPR, InvalidGPRReg)); > > m_gprs.unlock(tagGPR); > m_gprs.release(tagGPR); >@@ -2272,18 +2272,14 @@ void SpeculativeJIT::compile(Node* node) > storageReg, propertyReg, MacroAssembler::TimesEight, PayloadOffset), > resultPayload.gpr()); > if (node->arrayMode().isSaneChain()) { >- JITCompiler::Jump notHole = m_jit.branch32( >- MacroAssembler::NotEqual, resultTag.gpr(), >- TrustedImm32(JSValue::EmptyValueTag)); >+ JITCompiler::Jump notHole = m_jit.branchIfNotEmpty(resultTag.gpr()); > m_jit.move(TrustedImm32(JSValue::UndefinedTag), resultTag.gpr()); > m_jit.move(TrustedImm32(0), resultPayload.gpr()); > notHole.link(&m_jit); > } else { > speculationCheck( > LoadFromHole, JSValueRegs(), 0, >- m_jit.branch32( >- MacroAssembler::Equal, resultTag.gpr(), >- TrustedImm32(JSValue::EmptyValueTag))); >+ m_jit.branchIfEmpty(resultTag.gpr())); > } > jsValueResult(resultTag.gpr(), resultPayload.gpr(), node); > break; >@@ -2311,7 +2307,7 @@ void SpeculativeJIT::compile(Node* node) > > 
m_jit.load32(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)), resultTagReg); > m_jit.load32(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)), resultPayloadReg); >- slowCases.append(m_jit.branch32(MacroAssembler::Equal, resultTagReg, TrustedImm32(JSValue::EmptyValueTag))); >+ slowCases.append(m_jit.branchIfEmpty(resultTagReg)); > > addSlowPathGenerator( > slowPathCall( >@@ -2393,7 +2389,7 @@ void SpeculativeJIT::compile(Node* node) > GPRTemporary resultPayload(this); > > m_jit.load32(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), resultTag.gpr()); >- speculationCheck(LoadFromHole, JSValueRegs(), 0, m_jit.branch32(MacroAssembler::Equal, resultTag.gpr(), TrustedImm32(JSValue::EmptyValueTag))); >+ speculationCheck(LoadFromHole, JSValueRegs(), 0, m_jit.branchIfEmpty(resultTag.gpr())); > m_jit.load32(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), resultPayload.gpr()); > > jsValueResult(resultTag.gpr(), resultPayload.gpr(), node); >@@ -2420,8 +2416,7 @@ void SpeculativeJIT::compile(Node* node) > MacroAssembler::Address(storageReg, ArrayStorage::vectorLengthOffset())); > > m_jit.load32(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), resultTagReg); >- JITCompiler::Jump hole = m_jit.branch32( >- MacroAssembler::Equal, resultTag.gpr(), TrustedImm32(JSValue::EmptyValueTag)); >+ JITCompiler::Jump hole = m_jit.branchIfEmpty(resultTag.gpr()); > m_jit.load32(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), resultPayloadReg); 
> > JITCompiler::JumpList slowCases; >@@ -2806,7 +2801,7 @@ void SpeculativeJIT::compile(Node* node) > m_jit.load32( > MacroAssembler::BaseIndex(storageGPR, valuePayloadGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)), > valueTagGPR); >- MacroAssembler::Jump slowCase = m_jit.branch32(MacroAssembler::Equal, valueTagGPR, TrustedImm32(JSValue::EmptyValueTag)); >+ MacroAssembler::Jump slowCase = m_jit.branchIfEmpty(valueTagGPR); > m_jit.store32( > MacroAssembler::TrustedImm32(JSValue::EmptyValueTag), > MacroAssembler::BaseIndex(storageGPR, valuePayloadGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag))); >@@ -2884,7 +2879,7 @@ void SpeculativeJIT::compile(Node* node) > > m_jit.store32(storageLengthGPR, MacroAssembler::Address(storageGPR, ArrayStorage::lengthOffset())); > >- setUndefinedCases.append(m_jit.branch32(MacroAssembler::Equal, TrustedImm32(JSValue::EmptyValueTag), valueTagGPR)); >+ setUndefinedCases.append(m_jit.branchIfEmpty(valueTagGPR)); > > m_jit.store32(TrustedImm32(JSValue::EmptyValueTag), MacroAssembler::BaseIndex(storageGPR, storageLengthGPR, MacroAssembler::TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag))); > >@@ -3011,8 +3006,7 @@ void SpeculativeJIT::compile(Node* node) > GPRReg resultPayloadGPR = resultPayload.gpr(); > > m_jit.move(valuePayloadGPR, resultPayloadGPR); >- JITCompiler::Jump isBoolean = m_jit.branch32( >- JITCompiler::Equal, valueTagGPR, TrustedImm32(JSValue::BooleanTag)); >+ JITCompiler::Jump isBoolean = m_jit.branchIfBoolean(valueTagGPR, InvalidGPRReg); > m_jit.move(valueTagGPR, resultTagGPR); > JITCompiler::Jump done = m_jit.jump(); > isBoolean.link(&m_jit); >diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp >index 5db9859523d51c1b3faaa2591a5f12496e799649..1f3fbecf97c096d1b37977ebb8dc69d3f2f27186 100644 >--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp >+++ 
b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp >@@ -366,13 +366,13 @@ void SpeculativeJIT::nonSpeculativePeepholeStrictEq(Node* node, Node* branchNode > } else { > m_jit.or64(arg1GPR, arg2GPR, resultGPR); > >- JITCompiler::Jump twoCellsCase = m_jit.branchTest64(JITCompiler::Zero, resultGPR, GPRInfo::tagMaskRegister); >+ JITCompiler::Jump twoCellsCase = m_jit.branchIfCell(resultGPR); > >- JITCompiler::Jump leftOK = m_jit.branch64(JITCompiler::AboveOrEqual, arg1GPR, GPRInfo::tagTypeNumberRegister); >- JITCompiler::Jump leftDouble = m_jit.branchTest64(JITCompiler::NonZero, arg1GPR, GPRInfo::tagTypeNumberRegister); >+ JITCompiler::Jump leftOK = m_jit.branchIfInt32(arg1GPR); >+ JITCompiler::Jump leftDouble = m_jit.branchIfNumber(arg1GPR); > leftOK.link(&m_jit); >- JITCompiler::Jump rightOK = m_jit.branch64(JITCompiler::AboveOrEqual, arg2GPR, GPRInfo::tagTypeNumberRegister); >- JITCompiler::Jump rightDouble = m_jit.branchTest64(JITCompiler::NonZero, arg2GPR, GPRInfo::tagTypeNumberRegister); >+ JITCompiler::Jump rightOK = m_jit.branchIfInt32(arg2GPR); >+ JITCompiler::Jump rightDouble = m_jit.branchIfNumber(arg2GPR); > rightOK.link(&m_jit); > > branch64(invert ? 
JITCompiler::NotEqual : JITCompiler::Equal, arg1GPR, arg2GPR, taken); >@@ -434,7 +434,7 @@ void SpeculativeJIT::nonSpeculativeNonPeepholeStrictEq(Node* node, bool invert) > > JITCompiler::JumpList slowPathCases; > >- JITCompiler::Jump twoCellsCase = m_jit.branchTest64(JITCompiler::Zero, resultGPR, GPRInfo::tagMaskRegister); >+ JITCompiler::Jump twoCellsCase = m_jit.branchIfCell(resultGPR); > > JITCompiler::Jump leftOK = m_jit.branchIfInt32(arg1Regs); > slowPathCases.append(m_jit.branchIfNumber(arg1Regs, InvalidGPRReg)); >@@ -763,7 +763,7 @@ void SpeculativeJIT::emitCall(Node* node) > prepareForExternalCall(); > m_jit.appendCall(operationCallEval); > m_jit.exceptionCheck(); >- JITCompiler::Jump done = m_jit.branchTest64(JITCompiler::NonZero, GPRInfo::returnValueGPR); >+ JITCompiler::Jump done = m_jit.branchIfNotEmpty(GPRInfo::returnValueGPR); > > // This is the part where we meant to make a normal call. Oops. > m_jit.addPtr(TrustedImm32(requiredBytes), JITCompiler::stackPointerRegister); >@@ -953,7 +953,7 @@ GPRReg SpeculativeJIT::fillSpeculateInt32Internal(Edge edge, DataFormat& returnF > GPRReg gpr = info.gpr(); > m_gprs.lock(gpr); > if (type & ~SpecInt32Only) >- speculationCheck(BadType, JSValueRegs(gpr), edge, m_jit.branch64(MacroAssembler::Below, gpr, GPRInfo::tagTypeNumberRegister)); >+ speculationCheck(BadType, JSValueRegs(gpr), edge, m_jit.branchIfNotInt32(gpr)); > info.fillJSValue(*m_stream, gpr, DataFormatJSInt32); > // If !strict we're done, return. 
> if (!strict) { >@@ -2390,14 +2390,13 @@ void SpeculativeJIT::compile(Node* node) > m_jit.load64(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight), result.gpr()); > if (node->arrayMode().isSaneChain()) { > ASSERT(node->arrayMode().type() == Array::Contiguous); >- JITCompiler::Jump notHole = m_jit.branchTest64( >- MacroAssembler::NonZero, result.gpr()); >+ JITCompiler::Jump notHole = m_jit.branchIfNotEmpty(result.gpr()); > m_jit.move(TrustedImm64(JSValue::encode(jsUndefined())), result.gpr()); > notHole.link(&m_jit); > } else { > speculationCheck( > LoadFromHole, JSValueRegs(), 0, >- m_jit.branchTest64(MacroAssembler::Zero, result.gpr())); >+ m_jit.branchIfEmpty(result.gpr())); > } > jsValueResult(result.gpr(), node, node->arrayMode().type() == Array::Int32 ? DataFormatJSInt32 : DataFormatJS); > break; >@@ -2422,7 +2421,7 @@ void SpeculativeJIT::compile(Node* node) > slowCases.append(m_jit.branch32(MacroAssembler::AboveOrEqual, propertyReg, MacroAssembler::Address(storageReg, Butterfly::offsetOfPublicLength()))); > > m_jit.load64(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight), resultReg); >- slowCases.append(m_jit.branchTest64(MacroAssembler::Zero, resultReg)); >+ slowCases.append(m_jit.branchIfEmpty(resultReg)); > > addSlowPathGenerator( > slowPathCall( >@@ -2505,7 +2504,7 @@ void SpeculativeJIT::compile(Node* node) > > GPRTemporary result(this); > m_jit.load64(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight, ArrayStorage::vectorOffset()), result.gpr()); >- speculationCheck(LoadFromHole, JSValueRegs(), 0, m_jit.branchTest64(MacroAssembler::Zero, result.gpr())); >+ speculationCheck(LoadFromHole, JSValueRegs(), 0, m_jit.branchIfEmpty(result.gpr())); > > jsValueResult(result.gpr(), node); > break; >@@ -2530,7 +2529,7 @@ void SpeculativeJIT::compile(Node* node) > slowCases.append(m_jit.branch32(MacroAssembler::AboveOrEqual, propertyReg, MacroAssembler::Address(storageReg, 
ArrayStorage::vectorLengthOffset()))); > > m_jit.load64(MacroAssembler::BaseIndex(storageReg, propertyReg, MacroAssembler::TimesEight, ArrayStorage::vectorOffset()), resultReg); >- slowCases.append(m_jit.branchTest64(MacroAssembler::Zero, resultReg)); >+ slowCases.append(m_jit.branchIfEmpty(resultReg)); > > addSlowPathGenerator( > slowPathCall( >@@ -2640,8 +2639,7 @@ void SpeculativeJIT::compile(Node* node) > if (arrayMode.type() == Array::Int32) { > DFG_TYPE_CHECK( > JSValueRegs(valueReg), child3, SpecInt32Only, >- m_jit.branch64( >- MacroAssembler::Below, valueReg, GPRInfo::tagTypeNumberRegister)); >+ m_jit.branchIfNotInt32(valueReg)); > } > > StorageOperand storage(this, child4); >@@ -3142,7 +3140,7 @@ void SpeculativeJIT::compile(Node* node) > // length and the new length. > m_jit.store64( > MacroAssembler::TrustedImm64((int64_t)0), MacroAssembler::BaseIndex(storageGPR, storageLengthGPR, MacroAssembler::TimesEight)); >- slowCase = m_jit.branchTest64(MacroAssembler::Zero, valueGPR); >+ slowCase = m_jit.branchIfEmpty(valueGPR); > } > > addSlowPathGenerator( >@@ -3170,7 +3168,7 @@ void SpeculativeJIT::compile(Node* node) > slowCases.append(m_jit.branch32(MacroAssembler::AboveOrEqual, storageLengthGPR, MacroAssembler::Address(storageGPR, ArrayStorage::vectorLengthOffset()))); > > m_jit.load64(MacroAssembler::BaseIndex(storageGPR, storageLengthGPR, MacroAssembler::TimesEight, ArrayStorage::vectorOffset()), valueGPR); >- slowCases.append(m_jit.branchTest64(MacroAssembler::Zero, valueGPR)); >+ slowCases.append(m_jit.branchIfEmpty(valueGPR)); > > m_jit.store32(storageLengthGPR, MacroAssembler::Address(storageGPR, ArrayStorage::lengthOffset())); > >@@ -3533,7 +3531,7 @@ void SpeculativeJIT::compile(Node* node) > if (validationEnabled()) { > JSValueOperand operand(this, node->child1()); > GPRReg input = operand.gpr(); >- auto done = m_jit.branchTest64(MacroAssembler::NonZero, input); >+ auto done = m_jit.branchIfNotEmpty(input); > m_jit.breakpoint(); > done.link(&m_jit); 
> } >@@ -3555,7 +3553,7 @@ void SpeculativeJIT::compile(Node* node) > GPRReg cellGPR = cell.gpr(); > MacroAssembler::Jump isEmpty; > if (m_interpreter.forNode(node->child1()).m_type & SpecEmpty) >- isEmpty = m_jit.branchTest64(MacroAssembler::Zero, cellGPR); >+ isEmpty = m_jit.branchIfEmpty(cellGPR); > > emitStructureCheck(node, cellGPR, InvalidGPRReg); > >@@ -4700,8 +4698,7 @@ void SpeculativeJIT::convertAnyInt(Edge valueEdge, GPRReg resultGPR) > JSValueOperand value(this, valueEdge, ManualOperandSpeculation); > GPRReg valueGPR = value.gpr(); > >- JITCompiler::Jump notInt32 = >- m_jit.branch64(JITCompiler::Below, valueGPR, GPRInfo::tagTypeNumberRegister); >+ JITCompiler::Jump notInt32 = m_jit.branchIfNotInt32(valueGPR); > > m_jit.signExtend32ToPtr(valueGPR, resultGPR); > JITCompiler::Jump done = m_jit.jump(); >diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp >index 21f30abe0060fed2be3c9579eaa668d969392027..641176c88bd54c8ca1f9918ce132968c4c11cdbd 100644 >--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp >+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp >@@ -2953,7 +2953,7 @@ class LowerDFGToB3 { > [=] (CCallHelpers& jit, const StackmapGenerationParams& params) { > AllowMacroScratchRegisterUsage allowScratch(jit); > GPRReg input = params[0].gpr(); >- CCallHelpers::Jump done = jit.branchTest64(CCallHelpers::NonZero, input); >+ CCallHelpers::Jump done = jit.branchIfNotEmpty(input); > jit.breakpoint(); > done.link(&jit); > }); >diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.h b/Source/JavaScriptCore/jit/AssemblyHelpers.h >index ecd1885d1cc770d9fd2faa00dd50fde8da21a710..56251991ac91154d69c77f14baf6e9fc3bb7d091 100644 >--- a/Source/JavaScriptCore/jit/AssemblyHelpers.h >+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.h >@@ -707,6 +707,7 @@ class AssemblyHelpers : public MacroAssembler { > return branch32(MacroAssembler::NotEqual, reg, TrustedImm32(JSValue::CellTag)); > #endif > } >+ > Jump 
branchIfNotCell(JSValueRegs regs, TagRegistersMode mode = HaveTagRegisters) > { > #if USE(JSVALUE64) >@@ -760,34 +761,45 @@ class AssemblyHelpers : public MacroAssembler { > #endif > } > >- Jump branchIfInt32(JSValueRegs regs, TagRegistersMode mode = HaveTagRegisters) >+ Jump branchIfInt32(GPRReg gpr, TagRegistersMode mode = HaveTagRegisters) > { > #if USE(JSVALUE64) > if (mode == HaveTagRegisters) >- return branch64(AboveOrEqual, regs.gpr(), GPRInfo::tagTypeNumberRegister); >- return branch64(AboveOrEqual, regs.gpr(), TrustedImm64(TagTypeNumber)); >+ return branch64(AboveOrEqual, gpr, GPRInfo::tagTypeNumberRegister); >+ return branch64(AboveOrEqual, gpr, TrustedImm64(TagTypeNumber)); > #else > UNUSED_PARAM(mode); >- return branch32(Equal, regs.tagGPR(), TrustedImm32(JSValue::Int32Tag)); >+ return branch32(Equal, gpr, TrustedImm32(JSValue::Int32Tag)); > #endif > } > >+ Jump branchIfInt32(JSValueRegs regs, TagRegistersMode mode = HaveTagRegisters) >+ { > #if USE(JSVALUE64) >+ return branchIfInt32(regs.gpr(), mode); >+#else >+ return branchIfInt32(regs.tagGPR(), mode); >+#endif >+ } >+ > Jump branchIfNotInt32(GPRReg gpr, TagRegistersMode mode = HaveTagRegisters) > { >+#if USE(JSVALUE64) > if (mode == HaveTagRegisters) > return branch64(Below, gpr, GPRInfo::tagTypeNumberRegister); > return branch64(Below, gpr, TrustedImm64(TagTypeNumber)); >- } >+#else >+ UNUSED_PARAM(mode); >+ return branch32(NotEqual, gpr, TrustedImm32(JSValue::Int32Tag)); > #endif >+ } > > Jump branchIfNotInt32(JSValueRegs regs, TagRegistersMode mode = HaveTagRegisters) > { > #if USE(JSVALUE64) > return branchIfNotInt32(regs.gpr(), mode); > #else >- UNUSED_PARAM(mode); >- return branch32(NotEqual, regs.tagGPR(), TrustedImm32(JSValue::Int32Tag)); >+ return branchIfNotInt32(regs.tagGPR(), mode); > #endif > } > >@@ -799,17 +811,18 @@ class AssemblyHelpers : public MacroAssembler { > return branchIfNumber(regs.gpr(), mode); > #else > UNUSED_PARAM(mode); >+ ASSERT(tempGPR != InvalidGPRReg); > 
add32(TrustedImm32(1), regs.tagGPR(), tempGPR); > return branch32(Below, tempGPR, TrustedImm32(JSValue::LowestTag + 1)); > #endif > } > > #if USE(JSVALUE64) >- Jump branchIfNumber(GPRReg reg, TagRegistersMode mode = HaveTagRegisters) >+ Jump branchIfNumber(GPRReg gpr, TagRegistersMode mode = HaveTagRegisters) > { > if (mode == HaveTagRegisters) >- return branchTest64(NonZero, reg, GPRInfo::tagTypeNumberRegister); >- return branchTest64(NonZero, reg, TrustedImm64(TagTypeNumber)); >+ return branchTest64(NonZero, gpr, GPRInfo::tagTypeNumberRegister); >+ return branchTest64(NonZero, gpr, TrustedImm64(TagTypeNumber)); > } > #endif > >@@ -827,11 +840,11 @@ class AssemblyHelpers : public MacroAssembler { > } > > #if USE(JSVALUE64) >- Jump branchIfNotNumber(GPRReg reg, TagRegistersMode mode = HaveTagRegisters) >+ Jump branchIfNotNumber(GPRReg gpr, TagRegistersMode mode = HaveTagRegisters) > { > if (mode == HaveTagRegisters) >- return branchTest64(Zero, reg, GPRInfo::tagTypeNumberRegister); >- return branchTest64(Zero, reg, TrustedImm64(TagTypeNumber)); >+ return branchTest64(Zero, gpr, GPRInfo::tagTypeNumberRegister); >+ return branchTest64(Zero, gpr, TrustedImm64(TagTypeNumber)); > } > #endif > >@@ -848,28 +861,50 @@ class AssemblyHelpers : public MacroAssembler { > } > > // Note that the tempGPR is not used in 32-bit mode. >- Jump branchIfBoolean(JSValueRegs regs, GPRReg tempGPR) >+ Jump branchIfBoolean(GPRReg gpr, GPRReg tempGPR) > { > #if USE(JSVALUE64) >- move(regs.gpr(), tempGPR); >+ ASSERT(tempGPR != InvalidGPRReg); >+ move(gpr, tempGPR); > xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), tempGPR); > return branchTest64(Zero, tempGPR, TrustedImm32(static_cast<int32_t>(~1))); > #else > UNUSED_PARAM(tempGPR); >- return branch32(Equal, regs.tagGPR(), TrustedImm32(JSValue::BooleanTag)); >+ return branch32(Equal, gpr, TrustedImm32(JSValue::BooleanTag)); >+#endif >+ } >+ >+ // Note that the tempGPR is not used in 32-bit mode. 
>+ Jump branchIfBoolean(JSValueRegs regs, GPRReg tempGPR) >+ { >+#if USE(JSVALUE64) >+ return branchIfBoolean(regs.gpr(), tempGPR); >+#else >+ return branchIfBoolean(regs.tagGPR(), tempGPR); > #endif > } > > // Note that the tempGPR is not used in 32-bit mode. >- Jump branchIfNotBoolean(JSValueRegs regs, GPRReg tempGPR) >+ Jump branchIfNotBoolean(GPRReg gpr, GPRReg tempGPR) > { > #if USE(JSVALUE64) >- move(regs.gpr(), tempGPR); >+ ASSERT(tempGPR != InvalidGPRReg); >+ move(gpr, tempGPR); > xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), tempGPR); > return branchTest64(NonZero, tempGPR, TrustedImm32(static_cast<int32_t>(~1))); > #else > UNUSED_PARAM(tempGPR); >- return branch32(NotEqual, regs.tagGPR(), TrustedImm32(JSValue::BooleanTag)); >+ return branch32(NotEqual, gpr, TrustedImm32(JSValue::BooleanTag)); >+#endif >+ } >+ >+ // Note that the tempGPR is not used in 32-bit mode. >+ Jump branchIfNotBoolean(JSValueRegs regs, GPRReg tempGPR) >+ { >+#if USE(JSVALUE64) >+ return branchIfNotBoolean(regs.gpr(), tempGPR); >+#else >+ return branchIfNotBoolean(regs.tagGPR(), tempGPR); > #endif > } > >@@ -904,12 +939,49 @@ class AssemblyHelpers : public MacroAssembler { > Jump branchIfFunction(GPRReg cellGPR) { return branchIfType(cellGPR, JSFunctionType); } > Jump branchIfNotFunction(GPRReg cellGPR) { return branchIfNotType(cellGPR, JSFunctionType); } > >+ Jump branchIfEmpty(GPRReg gpr) >+ { >+#if USE(JSVALUE64) >+ return branchTest64(Zero, gpr); >+#else >+ return branch32(Equal, gpr, TrustedImm32(JSValue::EmptyValueTag)); >+#endif >+ } >+ > Jump branchIfEmpty(JSValueRegs regs) > { > #if USE(JSVALUE64) >- return branchTest64(Zero, regs.gpr()); >+ return branchIfEmpty(regs.gpr()); >+#else >+ return branchIfEmpty(regs.tagGPR()); >+#endif >+ } >+ >+ Jump branchIfNotEmpty(GPRReg gpr) >+ { >+#if USE(JSVALUE64) >+ return branchTest64(NonZero, gpr); > #else >- return branch32(Equal, regs.tagGPR(), TrustedImm32(JSValue::EmptyValueTag)); >+ return branch32(NotEqual, gpr, 
TrustedImm32(JSValue::EmptyValueTag)); >+#endif >+ } >+ >+ Jump branchIfNotEmpty(JSValueRegs regs) >+ { >+#if USE(JSVALUE64) >+ return branchIfNotEmpty(regs.gpr()); >+#else >+ return branchIfNotEmpty(regs.tagGPR()); >+#endif >+ } >+ >+ // Note that this function does not respect MasqueradesAsUndefined. >+ Jump branchIfUndefined(GPRReg gpr) >+ { >+#if USE(JSVALUE64) >+ return branch64(Equal, gpr, TrustedImm64(JSValue::encode(jsUndefined()))); >+#else >+ return branch32(Equal, gpr, TrustedImm32(JSValue::UndefinedTag)); > #endif > } > >@@ -917,18 +989,65 @@ class AssemblyHelpers : public MacroAssembler { > Jump branchIfUndefined(JSValueRegs regs) > { > #if USE(JSVALUE64) >- return branch64(Equal, regs.gpr(), TrustedImm64(JSValue::encode(jsUndefined()))); >+ return branchIfUndefined(regs.gpr()); > #else >- return branch32(Equal, regs.tagGPR(), TrustedImm32(JSValue::UndefinedTag)); >+ return branchIfUndefined(regs.tagGPR()); >+#endif >+ } >+ >+ // Note that this function does not respect MasqueradesAsUndefined. >+ Jump branchIfNotUndefined(GPRReg gpr) >+ { >+#if USE(JSVALUE64) >+ return branch64(NotEqual, gpr, TrustedImm64(JSValue::encode(jsUndefined()))); >+#else >+ return branch32(NotEqual, gpr, TrustedImm32(JSValue::UndefinedTag)); >+#endif >+ } >+ >+ // Note that this function does not respect MasqueradesAsUndefined. 
>+ Jump branchIfNotUndefined(JSValueRegs regs) >+ { >+#if USE(JSVALUE64) >+ return branchIfNotUndefined(regs.gpr()); >+#else >+ return branchIfNotUndefined(regs.tagGPR()); >+#endif >+ } >+ >+ Jump branchIfNull(GPRReg gpr) >+ { >+#if USE(JSVALUE64) >+ return branch64(Equal, gpr, TrustedImm64(JSValue::encode(jsNull()))); >+#else >+ return branch32(Equal, gpr, TrustedImm32(JSValue::NullTag)); > #endif > } > > Jump branchIfNull(JSValueRegs regs) > { > #if USE(JSVALUE64) >- return branch64(Equal, regs.gpr(), TrustedImm64(JSValue::encode(jsNull()))); >+ return branchIfNull(regs.gpr()); >+#else >+ return branchIfNull(regs.tagGPR()); >+#endif >+ } >+ >+ Jump branchIfNotNull(GPRReg gpr) >+ { >+#if USE(JSVALUE64) >+ return branch64(NotEqual, gpr, TrustedImm64(JSValue::encode(jsNull()))); >+#else >+ return branch32(NotEqual, gpr, TrustedImm32(JSValue::NullTag)); >+#endif >+ } >+ >+ Jump branchIfNotNull(JSValueRegs regs) >+ { >+#if USE(JSVALUE64) >+ return branchIfNotNull(regs.gpr()); > #else >- return branch32(Equal, regs.tagGPR(), TrustedImm32(JSValue::NullTag)); >+ return branchIfNotNull(regs.tagGPR()); > #endif > } > >diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h >index aeb6ad96cf58c9b2eb4b4a862f2d4e6ebe4a9b5f..cb6d47172c605837c5cc660ff671b835bfa89153 100644 >--- a/Source/JavaScriptCore/jit/JIT.h >+++ b/Source/JavaScriptCore/jit/JIT.h >@@ -323,8 +323,6 @@ namespace JSC { > > void emitLoadDouble(int index, FPRegisterID value); > void emitLoadInt32ToDouble(int index, FPRegisterID value); >- Jump emitJumpIfCellObject(RegisterID cellReg); >- Jump emitJumpIfCellNotObject(RegisterID cellReg); > > enum WriteBarrierMode { UnconditionalWriteBarrier, ShouldFilterBase, ShouldFilterValue, ShouldFilterBaseAndValue }; > // value register in write barrier is used before any scratch registers >@@ -441,13 +439,10 @@ namespace JSC { > emitPutVirtualRegister(dst, payload); > } > >- Jump emitJumpIfJSCell(RegisterID); > Jump emitJumpIfBothJSCells(RegisterID, 
RegisterID, RegisterID); > void emitJumpSlowCaseIfJSCell(RegisterID); > void emitJumpSlowCaseIfNotJSCell(RegisterID); > void emitJumpSlowCaseIfNotJSCell(RegisterID, int VReg); >- Jump emitJumpIfInt(RegisterID); >- Jump emitJumpIfNotInt(RegisterID); > Jump emitJumpIfNotInt(RegisterID, RegisterID, RegisterID scratch); > PatchableJump emitPatchableJumpIfNotInt(RegisterID); > void emitJumpSlowCaseIfNotInt(RegisterID); >diff --git a/Source/JavaScriptCore/jit/JITArithmetic.cpp b/Source/JavaScriptCore/jit/JITArithmetic.cpp >index 3761c9bcb676f6ba585b987a0df0353c8e6445b5..2b3fe90ccecfa9a36008d004c49d08c6832887e3 100644 >--- a/Source/JavaScriptCore/jit/JITArithmetic.cpp >+++ b/Source/JavaScriptCore/jit/JITArithmetic.cpp >@@ -254,7 +254,7 @@ void JIT::emit_compareAndJump(OpcodeID, int op1, int op2, unsigned target, Relat > > if (isOperandConstantChar(op1)) { > emitGetVirtualRegister(op2, regT0); >- addSlowCase(emitJumpIfNotJSCell(regT0)); >+ addSlowCase(branchIfNotCell(regT0)); > JumpList failures; > emitLoadCharacterString(regT0, regT0, failures); > addSlowCase(failures); >@@ -263,7 +263,7 @@ void JIT::emit_compareAndJump(OpcodeID, int op1, int op2, unsigned target, Relat > } > if (isOperandConstantChar(op2)) { > emitGetVirtualRegister(op1, regT0); >- addSlowCase(emitJumpIfNotJSCell(regT0)); >+ addSlowCase(branchIfNotCell(regT0)); > JumpList failures; > emitLoadCharacterString(regT0, regT0, failures); > addSlowCase(failures); >@@ -354,7 +354,7 @@ void JIT::emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondi > linkAllSlowCases(iter); > > if (supportsFloatingPoint()) { >- Jump fail1 = emitJumpIfNotNumber(regT0); >+ Jump fail1 = branchIfNotNumber(regT0); > add64(tagTypeNumberRegister, regT0); > move64ToDouble(regT0, fpRegT0); > >@@ -380,7 +380,7 @@ void JIT::emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondi > linkAllSlowCases(iter); > > if (supportsFloatingPoint()) { >- Jump fail1 = emitJumpIfNotNumber(regT1); >+ Jump fail1 = 
branchIfNotNumber(regT1); > add64(tagTypeNumberRegister, regT1); > move64ToDouble(regT1, fpRegT1); > >@@ -405,9 +405,9 @@ void JIT::emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondi > linkSlowCase(iter); // LHS is not Int. > > if (supportsFloatingPoint()) { >- Jump fail1 = emitJumpIfNotNumber(regT0); >- Jump fail2 = emitJumpIfNotNumber(regT1); >- Jump fail3 = emitJumpIfInt(regT1); >+ Jump fail1 = branchIfNotNumber(regT0); >+ Jump fail2 = branchIfNotNumber(regT1); >+ Jump fail3 = branchIfInt32(regT1); > add64(tagTypeNumberRegister, regT0); > add64(tagTypeNumberRegister, regT1); > move64ToDouble(regT0, fpRegT0); >diff --git a/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp b/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp >index 8245702fc4e3cdd66251a1747903b3b13bb2d772..d3ebdb67c521cb699f37e0989207a4e21e79cc53 100644 >--- a/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp >+++ b/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp >@@ -49,7 +49,7 @@ void JIT::emit_compareAndJump(OpcodeID opcode, int op1, int op2, unsigned target > // Character less. 
> if (isOperandConstantChar(op1)) { > emitLoad(op2, regT1, regT0); >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag))); >+ addSlowCase(branchIfNotCell(regT1)); > JumpList failures; > emitLoadCharacterString(regT0, regT0, failures); > addSlowCase(failures); >@@ -58,7 +58,7 @@ void JIT::emit_compareAndJump(OpcodeID opcode, int op1, int op2, unsigned target > } > if (isOperandConstantChar(op2)) { > emitLoad(op1, regT1, regT0); >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag))); >+ addSlowCase(branchIfNotCell(regT1)); > JumpList failures; > emitLoadCharacterString(regT0, regT0, failures); > addSlowCase(failures); >@@ -67,16 +67,16 @@ void JIT::emit_compareAndJump(OpcodeID opcode, int op1, int op2, unsigned target > } > if (isOperandConstantInt(op1)) { > emitLoad(op2, regT3, regT2); >- notInt32Op2.append(branch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag))); >+ notInt32Op2.append(branchIfNotInt32(regT3)); > addJump(branch32(commute(condition), regT2, Imm32(getConstantOperand(op1).asInt32())), target); > } else if (isOperandConstantInt(op2)) { > emitLoad(op1, regT1, regT0); >- notInt32Op1.append(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag))); >+ notInt32Op1.append(branchIfNotInt32(regT1)); > addJump(branch32(condition, regT0, Imm32(getConstantOperand(op2).asInt32())), target); > } else { > emitLoad2(op1, regT1, regT0, op2, regT3, regT2); >- notInt32Op1.append(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag))); >- notInt32Op2.append(branch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag))); >+ notInt32Op1.append(branchIfNotInt32(regT1)); >+ notInt32Op2.append(branchIfNotInt32(regT3)); > addJump(branch32(condition, regT0, regT2), target); > } > >@@ -139,7 +139,7 @@ void JIT::emit_op_unsigned(Instruction* currentInstruction) > > emitLoad(op1, regT1, regT0); > >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag))); >+ addSlowCase(branchIfNotInt32(regT1)); > 
addSlowCase(branch32(LessThan, regT0, TrustedImm32(0))); > emitStoreInt32(result, regT0, result == op1); > } >@@ -150,7 +150,7 @@ void JIT::emit_op_inc(Instruction* currentInstruction) > > emitLoad(srcDst, regT1, regT0); > >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag))); >+ addSlowCase(branchIfNotInt32(regT1)); > addSlowCase(branchAdd32(Overflow, TrustedImm32(1), regT0)); > emitStoreInt32(srcDst, regT0, true); > } >@@ -161,7 +161,7 @@ void JIT::emit_op_dec(Instruction* currentInstruction) > > emitLoad(srcDst, regT1, regT0); > >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag))); >+ addSlowCase(branchIfNotInt32(regT1)); > addSlowCase(branchSub32(Overflow, TrustedImm32(1), regT0)); > emitStoreInt32(srcDst, regT0, true); > } >@@ -186,7 +186,7 @@ void JIT::emitBinaryDoubleOp(OpcodeID opcodeID, int dst, int op1, int op2, Opera > Jump doubleOp2 = branch32(Below, regT3, TrustedImm32(JSValue::LowestTag)); > > if (!types.second().definitelyIsNumber()) >- addSlowCase(branch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag))); >+ addSlowCase(branchIfNotInt32(regT3)); > > convertInt32ToDouble(regT2, fpRegT0); > Jump doTheMath = jump(); >@@ -313,8 +313,8 @@ void JIT::emit_op_mod(Instruction* currentInstruction) > ASSERT(regT3 == X86Registers::ebx); > > emitLoad2(op1, regT0, regT3, op2, regT1, regT2); >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag))); >- addSlowCase(branch32(NotEqual, regT0, TrustedImm32(JSValue::Int32Tag))); >+ addSlowCase(branchIfNotInt32(regT1)); >+ addSlowCase(branchIfNotInt32(regT0)); > > move(regT3, regT0); > addSlowCase(branchTest32(Zero, regT2)); >diff --git a/Source/JavaScriptCore/jit/JITCall.cpp b/Source/JavaScriptCore/jit/JITCall.cpp >index 1daa7bc1322f5322a9fd17993a793a0cd42fab2e..50ab48b15af6d56cd1ed0c3df2596d2528c21626 100644 >--- a/Source/JavaScriptCore/jit/JITCall.cpp >+++ b/Source/JavaScriptCore/jit/JITCall.cpp >@@ -103,7 +103,7 @@ void JIT::compileCallEval(Instruction* 
instruction) > > callOperation(operationCallEval, regT1); > >- addSlowCase(branch64(Equal, regT0, TrustedImm64(JSValue::encode(JSValue())))); >+ addSlowCase(branchIfEmpty(regT0)); > > sampleCodeBlock(m_codeBlock); > >@@ -165,7 +165,7 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca > > if (opcodeID == op_call && shouldEmitProfiling()) { > emitGetVirtualRegister(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0); >- Jump done = emitJumpIfNotJSCell(regT0); >+ Jump done = branchIfNotCell(regT0); > load32(Address(regT0, JSCell::structureIDOffset()), regT0); > store32(regT0, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructureID()); > done.link(this); >diff --git a/Source/JavaScriptCore/jit/JITCall32_64.cpp b/Source/JavaScriptCore/jit/JITCall32_64.cpp >index fdc4217579b1091733421aee6240fcfc55aecec2..d1efbe97d557b3011b64eb85b792a0ee086e06dc 100644 >--- a/Source/JavaScriptCore/jit/JITCall32_64.cpp >+++ b/Source/JavaScriptCore/jit/JITCall32_64.cpp >@@ -193,7 +193,7 @@ void JIT::compileCallEval(Instruction* instruction) > > callOperation(operationCallEval, regT1); > >- addSlowCase(branch32(Equal, regT1, TrustedImm32(JSValue::EmptyValueTag))); >+ addSlowCase(branchIfEmpty(regT1)); > > sampleCodeBlock(m_codeBlock); > >@@ -249,7 +249,7 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca > > if (opcodeID == op_call && shouldEmitProfiling()) { > emitLoad(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0, regT1); >- Jump done = branch32(NotEqual, regT0, TrustedImm32(JSValue::CellTag)); >+ Jump done = branchIfNotCell(regT0); > loadPtr(Address(regT1, JSCell::structureIDOffset()), regT1); > storePtr(regT1, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructureID()); > done.link(this); >@@ -275,7 +275,7 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca > if (opcodeID == op_tail_call || opcodeID 
== op_tail_call_varargs) > emitRestoreCalleeSaves(); > >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag))); >+ addSlowCase(branchIfNotCell(regT1)); > > DataLabelPtr addressOfLinkedFunctionCheck; > Jump slowCase = branchPtrWithPatch(NotEqual, regT0, addressOfLinkedFunctionCheck, TrustedImmPtr(nullptr)); >diff --git a/Source/JavaScriptCore/jit/JITInlines.h b/Source/JavaScriptCore/jit/JITInlines.h >index 17090071e486bcca924d43de2702a16996b23947..9a4b737c84aade6da37c371d8567eec9ef506055 100644 >--- a/Source/JavaScriptCore/jit/JITInlines.h >+++ b/Source/JavaScriptCore/jit/JITInlines.h >@@ -252,16 +252,6 @@ ALWAYS_INLINE void JIT::emitJumpSlowToHot(Jump jump, int relativeOffset) > jump.linkTo(m_labels[m_bytecodeOffset + relativeOffset], this); > } > >-ALWAYS_INLINE JIT::Jump JIT::emitJumpIfCellObject(RegisterID cellReg) >-{ >- return branch8(AboveOrEqual, Address(cellReg, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType)); >-} >- >-ALWAYS_INLINE JIT::Jump JIT::emitJumpIfCellNotObject(RegisterID cellReg) >-{ >- return branch8(Below, Address(cellReg, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType)); >-} >- > #if ENABLE(SAMPLING_FLAGS) > ALWAYS_INLINE void JIT::setSamplingFlag(int32_t flag) > { >@@ -556,7 +546,7 @@ inline void JIT::emitJumpSlowCaseIfNotJSCell(int virtualRegisterIndex, RegisterI > if (m_codeBlock->isConstantRegisterIndex(virtualRegisterIndex)) > addSlowCase(jump()); > else >- addSlowCase(branch32(NotEqual, tag, TrustedImm32(JSValue::CellTag))); >+ addSlowCase(branchIfNotCell(tag)); > } > } > >@@ -648,26 +638,21 @@ ALWAYS_INLINE void JIT::emitInitRegister(int dst) > store64(TrustedImm64(JSValue::encode(jsUndefined())), Address(callFrameRegister, dst * sizeof(Register))); > } > >-ALWAYS_INLINE JIT::Jump JIT::emitJumpIfJSCell(RegisterID reg) >-{ >- return branchTest64(Zero, reg, tagMaskRegister); >-} >- > ALWAYS_INLINE JIT::Jump JIT::emitJumpIfBothJSCells(RegisterID reg1, RegisterID reg2, RegisterID scratch) > { > move(reg1, 
scratch); > or64(reg2, scratch); >- return emitJumpIfJSCell(scratch); >+ return branchIfCell(scratch); > } > > ALWAYS_INLINE void JIT::emitJumpSlowCaseIfJSCell(RegisterID reg) > { >- addSlowCase(emitJumpIfJSCell(reg)); >+ addSlowCase(branchIfCell(reg)); > } > > ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotJSCell(RegisterID reg) > { >- addSlowCase(emitJumpIfNotJSCell(reg)); >+ addSlowCase(branchIfNotCell(reg)); > } > > ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotJSCell(RegisterID reg, int vReg) >@@ -694,16 +679,6 @@ inline void JIT::emitLoadInt32ToDouble(int index, FPRegisterID value) > convertInt32ToDouble(addressFor(index), value); > } > >-ALWAYS_INLINE JIT::Jump JIT::emitJumpIfInt(RegisterID reg) >-{ >- return branch64(AboveOrEqual, reg, tagTypeNumberRegister); >-} >- >-ALWAYS_INLINE JIT::Jump JIT::emitJumpIfNotInt(RegisterID reg) >-{ >- return branch64(Below, reg, tagTypeNumberRegister); >-} >- > ALWAYS_INLINE JIT::PatchableJump JIT::emitPatchableJumpIfNotInt(RegisterID reg) > { > return patchableBranch64(Below, reg, tagTypeNumberRegister); >@@ -713,12 +688,12 @@ ALWAYS_INLINE JIT::Jump JIT::emitJumpIfNotInt(RegisterID reg1, RegisterID reg2, > { > move(reg1, scratch); > and64(reg2, scratch); >- return emitJumpIfNotInt(scratch); >+ return branchIfNotInt32(scratch); > } > > ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotInt(RegisterID reg) > { >- addSlowCase(emitJumpIfNotInt(reg)); >+ addSlowCase(branchIfNotInt32(reg)); > } > > ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotInt(RegisterID reg1, RegisterID reg2, RegisterID scratch) >@@ -728,7 +703,7 @@ ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotInt(RegisterID reg1, RegisterID reg > > ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotNumber(RegisterID reg) > { >- addSlowCase(emitJumpIfNotNumber(reg)); >+ addSlowCase(branchIfNotNumber(reg)); > } > > ALWAYS_INLINE void JIT::emitTagBool(RegisterID reg) >diff --git a/Source/JavaScriptCore/jit/JITOpcodes.cpp b/Source/JavaScriptCore/jit/JITOpcodes.cpp >index 
3296e0b675ccf3be9c73c18f5430911af502ed1f..2c09684a3eeec303cf76342efa1e1a6c484fd6fa 100644 >--- a/Source/JavaScriptCore/jit/JITOpcodes.cpp >+++ b/Source/JavaScriptCore/jit/JITOpcodes.cpp >@@ -152,7 +152,7 @@ void JIT::emit_op_instanceof(Instruction* currentInstruction) > emitJumpSlowCaseIfNotJSCell(regT1, proto); > > // Check that prototype is an object >- addSlowCase(emitJumpIfCellNotObject(regT1)); >+ addSlowCase(branchIfNotObject(regT1)); > > // Optimistically load the result true, and start looping. > // Initially, regT1 still contains proto and regT2 still contains value. >@@ -166,12 +166,12 @@ void JIT::emit_op_instanceof(Instruction* currentInstruction) > // Otherwise, check if we've hit null - if we have then drop out of the loop, if not go again. > emitLoadStructure(*vm(), regT2, regT4, regT3); > load64(Address(regT4, Structure::prototypeOffset()), regT4); >- auto hasMonoProto = branchTest64(NonZero, regT4); >+ auto hasMonoProto = branchIfNotEmpty(regT4); > load64(Address(regT2, offsetRelativeToBase(knownPolyProtoOffset)), regT4); > hasMonoProto.link(this); > move(regT4, regT2); > Jump isInstance = branchPtr(Equal, regT2, regT1); >- emitJumpIfJSCell(regT2).linkTo(loop, this); >+ branchIfCell(regT2).linkTo(loop, this); > > // We get here either by dropping out of the loop, or if value was not an Object. Result is false. 
> move(TrustedImm64(JSValue::encode(jsBoolean(false))), regT0); >@@ -205,7 +205,7 @@ void JIT::emit_op_is_undefined(Instruction* currentInstruction) > int value = currentInstruction[2].u.operand; > > emitGetVirtualRegister(value, regT0); >- Jump isCell = emitJumpIfJSCell(regT0); >+ Jump isCell = branchIfCell(regT0); > > compare64(Equal, regT0, TrustedImm32(ValueUndefined), regT0); > Jump done = jump(); >@@ -257,7 +257,7 @@ void JIT::emit_op_is_cell_with_type(Instruction* currentInstruction) > int type = currentInstruction[3].u.operand; > > emitGetVirtualRegister(value, regT0); >- Jump isNotCell = emitJumpIfNotJSCell(regT0); >+ Jump isNotCell = branchIfNotCell(regT0); > > compare8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(type), regT0); > emitTagBool(regT0); >@@ -276,7 +276,7 @@ void JIT::emit_op_is_object(Instruction* currentInstruction) > int value = currentInstruction[2].u.operand; > > emitGetVirtualRegister(value, regT0); >- Jump isNotCell = emitJumpIfNotJSCell(regT0); >+ Jump isNotCell = branchIfNotCell(regT0); > > compare8(AboveOrEqual, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType), regT0); > emitTagBool(regT0); >@@ -311,8 +311,8 @@ void JIT::emit_op_to_primitive(Instruction* currentInstruction) > > emitGetVirtualRegister(src, regT0); > >- Jump isImm = emitJumpIfNotJSCell(regT0); >- addSlowCase(emitJumpIfCellObject(regT0)); >+ Jump isImm = branchIfNotCell(regT0); >+ addSlowCase(branchIfObject(regT0)); > isImm.link(this); > > if (dst != src) >@@ -362,7 +362,7 @@ void JIT::emit_op_jeq_null(Instruction* currentInstruction) > unsigned target = currentInstruction[2].u.operand; > > emitGetVirtualRegister(src, regT0); >- Jump isImmediate = emitJumpIfNotJSCell(regT0); >+ Jump isImmediate = branchIfNotCell(regT0); > > // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure. 
> Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >@@ -385,7 +385,7 @@ void JIT::emit_op_jneq_null(Instruction* currentInstruction) > unsigned target = currentInstruction[2].u.operand; > > emitGetVirtualRegister(src, regT0); >- Jump isImmediate = emitJumpIfNotJSCell(regT0); >+ Jump isImmediate = branchIfNotCell(regT0); > > // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure. > addJump(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target); >@@ -483,15 +483,15 @@ void JIT::compileOpStrictEq(Instruction* currentInstruction, CompileOpStrictEqTy > // Jump slow if both are cells (to cover strings). > move(regT0, regT2); > or64(regT1, regT2); >- addSlowCase(emitJumpIfJSCell(regT2)); >+ addSlowCase(branchIfCell(regT2)); > > // Jump slow if either is a double. First test if it's an integer, which is fine, and then test > // if it's a double. >- Jump leftOK = emitJumpIfInt(regT0); >- addSlowCase(emitJumpIfNumber(regT0)); >+ Jump leftOK = branchIfInt32(regT0); >+ addSlowCase(branchIfNumber(regT0)); > leftOK.link(this); >- Jump rightOK = emitJumpIfInt(regT1); >- addSlowCase(emitJumpIfNumber(regT1)); >+ Jump rightOK = branchIfInt32(regT1); >+ addSlowCase(branchIfNumber(regT1)); > rightOK.link(this); > > if (type == CompileOpStrictEqType::StrictEq) >@@ -524,15 +524,15 @@ void JIT::compileOpStrictEqJump(Instruction* currentInstruction, CompileOpStrict > // Jump slow if both are cells (to cover strings). > move(regT0, regT2); > or64(regT1, regT2); >- addSlowCase(emitJumpIfJSCell(regT2)); >+ addSlowCase(branchIfCell(regT2)); > > // Jump slow if either is a double. First test if it's an integer, which is fine, and then test > // if it's a double. 
>- Jump leftOK = emitJumpIfInt(regT0); >- addSlowCase(emitJumpIfNumber(regT0)); >+ Jump leftOK = branchIfInt32(regT0); >+ addSlowCase(branchIfNumber(regT0)); > leftOK.link(this); >- Jump rightOK = emitJumpIfInt(regT1); >- addSlowCase(emitJumpIfNumber(regT1)); >+ Jump rightOK = branchIfInt32(regT1); >+ addSlowCase(branchIfNumber(regT1)); > rightOK.link(this); > > if (type == CompileOpStrictEqType::StrictEq) >@@ -575,7 +575,7 @@ void JIT::emit_op_to_number(Instruction* currentInstruction) > int srcVReg = currentInstruction[2].u.operand; > emitGetVirtualRegister(srcVReg, regT0); > >- addSlowCase(emitJumpIfNotNumber(regT0)); >+ addSlowCase(branchIfNotNumber(regT0)); > > emitValueProfilingSite(); > if (srcVReg != dstVReg) >@@ -587,7 +587,7 @@ void JIT::emit_op_to_string(Instruction* currentInstruction) > int srcVReg = currentInstruction[2].u.operand; > emitGetVirtualRegister(srcVReg, regT0); > >- addSlowCase(emitJumpIfNotJSCell(regT0)); >+ addSlowCase(branchIfNotCell(regT0)); > addSlowCase(branch8(NotEqual, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(StringType))); > > emitPutVirtualRegister(currentInstruction[1].u.operand); >@@ -599,7 +599,7 @@ void JIT::emit_op_to_object(Instruction* currentInstruction) > int srcVReg = currentInstruction[2].u.operand; > emitGetVirtualRegister(srcVReg, regT0); > >- addSlowCase(emitJumpIfNotJSCell(regT0)); >+ addSlowCase(branchIfNotCell(regT0)); > addSlowCase(branch8(Below, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType))); > > emitValueProfilingSite(); >@@ -728,7 +728,7 @@ void JIT::emit_op_eq_null(Instruction* currentInstruction) > int src1 = currentInstruction[2].u.operand; > > emitGetVirtualRegister(src1, regT0); >- Jump isImmediate = emitJumpIfNotJSCell(regT0); >+ Jump isImmediate = branchIfNotCell(regT0); > > Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); > move(TrustedImm32(0), regT0); >@@ -760,7 
+760,7 @@ void JIT::emit_op_neq_null(Instruction* currentInstruction) > int src1 = currentInstruction[2].u.operand; > > emitGetVirtualRegister(src1, regT0); >- Jump isImmediate = emitJumpIfNotJSCell(regT0); >+ Jump isImmediate = branchIfNotCell(regT0); > > Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); > move(TrustedImm32(1), regT0); >@@ -862,7 +862,7 @@ void JIT::emit_op_create_this(Instruction* currentInstruction) > void JIT::emit_op_check_tdz(Instruction* currentInstruction) > { > emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- addSlowCase(branchTest64(Zero, regT0)); >+ addSlowCase(branchIfEmpty(regT0)); > } > > >@@ -1058,7 +1058,7 @@ void JIT::emitNewFuncExprCommon(Instruction* currentInstruction) > int dst = currentInstruction[1].u.operand; > #if USE(JSVALUE64) > emitGetVirtualRegister(currentInstruction[2].u.operand, regT0); >- notUndefinedScope = branch64(NotEqual, regT0, TrustedImm64(JSValue::encode(jsUndefined()))); >+ notUndefinedScope = branchIfNotUndefined(regT0); > store64(TrustedImm64(JSValue::encode(jsUndefined())), Address(callFrameRegister, sizeof(Register) * dst)); > #else > emitLoadPayload(currentInstruction[2].u.operand, regT0); >@@ -1333,25 +1333,23 @@ void JIT::emit_op_profile_type(Instruction* currentInstruction) > > JumpList jumpToEnd; > >- jumpToEnd.append(branchTest64(Zero, regT0)); >+ jumpToEnd.append(branchIfEmpty(regT0)); > > // Compile in a predictive type check, if possible, to see if we can skip writing to the log. > // These typechecks are inlined to match those of the 64-bit JSValue type checks. 
> if (cachedTypeLocation->m_lastSeenType == TypeUndefined) >- jumpToEnd.append(branch64(Equal, regT0, TrustedImm64(JSValue::encode(jsUndefined())))); >+ jumpToEnd.append(branchIfUndefined(regT0)); > else if (cachedTypeLocation->m_lastSeenType == TypeNull) >- jumpToEnd.append(branch64(Equal, regT0, TrustedImm64(JSValue::encode(jsNull())))); >- else if (cachedTypeLocation->m_lastSeenType == TypeBoolean) { >- move(regT0, regT1); >- and64(TrustedImm32(~1), regT1); >- jumpToEnd.append(branch64(Equal, regT1, TrustedImm64(ValueFalse))); >- } else if (cachedTypeLocation->m_lastSeenType == TypeAnyInt) >- jumpToEnd.append(emitJumpIfInt(regT0)); >+ jumpToEnd.append(branchIfNull(regT0)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeBoolean) >+ jumpToEnd.append(branchIfBoolean(regT0, regT1)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeAnyInt) >+ jumpToEnd.append(branchIfInt32(regT0)); > else if (cachedTypeLocation->m_lastSeenType == TypeNumber) >- jumpToEnd.append(emitJumpIfNumber(regT0)); >+ jumpToEnd.append(branchIfNumber(regT0)); > else if (cachedTypeLocation->m_lastSeenType == TypeString) { >- Jump isNotCell = emitJumpIfNotJSCell(regT0); >- jumpToEnd.append(branch8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(StringType))); >+ Jump isNotCell = branchIfNotCell(regT0); >+ jumpToEnd.append(branchIfString(regT0)); > isNotCell.link(this); > } > >@@ -1365,7 +1363,7 @@ void JIT::emit_op_profile_type(Instruction* currentInstruction) > store64(regT0, Address(regT1, TypeProfilerLog::LogEntry::valueOffset())); > > // Store the structureID of the cell if T0 is a cell, otherwise, store 0 on the log entry. 
>- Jump notCell = emitJumpIfNotJSCell(regT0); >+ Jump notCell = branchIfNotCell(regT0); > load32(Address(regT0, JSCell::structureIDOffset()), regT0); > store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); > Jump skipIsCell = jump(); >diff --git a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp >index 4c2e44558c778b97685f173c7d326f5e937f5d8f..7286432e75e81195a50c34cbe90ce6dced66f352 100644 >--- a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp >+++ b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp >@@ -152,7 +152,7 @@ void JIT::emit_op_instanceof(Instruction* currentInstruction) > emitJumpSlowCaseIfNotJSCell(proto); > > // Check that prototype is an object >- addSlowCase(emitJumpIfCellNotObject(regT1)); >+ addSlowCase(branchIfNotObject(regT1)); > > // Optimistically load the result true, and start looping. > // Initially, regT1 still contains proto and regT2 still contains value. >@@ -167,7 +167,7 @@ void JIT::emit_op_instanceof(Instruction* currentInstruction) > loadPtr(Address(regT2, JSCell::structureIDOffset()), regT4); > load32(Address(regT4, Structure::prototypeOffset() + TagOffset), regT3); > load32(Address(regT4, Structure::prototypeOffset() + PayloadOffset), regT4); >- auto hasMonoProto = branch32(NotEqual, regT3, TrustedImm32(JSValue::EmptyValueTag)); >+ auto hasMonoProto = branchIfNotEmpty(regT3); > load32(Address(regT2, offsetRelativeToBase(knownPolyProtoOffset) + PayloadOffset), regT4); > hasMonoProto.link(this); > move(regT4, regT2); >@@ -236,7 +236,7 @@ void JIT::emit_op_is_undefined(Instruction* currentInstruction) > int value = currentInstruction[2].u.operand; > > emitLoad(value, regT1, regT0); >- Jump isCell = branch32(Equal, regT1, TrustedImm32(JSValue::CellTag)); >+ Jump isCell = branchIfCell(regT1); > > compare32(Equal, regT1, TrustedImm32(JSValue::UndefinedTag), regT0); > Jump done = jump(); >@@ -285,7 +285,7 @@ void JIT::emit_op_is_cell_with_type(Instruction* 
currentInstruction) > int type = currentInstruction[3].u.operand; > > emitLoad(value, regT1, regT0); >- Jump isNotCell = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >+ Jump isNotCell = branchIfNotCell(regT1); > > compare8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(type), regT0); > Jump done = jump(); >@@ -303,7 +303,7 @@ void JIT::emit_op_is_object(Instruction* currentInstruction) > int value = currentInstruction[2].u.operand; > > emitLoad(value, regT1, regT0); >- Jump isNotCell = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >+ Jump isNotCell = branchIfNotCell(regT1); > > compare8(AboveOrEqual, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType), regT0); > Jump done = jump(); >@@ -322,8 +322,8 @@ void JIT::emit_op_to_primitive(Instruction* currentInstruction) > > emitLoad(src, regT1, regT0); > >- Jump isImm = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >- addSlowCase(emitJumpIfCellObject(regT0)); >+ Jump isImm = branchIfNotCell(regT1); >+ addSlowCase(branchIfObject(regT0)); > isImm.link(this); > > if (dst != src) >@@ -347,7 +347,7 @@ void JIT::emit_op_not(Instruction* currentInstruction) > emitLoadTag(src, regT0); > > emitLoad(src, regT1, regT0); >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::BooleanTag))); >+ addSlowCase(branchIfNotBoolean(regT1, InvalidGPRReg)); > xor32(TrustedImm32(1), regT0); > > emitStoreBool(dst, regT0, (dst == src)); >@@ -391,7 +391,7 @@ void JIT::emit_op_jeq_null(Instruction* currentInstruction) > > emitLoad(src, regT1, regT0); > >- Jump isImmediate = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >+ Jump isImmediate = branchIfNotCell(regT1); > > Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); > loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >@@ -401,9 +401,9 @@ void JIT::emit_op_jeq_null(Instruction* currentInstruction) > > 
// Now handle the immediate cases - undefined & null > isImmediate.link(this); >- ASSERT((JSValue::UndefinedTag + 1 == JSValue::NullTag) && (JSValue::NullTag & 0x1)); >+ static_assert((JSValue::UndefinedTag + 1 == JSValue::NullTag) && (JSValue::NullTag & 0x1), ""); > or32(TrustedImm32(1), regT1); >- addJump(branch32(Equal, regT1, TrustedImm32(JSValue::NullTag)), target); >+ addJump(branchIfNull(regT1), target); > > isNotMasqueradesAsUndefined.link(this); > masqueradesGlobalObjectIsForeign.link(this); >@@ -416,7 +416,7 @@ void JIT::emit_op_jneq_null(Instruction* currentInstruction) > > emitLoad(src, regT1, regT0); > >- Jump isImmediate = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >+ Jump isImmediate = branchIfNotCell(regT1); > > addJump(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target); > loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >@@ -427,9 +427,9 @@ void JIT::emit_op_jneq_null(Instruction* currentInstruction) > // Now handle the immediate cases - undefined & null > isImmediate.link(this); > >- ASSERT((JSValue::UndefinedTag + 1 == JSValue::NullTag) && (JSValue::NullTag & 0x1)); >+ static_assert((JSValue::UndefinedTag + 1 == JSValue::NullTag) && (JSValue::NullTag & 0x1), ""); > or32(TrustedImm32(1), regT1); >- addJump(branch32(NotEqual, regT1, TrustedImm32(JSValue::NullTag)), target); >+ addJump(branchIfNotNull(regT1), target); > > wasNotImmediate.link(this); > } >@@ -441,8 +441,8 @@ void JIT::emit_op_jneq_ptr(Instruction* currentInstruction) > unsigned target = currentInstruction[3].u.operand; > > emitLoad(src, regT1, regT0); >- CCallHelpers::Jump notCell = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >- CCallHelpers::Jump equal = branchPtr(Equal, regT0, TrustedImmPtr(actualPointerFor(m_codeBlock, ptr))); >+ Jump notCell = branchIfNotCell(regT1); >+ Jump equal = branchPtr(Equal, regT0, TrustedImmPtr(actualPointerFor(m_codeBlock, ptr))); > 
notCell.link(this); > store32(TrustedImm32(1), &currentInstruction[4].u.operand); > addJump(jump(), target); >@@ -457,7 +457,7 @@ void JIT::emit_op_eq(Instruction* currentInstruction) > > emitLoad2(src1, regT1, regT0, src2, regT3, regT2); > addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branch32(Equal, regT1, TrustedImm32(JSValue::CellTag))); >+ addSlowCase(branchIfCell(regT1)); > addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); > > compare32(Equal, regT0, regT2, regT0); >@@ -499,7 +499,7 @@ void JIT::emit_op_jeq(Instruction* currentInstruction) > > emitLoad2(src1, regT1, regT0, src2, regT3, regT2); > addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branch32(Equal, regT1, TrustedImm32(JSValue::CellTag))); >+ addSlowCase(branchIfCell(regT1)); > addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); > > addJump(branch32(Equal, regT0, regT2), target); >@@ -543,7 +543,7 @@ void JIT::emit_op_neq(Instruction* currentInstruction) > > emitLoad2(src1, regT1, regT0, src2, regT3, regT2); > addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branch32(Equal, regT1, TrustedImm32(JSValue::CellTag))); >+ addSlowCase(branchIfCell(regT1)); > addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); > > compare32(NotEqual, regT0, regT2, regT0); >@@ -586,7 +586,7 @@ void JIT::emit_op_jneq(Instruction* currentInstruction) > > emitLoad2(src1, regT1, regT0, src2, regT3, regT2); > addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branch32(Equal, regT1, TrustedImm32(JSValue::CellTag))); >+ addSlowCase(branchIfCell(regT1)); > addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); > > addJump(branch32(NotEqual, regT0, regT2), target); >@@ -610,9 +610,9 @@ void JIT::compileOpStrictEq(Instruction* currentInstruction, CompileOpStrictEqTy > addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); > > // Jump to a slow case if both are strings or symbols (non object). 
>- Jump notCell = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >- Jump firstIsObject = emitJumpIfCellObject(regT0); >- addSlowCase(emitJumpIfCellNotObject(regT2)); >+ Jump notCell = branchIfNotCell(regT1); >+ Jump firstIsObject = branchIfObject(regT0); >+ addSlowCase(branchIfNotObject(regT2)); > notCell.link(this); > firstIsObject.link(this); > >@@ -648,9 +648,9 @@ void JIT::compileOpStrictEqJump(Instruction* currentInstruction, CompileOpStrict > addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); > > // Jump to a slow case if both are strings or symbols (non object). >- Jump notCell = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >- Jump firstIsObject = emitJumpIfCellObject(regT0); >- addSlowCase(emitJumpIfCellNotObject(regT2)); >+ Jump notCell = branchIfNotCell(regT1); >+ Jump firstIsObject = branchIfObject(regT0); >+ addSlowCase(branchIfNotObject(regT2)); > notCell.link(this); > firstIsObject.link(this); > >@@ -695,7 +695,7 @@ void JIT::emit_op_eq_null(Instruction* currentInstruction) > int src = currentInstruction[2].u.operand; > > emitLoad(src, regT1, regT0); >- Jump isImmediate = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >+ Jump isImmediate = branchIfNotCell(regT1); > > Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); > move(TrustedImm32(0), regT1); >@@ -726,7 +726,7 @@ void JIT::emit_op_neq_null(Instruction* currentInstruction) > int src = currentInstruction[2].u.operand; > > emitLoad(src, regT1, regT0); >- Jump isImmediate = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)); >+ Jump isImmediate = branchIfNotCell(regT1); > > Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); > move(TrustedImm32(1), regT1); >@@ -767,7 +767,7 @@ void JIT::emit_op_to_number(Instruction* currentInstruction) > > emitLoad(src, regT1, 
regT0); > >- Jump isInt32 = branch32(Equal, regT1, TrustedImm32(JSValue::Int32Tag)); >+ Jump isInt32 = branchIfInt32(regT1); > addSlowCase(branch32(AboveOrEqual, regT1, TrustedImm32(JSValue::LowestTag))); > isInt32.link(this); > >@@ -783,8 +783,8 @@ void JIT::emit_op_to_string(Instruction* currentInstruction) > > emitLoad(src, regT1, regT0); > >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag))); >- addSlowCase(branch8(NotEqual, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(StringType))); >+ addSlowCase(branchIfNotCell(regT1)); >+ addSlowCase(branchIfNotString(regT0)); > > if (src != dst) > emitStore(dst, regT1, regT0); >@@ -797,8 +797,8 @@ void JIT::emit_op_to_object(Instruction* currentInstruction) > > emitLoad(src, regT1, regT0); > >- addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag))); >- addSlowCase(branch8(Below, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType))); >+ addSlowCase(branchIfNotCell(regT1)); >+ addSlowCase(branchIfNotObject(regT0)); > > emitValueProfilingSite(); > if (src != dst) >@@ -992,8 +992,8 @@ void JIT::emit_op_to_this(Instruction* currentInstruction) > > emitLoad(thisRegister, regT3, regT2); > >- addSlowCase(branch32(NotEqual, regT3, TrustedImm32(JSValue::CellTag))); >- addSlowCase(branch8(NotEqual, Address(regT2, JSCell::typeInfoTypeOffset()), TrustedImm32(FinalObjectType))); >+ addSlowCase(branchIfNotCell(regT3)); >+ addSlowCase(branchIfNotType(regT2, FinalObjectType)); > loadPtr(Address(regT2, JSCell::structureIDOffset()), regT0); > loadPtr(cachedStructure, regT2); > addSlowCase(branchPtr(NotEqual, regT0, regT2)); >@@ -1002,7 +1002,7 @@ void JIT::emit_op_to_this(Instruction* currentInstruction) > void JIT::emit_op_check_tdz(Instruction* currentInstruction) > { > emitLoadTag(currentInstruction[1].u.operand, regT0); >- addSlowCase(branch32(Equal, regT0, TrustedImm32(JSValue::EmptyValueTag))); >+ addSlowCase(branchIfEmpty(regT0)); > } > > void 
JIT::emit_op_has_structure_property(Instruction* currentInstruction) >@@ -1213,24 +1213,23 @@ void JIT::emit_op_profile_type(Instruction* currentInstruction) > > JumpList jumpToEnd; > >- jumpToEnd.append(branch32(Equal, regT3, TrustedImm32(JSValue::EmptyValueTag))); >+ jumpToEnd.append(branchIfEmpty(regT3)); > > // Compile in a predictive type check, if possible, to see if we can skip writing to the log. > // These typechecks are inlined to match those of the 32-bit JSValue type checks. > if (cachedTypeLocation->m_lastSeenType == TypeUndefined) >- jumpToEnd.append(branch32(Equal, regT3, TrustedImm32(JSValue::UndefinedTag))); >+ jumpToEnd.append(branchIfUndefined(regT3)); > else if (cachedTypeLocation->m_lastSeenType == TypeNull) >- jumpToEnd.append(branch32(Equal, regT3, TrustedImm32(JSValue::NullTag))); >+ jumpToEnd.append(branchIfNull(regT3)); > else if (cachedTypeLocation->m_lastSeenType == TypeBoolean) >- jumpToEnd.append(branch32(Equal, regT3, TrustedImm32(JSValue::BooleanTag))); >+ jumpToEnd.append(branchIfBoolean(regT3, InvalidGPRReg)); > else if (cachedTypeLocation->m_lastSeenType == TypeAnyInt) >- jumpToEnd.append(branch32(Equal, regT3, TrustedImm32(JSValue::Int32Tag))); >+ jumpToEnd.append(branchIfInt32(regT3)); > else if (cachedTypeLocation->m_lastSeenType == TypeNumber) { >- jumpToEnd.append(branch32(Below, regT3, TrustedImm32(JSValue::LowestTag))); >- jumpToEnd.append(branch32(Equal, regT3, TrustedImm32(JSValue::Int32Tag))); >+ jumpToEnd.append(branchIfNumber(JSValueRegs(regT3, regT0), regT1)); > } else if (cachedTypeLocation->m_lastSeenType == TypeString) { >- Jump isNotCell = branch32(NotEqual, regT3, TrustedImm32(JSValue::CellTag)); >- jumpToEnd.append(branch8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(StringType))); >+ Jump isNotCell = branchIfNotCell(regT3); >+ jumpToEnd.append(branchIfString(regT0)); > isNotCell.link(this); > } > >@@ -1246,7 +1245,7 @@ void JIT::emit_op_profile_type(Instruction* currentInstruction) > 
store32(regT3, Address(regT1, TypeProfilerLog::LogEntry::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag))); > > // Store the structureID of the cell if argument is a cell, otherwise, store 0 on the log entry. >- Jump notCell = branch32(NotEqual, regT3, TrustedImm32(JSValue::CellTag)); >+ Jump notCell = branchIfNotCell(regT3); > load32(Address(regT0, JSCell::structureIDOffset()), regT0); > store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); > Jump skipNotCell = jump(); >diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp >index 1b97da1a07b49ac3066262b1284dc20f1462e91a..6ce0a1907e0d741e91c8ef568dcfd3c67cd35a63 100644 >--- a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp >+++ b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp >@@ -156,7 +156,7 @@ void JIT::emit_op_get_by_val(Instruction* currentInstruction) > Label done = label(); > > if (!ASSERT_DISABLED) { >- Jump resultOK = branchTest64(NonZero, regT0); >+ Jump resultOK = branchIfNotEmpty(regT0); > abortWithReason(JITGetByValResultIsNotEmpty); > resultOK.link(this); > } >@@ -219,7 +219,7 @@ JITGetByIdGenerator JIT::emitGetByValWithCachedId(ByValInfo* byValInfo, Instruct > > int dst = currentInstruction[1].u.operand; > >- slowCases.append(emitJumpIfNotJSCell(regT1)); >+ slowCases.append(branchIfNotCell(regT1)); > emitByValIdentifierCheck(byValInfo, regT1, regT3, propertyName, slowCases); > > JITGetByIdGenerator gen( >@@ -350,11 +350,11 @@ JIT::JumpList JIT::emitGenericContiguousPutByVal(Instruction* currentInstruction > emitGetVirtualRegister(value, regT3); > switch (indexingShape) { > case Int32Shape: >- slowCases.append(emitJumpIfNotInt(regT3)); >+ slowCases.append(branchIfNotInt32(regT3)); > store64(regT3, BaseIndex(regT2, regT1, TimesEight)); > break; > case DoubleShape: { >- Jump notInt = emitJumpIfNotInt(regT3); >+ Jump notInt = branchIfNotInt32(regT3); > convertInt32ToDouble(regT3, fpRegT0); > Jump ready = jump(); > 
notInt.link(this); >@@ -433,7 +433,7 @@ JITPutByIdGenerator JIT::emitPutByValWithCachedId(ByValInfo* byValInfo, Instruct > int base = currentInstruction[1].u.operand; > int value = currentInstruction[3].u.operand; > >- slowCases.append(emitJumpIfNotJSCell(regT1)); >+ slowCases.append(branchIfNotCell(regT1)); > emitByValIdentifierCheck(byValInfo, regT1, regT1, propertyName, slowCases); > > // Write barrier breaks the registers. So after issuing the write barrier, >@@ -914,7 +914,7 @@ void JIT::emit_op_get_from_scope(Instruction* currentInstruction) > else > emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT0); > if (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks) // TDZ check. >- addSlowCase(branchTest64(Zero, regT0)); >+ addSlowCase(branchIfEmpty(regT0)); > break; > case ClosureVar: > case ClosureVarWithVarInjectionChecks: >@@ -1041,7 +1041,7 @@ void JIT::emit_op_put_to_scope(Instruction* currentInstruction) > emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT0); > else > emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT0); >- addSlowCase(branchTest64(Zero, regT0)); >+ addSlowCase(branchIfEmpty(regT0)); > } > if (indirectLoadForOperand) > emitPutGlobalVariableIndirect(bitwise_cast<JSValue**>(operandSlot), value, bitwise_cast<WatchpointSet**>(&currentInstruction[5])); >@@ -1150,13 +1150,13 @@ void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode > Jump valueNotCell; > if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) { > emitGetVirtualRegister(value, regT0); >- valueNotCell = branchTest64(NonZero, regT0, tagMaskRegister); >+ valueNotCell = branchIfNotCell(regT0); > } > > emitGetVirtualRegister(owner, regT0); > Jump ownerNotCell; > if (mode == ShouldFilterBaseAndValue || mode == ShouldFilterBase) >- ownerNotCell = branchTest64(NonZero, regT0, tagMaskRegister); >+ ownerNotCell = branchIfNotCell(regT0); > > Jump ownerIsRememberedOrInEden = 
barrierBranch(*vm(), regT0, regT1); > callOperation(operationWriteBarrierSlowPath, regT0); >@@ -1173,7 +1173,7 @@ void JIT::emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode mode) > emitGetVirtualRegister(value, regT0); > Jump valueNotCell; > if (mode == ShouldFilterValue) >- valueNotCell = branchTest64(NonZero, regT0, tagMaskRegister); >+ valueNotCell = branchIfNotCell(regT0); > > emitWriteBarrier(owner); > >@@ -1188,13 +1188,13 @@ void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode > Jump valueNotCell; > if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) { > emitLoadTag(value, regT0); >- valueNotCell = branch32(NotEqual, regT0, TrustedImm32(JSValue::CellTag)); >+ valueNotCell = branchIfNotCell(regT0); > } > > emitLoad(owner, regT0, regT1); > Jump ownerNotCell; > if (mode == ShouldFilterBase || mode == ShouldFilterBaseAndValue) >- ownerNotCell = branch32(NotEqual, regT0, TrustedImm32(JSValue::CellTag)); >+ ownerNotCell = branchIfNotCell(regT0); > > Jump ownerIsRememberedOrInEden = barrierBranch(*vm(), regT1, regT2); > callOperation(operationWriteBarrierSlowPath, regT1); >@@ -1211,7 +1211,7 @@ void JIT::emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode mode) > Jump valueNotCell; > if (mode == ShouldFilterValue) { > emitLoadTag(value, regT0); >- valueNotCell = branch32(NotEqual, regT0, TrustedImm32(JSValue::CellTag)); >+ valueNotCell = branchIfNotCell(regT0); > } > > emitWriteBarrier(owner); >@@ -1660,10 +1660,10 @@ JIT::JumpList JIT::emitIntTypedArrayPutByVal(Instruction* currentInstruction, Pa > > #if USE(JSVALUE64) > emitGetVirtualRegister(value, earlyScratch); >- slowCases.append(emitJumpIfNotInt(earlyScratch)); >+ slowCases.append(branchIfNotInt32(earlyScratch)); > #else > emitLoad(value, lateScratch, earlyScratch); >- slowCases.append(branch32(NotEqual, lateScratch, TrustedImm32(JSValue::Int32Tag))); >+ slowCases.append(branchIfNotInt32(lateScratch)); > #endif > > // We would be loading 
this into base as in get_by_val, except that the slow >@@ -1733,17 +1733,17 @@ JIT::JumpList JIT::emitFloatTypedArrayPutByVal(Instruction* currentInstruction, > > #if USE(JSVALUE64) > emitGetVirtualRegister(value, earlyScratch); >- Jump doubleCase = emitJumpIfNotInt(earlyScratch); >+ Jump doubleCase = branchIfNotInt32(earlyScratch); > convertInt32ToDouble(earlyScratch, fpRegT0); > Jump ready = jump(); > doubleCase.link(this); >- slowCases.append(emitJumpIfNotNumber(earlyScratch)); >+ slowCases.append(branchIfNotNumber(earlyScratch)); > add64(tagTypeNumberRegister, earlyScratch); > move64ToDouble(earlyScratch, fpRegT0); > ready.link(this); > #else > emitLoad(value, lateScratch, earlyScratch); >- Jump doubleCase = branch32(NotEqual, lateScratch, TrustedImm32(JSValue::Int32Tag)); >+ Jump doubleCase = branchIfNotInt32(lateScratch); > convertInt32ToDouble(earlyScratch, fpRegT0); > Jump ready = jump(); > doubleCase.link(this); >diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp >index 51bd480858f7e84467275ac2e74927c16179fc09..db2a472e850a566e07a023cf4330422218115d67 100644 >--- a/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp >+++ b/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp >@@ -213,7 +213,7 @@ void JIT::emit_op_get_by_val(Instruction* currentInstruction) > Label done = label(); > > if (!ASSERT_DISABLED) { >- Jump resultOK = branch32(NotEqual, regT1, TrustedImm32(JSValue::EmptyValueTag)); >+ Jump resultOK = branchIfNotEmpty(regT1); > abortWithReason(JITGetByValResultIsNotEmpty); > resultOK.link(this); > } >@@ -235,7 +235,7 @@ JIT::JumpList JIT::emitContiguousLoad(Instruction*, PatchableJump& badType, Inde > slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfPublicLength()))); > load32(BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); // tag > load32(BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, 
u.asBits.payload)), regT0); // payload >- slowCases.append(branch32(Equal, regT1, TrustedImm32(JSValue::EmptyValueTag))); >+ slowCases.append(branchIfEmpty(regT1)); > > return slowCases; > } >@@ -263,7 +263,7 @@ JIT::JumpList JIT::emitArrayStorageLoad(Instruction*, PatchableJump& badType) > slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, ArrayStorage::vectorLengthOffset()))); > load32(BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); // tag > load32(BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); // payload >- slowCases.append(branch32(Equal, regT1, TrustedImm32(JSValue::EmptyValueTag))); >+ slowCases.append(branchIfEmpty(regT1)); > > return slowCases; > } >@@ -276,7 +276,7 @@ JITGetByIdGenerator JIT::emitGetByValWithCachedId(ByValInfo* byValInfo, Instruct > // property: tag(regT3), payload(regT2) > // scratch: regT4 > >- slowCases.append(branch32(NotEqual, regT3, TrustedImm32(JSValue::CellTag))); >+ slowCases.append(branchIfNotCell(regT3)); > emitByValIdentifierCheck(byValInfo, regT2, regT4, propertyName, slowCases); > > JITGetByIdGenerator gen( >@@ -395,7 +395,7 @@ JIT::JumpList JIT::emitGenericContiguousPutByVal(Instruction* currentInstruction > emitLoad(value, regT1, regT0); > switch (indexingShape) { > case Int32Shape: >- slowCases.append(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag))); >+ slowCases.append(branchIfNotInt32(regT1)); > store32(regT0, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload))); > store32(regT1, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag))); > break; >@@ -406,7 +406,7 @@ JIT::JumpList JIT::emitGenericContiguousPutByVal(Instruction* currentInstruction > emitWriteBarrier(base, value, ShouldFilterValue); > break; > case DoubleShape: { >- Jump notInt = branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag)); >+ Jump 
notInt = branchIfNotInt32(regT1); > convertInt32ToDouble(regT0, fpRegT0); > Jump ready = jump(); > notInt.link(this); >@@ -482,7 +482,7 @@ JITPutByIdGenerator JIT::emitPutByValWithCachedId(ByValInfo* byValInfo, Instruct > int base = currentInstruction[1].u.operand; > int value = currentInstruction[3].u.operand; > >- slowCases.append(branch32(NotEqual, regT3, TrustedImm32(JSValue::CellTag))); >+ slowCases.append(branchIfNotCell(regT3)); > emitByValIdentifierCheck(byValInfo, regT2, regT2, propertyName, slowCases); > > // Write barrier breaks the registers. So after issuing the write barrier, >@@ -933,7 +933,7 @@ void JIT::emit_op_get_from_scope(Instruction* currentInstruction) > else > emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT1, regT0); > if (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks) // TDZ check. >- addSlowCase(branch32(Equal, regT1, TrustedImm32(JSValue::EmptyValueTag))); >+ addSlowCase(branchIfEmpty(regT1)); > break; > case ClosureVar: > case ClosureVarWithVarInjectionChecks: >@@ -1063,7 +1063,7 @@ void JIT::emit_op_put_to_scope(Instruction* currentInstruction) > emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT1, regT0); > else > emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT1, regT0); >- addSlowCase(branch32(Equal, regT1, TrustedImm32(JSValue::EmptyValueTag))); >+ addSlowCase(branchIfEmpty(regT1)); > } > if (indirectLoadForOperand) > emitPutGlobalVariableIndirect(bitwise_cast<JSValue**>(operandSlot), value, bitwise_cast<WatchpointSet**>(&currentInstruction[5])); >diff --git a/Source/JavaScriptCore/jit/JSInterfaceJIT.h b/Source/JavaScriptCore/jit/JSInterfaceJIT.h >index f1ed055783fc7a34ddad30f3572f87cc82bee337..2351973d4f9264eedf8d54467b3676df519ecd88 100644 >--- a/Source/JavaScriptCore/jit/JSInterfaceJIT.h >+++ b/Source/JavaScriptCore/jit/JSInterfaceJIT.h >@@ -59,14 +59,9 @@ namespace JSC { > #endif > > #if USE(JSVALUE64) >- Jump 
emitJumpIfNotJSCell(RegisterID); >- Jump emitJumpIfNumber(RegisterID); >- Jump emitJumpIfNotNumber(RegisterID); > void emitTagInt(RegisterID src, RegisterID dest); > #endif > >- Jump emitJumpIfNotType(RegisterID baseReg, JSType); >- > void emitGetFromCallFrameHeaderPtr(int entry, RegisterID to, RegisterID from = callFrameRegister); > void emitPutToCallFrameHeader(RegisterID from, int entry); > void emitPutToCallFrameHeader(void* value, int entry); >@@ -147,43 +142,29 @@ namespace JSC { > #endif > > #if USE(JSVALUE64) >- ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotJSCell(RegisterID reg) >- { >- return branchTest64(NonZero, reg, tagMaskRegister); >- } >- >- ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNumber(RegisterID reg) >- { >- return branchTest64(NonZero, reg, tagTypeNumberRegister); >- } >- ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotNumber(RegisterID reg) >- { >- return branchTest64(Zero, reg, tagTypeNumberRegister); >- } > inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadJSCell(unsigned virtualRegisterIndex, RegisterID dst) > { > load64(addressFor(virtualRegisterIndex), dst); >- return branchTest64(NonZero, dst, tagMaskRegister); >+ return branchIfNotCell(dst); > } > > inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadInt32(unsigned virtualRegisterIndex, RegisterID dst) > { > load64(addressFor(virtualRegisterIndex), dst); >- Jump result = branch64(Below, dst, tagTypeNumberRegister); >+ Jump notInt32 = branchIfNotInt32(dst); > zeroExtend32ToPtr(dst, dst); >- return result; >+ return notInt32; > } > > inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadDouble(unsigned virtualRegisterIndex, FPRegisterID dst, RegisterID scratch) > { > load64(addressFor(virtualRegisterIndex), scratch); >- Jump notNumber = emitJumpIfNotNumber(scratch); >- Jump notInt = branch64(Below, scratch, tagTypeNumberRegister); >+ Jump notNumber = branchIfNotNumber(scratch); >+ Jump notInt = branchIfNotInt32(scratch); > 
convertInt32ToDouble(scratch, dst); > Jump done = jump(); > notInt.link(this); >- add64(tagTypeNumberRegister, scratch); >- move64ToDouble(scratch, dst); >+ unboxDouble(scratch, scratch, dst); > done.link(this); > return notNumber; > } >@@ -216,11 +197,6 @@ namespace JSC { > } > #endif > >- ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotType(RegisterID baseReg, JSType type) >- { >- return branch8(NotEqual, Address(baseReg, JSCell::typeInfoTypeOffset()), TrustedImm32(type)); >- } >- > ALWAYS_INLINE void JSInterfaceJIT::emitGetFromCallFrameHeaderPtr(int entry, RegisterID to, RegisterID from) > { > loadPtr(Address(from, entry * sizeof(Register)), to); >diff --git a/Source/JavaScriptCore/jit/Repatch.cpp b/Source/JavaScriptCore/jit/Repatch.cpp >index 41eb1ea8d2fdd399f7b7f2138d61c0a2539cb9c6..bf88bdbb67ba31bc9128b7ffc56a8923f3f63066 100644 >--- a/Source/JavaScriptCore/jit/Repatch.cpp >+++ b/Source/JavaScriptCore/jit/Repatch.cpp >@@ -949,7 +949,7 @@ void linkPolymorphicCall( > // Verify that we have a function and stash the executable in scratchGPR. > > #if USE(JSVALUE64) >- slowPath.append(stubJit.branchTest64(CCallHelpers::NonZero, calleeGPR, GPRInfo::tagMaskRegister)); >+ slowPath.append(stubJit.branchIfNotCell(calleeGPR)); > #else > // We would have already checked that the callee is a cell. 
> #endif >diff --git a/Source/JavaScriptCore/jit/ThunkGenerators.cpp b/Source/JavaScriptCore/jit/ThunkGenerators.cpp >index 8832065bc2f515f2316d99ae73f3a9d24b2e13f4..6e7e6313dcf3e0ebc0e87efd6270f2365ed90231 100644 >--- a/Source/JavaScriptCore/jit/ThunkGenerators.cpp >+++ b/Source/JavaScriptCore/jit/ThunkGenerators.cpp >@@ -194,10 +194,7 @@ MacroAssemblerCodeRef<JITStubRoutinePtrTag> virtualThunkFor(VM* vm, CallLinkInfo > slowCase.append( > jit.branchTest64(CCallHelpers::NonZero, GPRInfo::regT0, tagMaskRegister)); > #else >- slowCase.append( >- jit.branch32( >- CCallHelpers::NotEqual, GPRInfo::regT1, >- CCallHelpers::TrustedImm32(JSValue::CellTag))); >+ slowCase.append(jit.branchIfNotCell(GPRInfo::regT1)); > #endif > auto notJSFunction = jit.branchIfNotType(GPRInfo::regT0, JSFunctionType); > >@@ -1024,7 +1021,7 @@ MacroAssemblerCodeRef<JITThunkPtrTag> absThunkGenerator(VM* vm) > #if USE(JSVALUE64) > unsigned virtualRegisterIndex = CallFrame::argumentOffset(0); > jit.load64(AssemblyHelpers::addressFor(virtualRegisterIndex), GPRInfo::regT0); >- MacroAssembler::Jump notInteger = jit.branch64(MacroAssembler::Below, GPRInfo::regT0, GPRInfo::tagTypeNumberRegister); >+ auto notInteger = jit.branchIfNotInt32(GPRInfo::regT0); > > // Abs Int32. > jit.rshift32(GPRInfo::regT0, MacroAssembler::TrustedImm32(31), GPRInfo::regT1); >@@ -1040,7 +1037,7 @@ MacroAssemblerCodeRef<JITThunkPtrTag> absThunkGenerator(VM* vm) > > // Handle Doubles. > notInteger.link(&jit); >- jit.appendFailure(jit.branchTest64(MacroAssembler::Zero, GPRInfo::regT0, GPRInfo::tagTypeNumberRegister)); >+ jit.appendFailure(jit.branchIfNotNumber(GPRInfo::regT0)); > jit.unboxDoubleWithoutAssertions(GPRInfo::regT0, GPRInfo::regT0, FPRInfo::fpRegT0); > MacroAssembler::Label absFPR0Label = jit.label(); > jit.absDouble(FPRInfo::fpRegT0, FPRInfo::fpRegT1);