WebKit Bugzilla
Attachment 341398 Details for Bug 186013: [Baseline] Merge JITPropertyAccess, JITArithmetic, JITOpcodes, and JITCall to JIT
Description: Patch
Filename: bug-186013-20180527004057.patch
MIME Type: text/plain
Creator: Yusuke Suzuki
Created: 2018-05-26 08:40:59 PDT
Size: 621.01 KB
Flags: patch, obsolete
>Subversion Revision: 232224 >diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog >index a3e2a22f79f95679fabc61bd22fd39974ba730d5..1c7c110bf16e50bdaae1e9ce70456700db43bb13 100644 >--- a/Source/JavaScriptCore/ChangeLog >+++ b/Source/JavaScriptCore/ChangeLog >@@ -1,3 +1,406 @@ >+2018-05-26 Yusuke Suzuki <utatane.tea@gmail.com> >+ >+ [Baseline] Merge JITPropertyAccess, JITArithmetic, JITOpcodes, and JITCall to JIT >+ https://bugs.webkit.org/show_bug.cgi?id=186013 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ It is hard to check whether the given functionality is implemented in 32bit / 64bit, >+ since JIT code is scattered into JITPropertyAccess, JITArithmetic, JITOpcodes, and JITCall. >+ While JITPropertyAccess, JITArithmetic, JITOpcodes, and JITCall includes 32bit and 64bit code, >+ XXX32_64.cpp does not include 64bit code. So when looking into JITOpcodes, we need to check >+ that the function is implemented for 64bit, 32bit, or both. >+ >+ This patch aligns JIT's implementation to DFG. We have three files for JIT.h, JIT.cpp, JIT32_64.cpp, >+ and JIT64.cpp. JIT32_64 only includes 32bit implementation, and JIT64 includes 64bit implementation. >+ JIT.cpp includes code for both. This change extracts 32bit code from scattered JIT files effectively. >+ >+ * JavaScriptCore.xcodeproj/project.pbxproj: >+ * Sources.txt: >+ * jit/JIT.cpp: >+ (JSC::JIT::emit_op_loop_hint): >+ (JSC::JIT::emitSlow_op_loop_hint): >+ (JSC::JIT::emit_op_check_traps): >+ (JSC::JIT::emit_op_nop): >+ (JSC::JIT::emit_op_super_sampler_begin): >+ (JSC::JIT::emit_op_super_sampler_end): >+ (JSC::JIT::emitSlow_op_check_traps): >+ (JSC::JIT::emit_op_new_regexp): >+ (JSC::JIT::emitNewFuncCommon): >+ (JSC::JIT::emit_op_new_func): >+ (JSC::JIT::emit_op_new_generator_func): >+ (JSC::JIT::emit_op_new_async_generator_func): >+ (JSC::JIT::emit_op_new_async_func): >+ (JSC::JIT::emitNewFuncExprCommon): >+ (JSC::JIT::emit_op_new_func_exp): >+ (JSC::JIT::emit_op_new_generator_func_exp): >+ (JSC::JIT::emit_op_new_async_func_exp): >+ (JSC::JIT::emit_op_new_async_generator_func_exp): >+ (JSC::JIT::emit_op_new_array): >+ (JSC::JIT::emit_op_new_array_with_size): >+ (JSC::JIT::emit_op_profile_control_flow): >+ (JSC::JIT::emit_op_argument_count): >+ (JSC::JIT::emit_op_get_rest_length): >+ (JSC::JIT::emit_op_get_argument): >+ (JSC::JIT::emit_op_jless): >+ (JSC::JIT::emit_op_jlesseq): >+ (JSC::JIT::emit_op_jgreater): >+ (JSC::JIT::emit_op_jgreatereq): >+ (JSC::JIT::emit_op_jnless): >+ (JSC::JIT::emit_op_jnlesseq): >+ (JSC::JIT::emit_op_jngreater): >+ (JSC::JIT::emit_op_jngreatereq): >+ (JSC::JIT::emitSlow_op_jless): >+ (JSC::JIT::emitSlow_op_jlesseq): >+ (JSC::JIT::emitSlow_op_jgreater): >+ (JSC::JIT::emitSlow_op_jgreatereq): >+ (JSC::JIT::emitSlow_op_jnless): >+ (JSC::JIT::emitSlow_op_jnlesseq): >+ (JSC::JIT::emitSlow_op_jngreater): >+ (JSC::JIT::emitSlow_op_jngreatereq): >+ (JSC::JIT::emit_op_below): >+ (JSC::JIT::emit_op_beloweq): >+ (JSC::JIT::emit_op_jbelow): >+ (JSC::JIT::emit_op_jbeloweq): >+ (JSC::JIT::emit_op_negate): >+ (JSC::JIT::emitSlow_op_negate): >+ (JSC::JIT::emitBitBinaryOpFastPath): >+ (JSC::JIT::emit_op_bitand): >+ (JSC::JIT::emit_op_bitor): >+ (JSC::JIT::emit_op_bitxor): >+ (JSC::JIT::emit_op_lshift): >+ (JSC::JIT::emitRightShiftFastPath): >+ (JSC::JIT::emit_op_rshift): >+ (JSC::JIT::emit_op_urshift): >+ (JSC::getOperandTypes): >+ (JSC::JIT::emit_op_add): >+ (JSC::JIT::emitSlow_op_add): >+ (JSC::JIT::emitMathICFast): >+ (JSC::JIT::emitMathICSlow): >+ (JSC::JIT::emit_op_div): >+ (JSC::JIT::emit_op_mul): >+ 
(JSC::JIT::emitSlow_op_mul): >+ (JSC::JIT::emit_op_sub): >+ (JSC::JIT::emitSlow_op_sub): >+ (JSC::JIT::emitWriteBarrier): >+ (JSC::JIT::emitByValIdentifierCheck): >+ (JSC::JIT::privateCompileGetByVal): >+ (JSC::JIT::privateCompileGetByValWithCachedId): >+ (JSC::JIT::privateCompilePutByVal): >+ (JSC::JIT::privateCompilePutByValWithCachedId): >+ (JSC::JIT::emitDirectArgumentsGetByVal): >+ (JSC::JIT::emitScopedArgumentsGetByVal): >+ (JSC::JIT::emitIntTypedArrayGetByVal): >+ (JSC::JIT::emitFloatTypedArrayGetByVal): >+ (JSC::JIT::emitIntTypedArrayPutByVal): >+ (JSC::JIT::emitFloatTypedArrayPutByVal): >+ * jit/JIT32_64.cpp: Added. >+ (JSC::JIT::emit_op_mov): >+ (JSC::JIT::emit_op_end): >+ (JSC::JIT::emit_op_jmp): >+ (JSC::JIT::emit_op_new_object): >+ (JSC::JIT::emitSlow_op_new_object): >+ (JSC::JIT::emit_op_overrides_has_instance): >+ (JSC::JIT::emit_op_instanceof): >+ (JSC::JIT::emit_op_instanceof_custom): >+ (JSC::JIT::emitSlow_op_instanceof): >+ (JSC::JIT::emitSlow_op_instanceof_custom): >+ (JSC::JIT::emit_op_is_empty): >+ (JSC::JIT::emit_op_is_undefined): >+ (JSC::JIT::emit_op_is_boolean): >+ (JSC::JIT::emit_op_is_number): >+ (JSC::JIT::emit_op_is_cell_with_type): >+ (JSC::JIT::emit_op_is_object): >+ (JSC::JIT::emit_op_to_primitive): >+ (JSC::JIT::emit_op_set_function_name): >+ (JSC::JIT::emit_op_not): >+ (JSC::JIT::emit_op_jfalse): >+ (JSC::JIT::emit_op_jtrue): >+ (JSC::JIT::emit_op_jeq_null): >+ (JSC::JIT::emit_op_jneq_null): >+ (JSC::JIT::emit_op_jneq_ptr): >+ (JSC::JIT::emit_op_eq): >+ (JSC::JIT::emitSlow_op_eq): >+ (JSC::JIT::emit_op_jeq): >+ (JSC::JIT::compileOpEqJumpSlow): >+ (JSC::JIT::emitSlow_op_jeq): >+ (JSC::JIT::emit_op_neq): >+ (JSC::JIT::emitSlow_op_neq): >+ (JSC::JIT::emit_op_jneq): >+ (JSC::JIT::emitSlow_op_jneq): >+ (JSC::JIT::compileOpStrictEq): >+ (JSC::JIT::emit_op_stricteq): >+ (JSC::JIT::emit_op_nstricteq): >+ (JSC::JIT::compileOpStrictEqJump): >+ (JSC::JIT::emit_op_jstricteq): >+ (JSC::JIT::emit_op_jnstricteq): >+ (JSC::JIT::emitSlow_op_jstricteq): >+ (JSC::JIT::emitSlow_op_jnstricteq): >+ (JSC::JIT::emit_op_eq_null): >+ (JSC::JIT::emit_op_neq_null): >+ (JSC::JIT::emit_op_throw): >+ (JSC::JIT::emit_op_to_number): >+ (JSC::JIT::emit_op_to_string): >+ (JSC::JIT::emit_op_to_object): >+ (JSC::JIT::emit_op_catch): >+ (JSC::JIT::emit_op_identity_with_profile): >+ (JSC::JIT::emit_op_get_parent_scope): >+ (JSC::JIT::emit_op_switch_imm): >+ (JSC::JIT::emit_op_switch_char): >+ (JSC::JIT::emit_op_switch_string): >+ (JSC::JIT::emit_op_debug): >+ (JSC::JIT::emit_op_enter): >+ (JSC::JIT::emit_op_get_scope): >+ (JSC::JIT::emit_op_create_this): >+ (JSC::JIT::emit_op_to_this): >+ (JSC::JIT::emit_op_check_tdz): >+ (JSC::JIT::emit_op_has_structure_property): >+ (JSC::JIT::privateCompileHasIndexedProperty): >+ (JSC::JIT::emit_op_has_indexed_property): >+ (JSC::JIT::emitSlow_op_has_indexed_property): >+ (JSC::JIT::emit_op_get_direct_pname): >+ (JSC::JIT::emit_op_enumerator_structure_pname): >+ (JSC::JIT::emit_op_enumerator_generic_pname): >+ (JSC::JIT::emit_op_profile_type): >+ (JSC::JIT::emit_op_log_shadow_chicken_prologue): >+ (JSC::JIT::emit_op_log_shadow_chicken_tail): >+ (JSC::JIT::emit_compareAndJump): >+ (JSC::JIT::emit_compareUnsignedAndJump): >+ (JSC::JIT::emit_compareUnsigned): >+ (JSC::JIT::emit_compareAndJumpSlow): >+ (JSC::JIT::emit_op_unsigned): >+ (JSC::JIT::emit_op_inc): >+ (JSC::JIT::emit_op_dec): >+ (JSC::JIT::emitBinaryDoubleOp): >+ (JSC::JIT::emit_op_mod): >+ (JSC::JIT::emitSlow_op_mod): >+ (JSC::JIT::emit_op_put_getter_by_id): >+ (JSC::JIT::emit_op_put_setter_by_id): 
>+ (JSC::JIT::emit_op_put_getter_setter_by_id): >+ (JSC::JIT::emit_op_put_getter_by_val): >+ (JSC::JIT::emit_op_put_setter_by_val): >+ (JSC::JIT::emit_op_del_by_id): >+ (JSC::JIT::emit_op_del_by_val): >+ (JSC::JIT::emit_op_get_by_val): >+ (JSC::JIT::emitContiguousLoad): >+ (JSC::JIT::emitDoubleLoad): >+ (JSC::JIT::emitArrayStorageLoad): >+ (JSC::JIT::emitGetByValWithCachedId): >+ (JSC::JIT::emitSlow_op_get_by_val): >+ (JSC::JIT::emit_op_put_by_val): >+ (JSC::JIT::emitGenericContiguousPutByVal): >+ (JSC::JIT::emitArrayStoragePutByVal): >+ (JSC::JIT::emitPutByValWithCachedId): >+ (JSC::JIT::emitSlow_op_put_by_val): >+ (JSC::JIT::emit_op_try_get_by_id): >+ (JSC::JIT::emitSlow_op_try_get_by_id): >+ (JSC::JIT::emit_op_get_by_id_direct): >+ (JSC::JIT::emitSlow_op_get_by_id_direct): >+ (JSC::JIT::emit_op_get_by_id): >+ (JSC::JIT::emitSlow_op_get_by_id): >+ (JSC::JIT::emit_op_get_by_id_with_this): >+ (JSC::JIT::emitSlow_op_get_by_id_with_this): >+ (JSC::JIT::emit_op_put_by_id): >+ (JSC::JIT::emitSlow_op_put_by_id): >+ (JSC::JIT::emit_op_in_by_id): >+ (JSC::JIT::emitSlow_op_in_by_id): >+ (JSC::JIT::emitVarInjectionCheck): >+ (JSC::JIT::emitResolveClosure): >+ (JSC::JIT::emit_op_resolve_scope): >+ (JSC::JIT::emitLoadWithStructureCheck): >+ (JSC::JIT::emitGetVarFromPointer): >+ (JSC::JIT::emitGetVarFromIndirectPointer): >+ (JSC::JIT::emitGetClosureVar): >+ (JSC::JIT::emit_op_get_from_scope): >+ (JSC::JIT::emitSlow_op_get_from_scope): >+ (JSC::JIT::emitPutGlobalVariable): >+ (JSC::JIT::emitPutGlobalVariableIndirect): >+ (JSC::JIT::emitPutClosureVar): >+ (JSC::JIT::emit_op_put_to_scope): >+ (JSC::JIT::emitSlow_op_put_to_scope): >+ (JSC::JIT::emit_op_get_from_arguments): >+ (JSC::JIT::emit_op_put_to_arguments): >+ (JSC::JIT::emitWriteBarrier): >+ (JSC::JIT::emitPutCallResult): >+ (JSC::JIT::emit_op_ret): >+ (JSC::JIT::emitSlow_op_call): >+ (JSC::JIT::emitSlow_op_tail_call): >+ (JSC::JIT::emitSlow_op_call_eval): >+ (JSC::JIT::emitSlow_op_call_varargs): >+ (JSC::JIT::emitSlow_op_tail_call_varargs): >+ (JSC::JIT::emitSlow_op_tail_call_forward_arguments): >+ (JSC::JIT::emitSlow_op_construct_varargs): >+ (JSC::JIT::emitSlow_op_construct): >+ (JSC::JIT::emit_op_call): >+ (JSC::JIT::emit_op_tail_call): >+ (JSC::JIT::emit_op_call_eval): >+ (JSC::JIT::emit_op_call_varargs): >+ (JSC::JIT::emit_op_tail_call_varargs): >+ (JSC::JIT::emit_op_tail_call_forward_arguments): >+ (JSC::JIT::emit_op_construct_varargs): >+ (JSC::JIT::emit_op_construct): >+ (JSC::JIT::compileSetupVarargsFrame): >+ (JSC::JIT::compileCallEval): >+ (JSC::JIT::compileCallEvalSlowCase): >+ (JSC::JIT::compileOpCall): >+ (JSC::JIT::compileOpCallSlowCase): >+ * jit/JIT64.cpp: Added. 
>+ (JSC::JIT::emit_op_mov): >+ (JSC::JIT::emit_op_end): >+ (JSC::JIT::emit_op_jmp): >+ (JSC::JIT::emit_op_new_object): >+ (JSC::JIT::emitSlow_op_new_object): >+ (JSC::JIT::emit_op_overrides_has_instance): >+ (JSC::JIT::emit_op_instanceof): >+ (JSC::JIT::emitSlow_op_instanceof): >+ (JSC::JIT::emit_op_instanceof_custom): >+ (JSC::JIT::emit_op_is_empty): >+ (JSC::JIT::emit_op_is_undefined): >+ (JSC::JIT::emit_op_is_boolean): >+ (JSC::JIT::emit_op_is_number): >+ (JSC::JIT::emit_op_is_cell_with_type): >+ (JSC::JIT::emit_op_is_object): >+ (JSC::JIT::emit_op_ret): >+ (JSC::JIT::emit_op_to_primitive): >+ (JSC::JIT::emit_op_set_function_name): >+ (JSC::JIT::emit_op_not): >+ (JSC::JIT::emit_op_jfalse): >+ (JSC::JIT::emit_op_jeq_null): >+ (JSC::JIT::emit_op_jneq_null): >+ (JSC::JIT::emit_op_jneq_ptr): >+ (JSC::JIT::emit_op_eq): >+ (JSC::JIT::emit_op_jeq): >+ (JSC::JIT::emit_op_jtrue): >+ (JSC::JIT::emit_op_neq): >+ (JSC::JIT::emit_op_jneq): >+ (JSC::JIT::emit_op_throw): >+ (JSC::JIT::compileOpStrictEq): >+ (JSC::JIT::emit_op_stricteq): >+ (JSC::JIT::emit_op_nstricteq): >+ (JSC::JIT::compileOpStrictEqJump): >+ (JSC::JIT::emit_op_jstricteq): >+ (JSC::JIT::emit_op_jnstricteq): >+ (JSC::JIT::emitSlow_op_jstricteq): >+ (JSC::JIT::emitSlow_op_jnstricteq): >+ (JSC::JIT::emit_op_to_number): >+ (JSC::JIT::emit_op_to_string): >+ (JSC::JIT::emit_op_to_object): >+ (JSC::JIT::emit_op_catch): >+ (JSC::JIT::emit_op_identity_with_profile): >+ (JSC::JIT::emit_op_get_parent_scope): >+ (JSC::JIT::emit_op_switch_imm): >+ (JSC::JIT::emit_op_switch_char): >+ (JSC::JIT::emit_op_switch_string): >+ (JSC::JIT::emit_op_debug): >+ (JSC::JIT::emit_op_eq_null): >+ (JSC::JIT::emit_op_neq_null): >+ (JSC::JIT::emit_op_enter): >+ (JSC::JIT::emit_op_get_scope): >+ (JSC::JIT::emit_op_to_this): >+ (JSC::JIT::emit_op_create_this): >+ (JSC::JIT::emit_op_check_tdz): >+ (JSC::JIT::emitSlow_op_eq): >+ (JSC::JIT::emitSlow_op_neq): >+ (JSC::JIT::emitSlow_op_jeq): >+ (JSC::JIT::emitSlow_op_jneq): >+ (JSC::JIT::emitSlow_op_instanceof_custom): >+ (JSC::JIT::emit_op_has_structure_property): >+ (JSC::JIT::privateCompileHasIndexedProperty): >+ (JSC::JIT::emit_op_has_indexed_property): >+ (JSC::JIT::emitSlow_op_has_indexed_property): >+ (JSC::JIT::emit_op_get_direct_pname): >+ (JSC::JIT::emit_op_enumerator_structure_pname): >+ (JSC::JIT::emit_op_enumerator_generic_pname): >+ (JSC::JIT::emit_op_profile_type): >+ (JSC::JIT::emit_op_log_shadow_chicken_prologue): >+ (JSC::JIT::emit_op_log_shadow_chicken_tail): >+ (JSC::JIT::emit_op_unsigned): >+ (JSC::JIT::emit_compareAndJump): >+ (JSC::JIT::emit_compareUnsignedAndJump): >+ (JSC::JIT::emit_compareUnsigned): >+ (JSC::JIT::emit_compareAndJumpSlow): >+ (JSC::JIT::emit_op_inc): >+ (JSC::JIT::emit_op_dec): >+ (JSC::JIT::emit_op_mod): >+ (JSC::JIT::emitSlow_op_mod): >+ (JSC::JIT::emit_op_get_by_val): >+ (JSC::JIT::emitDoubleLoad): >+ (JSC::JIT::emitContiguousLoad): >+ (JSC::JIT::emitArrayStorageLoad): >+ (JSC::JIT::emitGetByValWithCachedId): >+ (JSC::JIT::emitSlow_op_get_by_val): >+ (JSC::JIT::emit_op_put_by_val): >+ (JSC::JIT::emitGenericContiguousPutByVal): >+ (JSC::JIT::emitArrayStoragePutByVal): >+ (JSC::JIT::emitPutByValWithCachedId): >+ (JSC::JIT::emitSlow_op_put_by_val): >+ (JSC::JIT::emit_op_put_getter_by_id): >+ (JSC::JIT::emit_op_put_setter_by_id): >+ (JSC::JIT::emit_op_put_getter_setter_by_id): >+ (JSC::JIT::emit_op_put_getter_by_val): >+ (JSC::JIT::emit_op_put_setter_by_val): >+ (JSC::JIT::emit_op_del_by_id): >+ (JSC::JIT::emit_op_del_by_val): >+ (JSC::JIT::emit_op_try_get_by_id): >+ 
(JSC::JIT::emitSlow_op_try_get_by_id): >+ (JSC::JIT::emit_op_get_by_id_direct): >+ (JSC::JIT::emitSlow_op_get_by_id_direct): >+ (JSC::JIT::emit_op_get_by_id): >+ (JSC::JIT::emit_op_get_by_id_with_this): >+ (JSC::JIT::emitSlow_op_get_by_id): >+ (JSC::JIT::emitSlow_op_get_by_id_with_this): >+ (JSC::JIT::emit_op_put_by_id): >+ (JSC::JIT::emitSlow_op_put_by_id): >+ (JSC::JIT::emit_op_in_by_id): >+ (JSC::JIT::emitSlow_op_in_by_id): >+ (JSC::JIT::emitVarInjectionCheck): >+ (JSC::JIT::emitResolveClosure): >+ (JSC::JIT::emit_op_resolve_scope): >+ (JSC::JIT::emitLoadWithStructureCheck): >+ (JSC::JIT::emitGetVarFromPointer): >+ (JSC::JIT::emitGetVarFromIndirectPointer): >+ (JSC::JIT::emitGetClosureVar): >+ (JSC::JIT::emit_op_get_from_scope): >+ (JSC::JIT::emitSlow_op_get_from_scope): >+ (JSC::JIT::emitPutGlobalVariable): >+ (JSC::JIT::emitPutGlobalVariableIndirect): >+ (JSC::JIT::emitPutClosureVar): >+ (JSC::JIT::emit_op_put_to_scope): >+ (JSC::JIT::emitSlow_op_put_to_scope): >+ (JSC::JIT::emit_op_get_from_arguments): >+ (JSC::JIT::emit_op_put_to_arguments): >+ (JSC::JIT::emitWriteBarrier): >+ (JSC::JIT::emitPutCallResult): >+ (JSC::JIT::compileSetupVarargsFrame): >+ (JSC::JIT::compileCallEval): >+ (JSC::JIT::compileCallEvalSlowCase): >+ (JSC::JIT::compileOpCall): >+ (JSC::JIT::compileOpCallSlowCase): >+ (JSC::JIT::emit_op_call): >+ (JSC::JIT::emit_op_tail_call): >+ (JSC::JIT::emit_op_call_eval): >+ (JSC::JIT::emit_op_call_varargs): >+ (JSC::JIT::emit_op_tail_call_varargs): >+ (JSC::JIT::emit_op_tail_call_forward_arguments): >+ (JSC::JIT::emit_op_construct_varargs): >+ (JSC::JIT::emit_op_construct): >+ (JSC::JIT::emitSlow_op_call): >+ (JSC::JIT::emitSlow_op_tail_call): >+ (JSC::JIT::emitSlow_op_call_eval): >+ (JSC::JIT::emitSlow_op_call_varargs): >+ (JSC::JIT::emitSlow_op_tail_call_varargs): >+ (JSC::JIT::emitSlow_op_tail_call_forward_arguments): >+ (JSC::JIT::emitSlow_op_construct_varargs): >+ (JSC::JIT::emitSlow_op_construct): >+ * jit/JITArithmetic.cpp: Removed. >+ * jit/JITArithmetic32_64.cpp: Removed. >+ * jit/JITCall.cpp: Removed. >+ * jit/JITCall32_64.cpp: Removed. >+ * jit/JITOpcodes.cpp: Removed. >+ * jit/JITOpcodes32_64.cpp: Removed. >+ * jit/JITPropertyAccess.cpp: Removed. >+ * jit/JITPropertyAccess32_64.cpp: Removed. >+ > 2018-05-25 Mark Lam <mark.lam@apple.com> > > for-in loops should preserve and restore the TDZ stack for each of its internal loops. 
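For orientation, the following sketch (not part of the patch) illustrates the layout the ChangeLog above describes: opcodes shared by both word sizes stay in jit/JIT.cpp behind USE(JSVALUE64) guards, while purely 64-bit and purely 32-bit emitters move to jit/JIT64.cpp and jit/JIT32_64.cpp, mirroring the DFG convention. The opcode op_example and the helper operationExample are hypothetical names used only to show the split; emitGetVirtualRegister, emitLoad, callOperation, and JSValueRegs are the real helpers the patch uses.

// jit/JIT.cpp: shared fast path, compiled for both word sizes.
void JIT::emit_op_example(Instruction* currentInstruction)
{
    int dst = currentInstruction[1].u.operand;
#if USE(JSVALUE64)
    // 64-bit: a whole JSValue fits in one GPR.
    emitGetVirtualRegister(currentInstruction[2].u.operand, regT0);
    callOperation(operationExample, dst, regT0); // operationExample is a placeholder
#else
    // 32-bit: tag and payload travel in a register pair.
    emitLoad(currentInstruction[2].u.operand, regT1, regT0);
    callOperation(operationExample, dst, JSValueRegs(regT1, regT0)); // placeholder operation
#endif
}

// jit/JIT64.cpp: emitters that only exist for 64-bit (formerly scattered across
// JITOpcodes.cpp, JITArithmetic.cpp, JITPropertyAccess.cpp, and JITCall.cpp).
// jit/JIT32_64.cpp: the corresponding 32-bit-only emitters (formerly in the *32_64.cpp files).

The same guard pattern appears throughout the new JIT.cpp in this patch, for example in emit_op_new_array_with_size and emit_op_get_rest_length below.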
>diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj >index 29cbab4d3b04b888e0eb5e184055d8d3bbd560db..783cf7696f7bb074ecc3922a7705c76bef1f4709 100644 >--- a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj >+++ b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj >@@ -3086,7 +3086,7 @@ > 146AAB370B66A94400E55F16 /* JSStringRefCF.cpp */ = {isa = PBXFileReference; fileEncoding = 30; lastKnownFileType = sourcecode.cpp.cpp; path = JSStringRefCF.cpp; sourceTree = "<group>"; }; > 146B14DB12EB5B12001BEC1B /* ConservativeRoots.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ConservativeRoots.cpp; sourceTree = "<group>"; }; > 146FA5A81378F6B0003627A3 /* HandleTypes.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HandleTypes.h; sourceTree = "<group>"; }; >- 146FE51111A710430087AE66 /* JITCall32_64.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITCall32_64.cpp; sourceTree = "<group>"; }; >+ 146FE51111A710430087AE66 /* JIT32_64.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JIT32_64.cpp; sourceTree = "<group>"; }; > 147341CB1DC02D7200AA29BA /* ExecutableBase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ExecutableBase.h; sourceTree = "<group>"; }; > 147341CD1DC02D7900AA29BA /* ScriptExecutable.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ScriptExecutable.h; sourceTree = "<group>"; }; > 147341CF1DC02DB400AA29BA /* NativeExecutable.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = NativeExecutable.h; sourceTree = "<group>"; }; >@@ -3767,7 +3767,6 @@ > 86A054461556451B00445157 /* LowLevelInterpreter.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; name = LowLevelInterpreter.asm; path = llint/LowLevelInterpreter.asm; sourceTree = "<group>"; }; > 86A054471556451B00445157 /* LowLevelInterpreter32_64.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; lineEnding = 0; name = LowLevelInterpreter32_64.asm; path = llint/LowLevelInterpreter32_64.asm; sourceTree = "<group>"; }; > 86A054481556451B00445157 /* LowLevelInterpreter64.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; lineEnding = 0; name = LowLevelInterpreter64.asm; path = llint/LowLevelInterpreter64.asm; sourceTree = "<group>"; }; >- 86A90ECF0EE7D51F00AB350D /* JITArithmetic.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITArithmetic.cpp; sourceTree = "<group>"; }; > 86ADD1430FDDEA980006EEC2 /* ARMv7Assembler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ARMv7Assembler.h; sourceTree = "<group>"; }; > 86ADD1440FDDEA980006EEC2 /* MacroAssemblerARMv7.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MacroAssemblerARMv7.h; sourceTree = "<group>"; }; > 86B5822C14D22F5F00A9C306 /* ProfileTreeNode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProfileTreeNode.h; sourceTree = "<group>"; }; >@@ -3780,8 +3779,7 @@ > 86C568DE11A213EE0007F7F0 /* MacroAssemblerMIPS.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = 
MacroAssemblerMIPS.h; sourceTree = "<group>"; }; > 86C568DF11A213EE0007F7F0 /* MIPSAssembler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MIPSAssembler.h; sourceTree = "<group>"; }; > 86CC85A00EE79A4700288682 /* JITInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITInlines.h; sourceTree = "<group>"; }; >- 86CC85A20EE79B7400288682 /* JITCall.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITCall.cpp; sourceTree = "<group>"; }; >- 86CC85C30EE7A89400288682 /* JITPropertyAccess.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITPropertyAccess.cpp; sourceTree = "<group>"; }; >+ 86CC85A20EE79B7400288682 /* JIT64.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JIT64.cpp; sourceTree = "<group>"; }; > 86CCEFDD0F413F8900FD7F9E /* JITCode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITCode.h; sourceTree = "<group>"; }; > 86D22219167EF9440024C804 /* testapi.mm */ = {isa = PBXFileReference; explicitFileType = sourcecode.cpp.objcpp; fileEncoding = 4; name = testapi.mm; path = API/tests/testapi.mm; sourceTree = "<group>"; }; > 86D3B2BF10156BDE002865E7 /* ARMAssembler.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ARMAssembler.cpp; sourceTree = "<group>"; }; >@@ -4115,7 +4113,6 @@ > A704D90117A0BAA8006BA554 /* DFGInPlaceAbstractState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGInPlaceAbstractState.h; path = dfg/DFGInPlaceAbstractState.h; sourceTree = "<group>"; }; > A709F2EF17A0AC0400512E98 /* SlowPathCall.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SlowPathCall.h; sourceTree = "<group>"; }; > A709F2F117A0AC2A00512E98 /* CommonSlowPaths.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CommonSlowPaths.cpp; sourceTree = "<group>"; }; >- A71236E41195F33C00BD2174 /* JITOpcodes32_64.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITOpcodes32_64.cpp; sourceTree = "<group>"; }; > A718F61A11754A21002465A7 /* RegExpJitTables.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegExpJitTables.h; sourceTree = "<group>"; }; > A718F8211178EB4B002465A7 /* create_regex_tables */ = {isa = PBXFileReference; explicitFileType = text.script.python; fileEncoding = 4; name = create_regex_tables; path = yarr/create_regex_tables; sourceTree = "<group>"; }; > A72028B41797601E0098028C /* JSCTestRunnerUtils.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSCTestRunnerUtils.cpp; sourceTree = "<group>"; }; >@@ -4145,7 +4142,6 @@ > A74DEF8E182D991400522C22 /* MapIteratorPrototype.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MapIteratorPrototype.h; sourceTree = "<group>"; }; > A74DEF8F182D991400522C22 /* JSMapIterator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSMapIterator.cpp; sourceTree = "<group>"; }; > A74DEF90182D991400522C22 /* JSMapIterator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSMapIterator.h; sourceTree = "<group>"; }; >- 
A75706DD118A2BCF0057F88F /* JITArithmetic32_64.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITArithmetic32_64.cpp; sourceTree = "<group>"; }; > A75EE9B018AAB7E200AAD043 /* BuiltinNames.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BuiltinNames.h; sourceTree = "<group>"; }; > A767B5B317A0B9650063D940 /* DFGLoopPreHeaderCreationPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGLoopPreHeaderCreationPhase.cpp; path = dfg/DFGLoopPreHeaderCreationPhase.cpp; sourceTree = "<group>"; }; > A767B5B417A0B9650063D940 /* DFGLoopPreHeaderCreationPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGLoopPreHeaderCreationPhase.h; path = dfg/DFGLoopPreHeaderCreationPhase.h; sourceTree = "<group>"; }; >@@ -4204,7 +4200,6 @@ > A7BFF3BF179868940002F462 /* DFGFiltrationResult.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGFiltrationResult.h; path = dfg/DFGFiltrationResult.h; sourceTree = "<group>"; }; > A7C0C4AA167C08CD0017011D /* JSScriptRef.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSScriptRef.cpp; sourceTree = "<group>"; }; > A7C0C4AB167C08CD0017011D /* JSScriptRefPrivate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSScriptRefPrivate.h; sourceTree = "<group>"; }; >- A7C1E8C8112E701C00A37F98 /* JITPropertyAccess32_64.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITPropertyAccess32_64.cpp; sourceTree = "<group>"; }; > A7C1EAEB17987AB600299DB2 /* CLoopStackInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CLoopStackInlines.h; sourceTree = "<group>"; }; > A7C1EAEC17987AB600299DB2 /* StackVisitor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; lineEnding = 0; path = StackVisitor.cpp; sourceTree = "<group>"; }; > A7C1EAED17987AB600299DB2 /* StackVisitor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StackVisitor.h; sourceTree = "<group>"; }; >@@ -4452,7 +4447,6 @@ > BCD203470E17135E002C7E82 /* DatePrototype.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DatePrototype.cpp; sourceTree = "<group>"; }; > BCD203480E17135E002C7E82 /* DatePrototype.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DatePrototype.h; sourceTree = "<group>"; }; > BCD203E70E1718F4002C7E82 /* DatePrototype.lut.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DatePrototype.lut.h; sourceTree = "<group>"; }; >- BCDD51E90FB8DF74004A8BDC /* JITOpcodes.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITOpcodes.cpp; sourceTree = "<group>"; }; > BCDE3AB00E6C82CF001453A7 /* Structure.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Structure.cpp; sourceTree = "<group>"; }; > BCDE3AB10E6C82CF001453A7 /* Structure.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Structure.h; sourceTree = "<group>"; }; > BCF605110E203EF800B9A64D /* ArgList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; 
path = ArgList.cpp; sourceTree = "<group>"; }; >@@ -4912,9 +4906,9 @@ > 14BD59BF0A3E8F9000BAF59C /* testapi */, > 0FEC85AD1BDB5CF10080FF74 /* testb3 */, > FE533CAC1F217DB40016A1FE /* testmasm */, >+ 79281BDC20B62B3E002E2A60 /* testmem */, > 6511230514046A4C002B101D /* testRegExp */, > 932F5BD90822A1C700736975 /* JavaScriptCore.framework */, >- 79281BDC20B62B3E002E2A60 /* testmem */, > ); > name = Products; > sourceTree = "<group>"; >@@ -5540,11 +5534,11 @@ > DE5A09FF1BA3AC3E003D4424 /* IntrinsicEmitter.cpp */, > 1429D92D0ED22D7000B89619 /* JIT.cpp */, > 1429D92E0ED22D7000B89619 /* JIT.h */, >+ 146FE51111A710430087AE66 /* JIT32_64.cpp */, >+ 86CC85A20EE79B7400288682 /* JIT64.cpp */, > FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */, > FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */, > 0F75A0652013E4EF0038E2CF /* JITAllocator.h */, >- 86A90ECF0EE7D51F00AB350D /* JITArithmetic.cpp */, >- A75706DD118A2BCF0057F88F /* JITArithmetic32_64.cpp */, > FE3A06AD1C10CB6F00390FDD /* JITBitAndGenerator.cpp */, > FE3A06AE1C10CB6F00390FDD /* JITBitAndGenerator.h */, > FE3A06A71C10BC7400390FDD /* JITBitBinaryOpGenerator.h */, >@@ -5552,8 +5546,6 @@ > FE3A06A41C10B70800390FDD /* JITBitOrGenerator.h */, > FE3A06AF1C10CB6F00390FDD /* JITBitXorGenerator.cpp */, > FE3A06B01C10CB6F00390FDD /* JITBitXorGenerator.h */, >- 86CC85A20EE79B7400288682 /* JITCall.cpp */, >- 146FE51111A710430087AE66 /* JITCall32_64.cpp */, > 0F8F94431667635200D61971 /* JITCode.cpp */, > 86CCEFDD0F413F8900FD7F9E /* JITCode.h */, > 0FFB80BB20A794700006AAF6 /* JITCodeInlines.h */, >@@ -5577,12 +5569,8 @@ > FE187A001BFBC73C0038BBCA /* JITMulGenerator.h */, > FE99B2471C24B6D300C82159 /* JITNegGenerator.cpp */, > FE99B2481C24B6D300C82159 /* JITNegGenerator.h */, >- BCDD51E90FB8DF74004A8BDC /* JITOpcodes.cpp */, >- A71236E41195F33C00BD2174 /* JITOpcodes32_64.cpp */, > 0F24E54517EE274900ABB217 /* JITOperations.cpp */, > 0F24E54617EE274900ABB217 /* JITOperations.h */, >- 86CC85C30EE7A89400288682 /* JITPropertyAccess.cpp */, >- A7C1E8C8112E701C00A37F98 /* JITPropertyAccess32_64.cpp */, > FE3A06B81C1103D900390FDD /* JITRightShiftGenerator.cpp */, > FE3A06B91C1103D900390FDD /* JITRightShiftGenerator.h */, > 0F766D2615A8CC1B008F363E /* JITStubRoutine.cpp */, >@@ -6286,8 +6274,8 @@ > 79ABCC5520B7812600323D5F /* testmem2 */ = { > isa = PBXGroup; > children = ( >- 79ABCC5620B7812600323D5F /* testmem2.m */, > 79ABCC5820B7812600323D5F /* testmem2.1 */, >+ 79ABCC5620B7812600323D5F /* testmem2.m */, > ); > path = testmem2; > sourceTree = "<group>"; >@@ -8365,7 +8353,6 @@ > 0F4C91661C29F4F2004341A6 /* B3OriginDump.h in Headers */, > 0FEC85261BDACDAC0080FF74 /* B3PatchpointSpecial.h in Headers */, > 0FEC85281BDACDAC0080FF74 /* B3PatchpointValue.h in Headers */, >- 0FD2FD9520B52BE200F09441 /* IsoSubspaceInlines.h in Headers */, > 799EF7C41C56ED96002B0534 /* B3PCToOriginMap.h in Headers */, > 0FEC852A1BDACDAC0080FF74 /* B3PhaseScope.h in Headers */, > 0F37308D1C0BD29100052BFA /* B3PhiChildren.h in Headers */, >@@ -8493,6 +8480,7 @@ > A7E5A3A81797432D00E893C0 /* CompilationResult.h in Headers */, > 0F4F11E8209BCDAB00709654 /* CompilerTimingScope.h in Headers */, > 0FDCE12A1FAFA85F006F3901 /* CompleteSubspace.h in Headers */, >+ 0FD2FD9420B52BDE00F09441 /* CompleteSubspaceInlines.h in Headers */, > BC18C3F40E16F5CD00B34460 /* Completion.h in Headers */, > 0F6FC751196110A800E1D02D /* ComplexGetStatus.h in Headers */, > 0FDB2CEA174896C7007B3C1B /* ConcurrentJSLock.h in Headers */, >@@ -8990,6 +8978,7 @@ > 0FB467801FDDA6F1003FCB09 /* IsoCellSet.h in Headers */, > 
0FB467811FDDA6F7003FCB09 /* IsoCellSetInlines.h in Headers */, > 0FDCE12D1FAFB4E5006F3901 /* IsoSubspace.h in Headers */, >+ 0FD2FD9520B52BE200F09441 /* IsoSubspaceInlines.h in Headers */, > 0F5E0FE72086AD480097F0DE /* IsoSubspacePerVM.h in Headers */, > 8B9F6D561D5912FA001C739F /* IterationKind.h in Headers */, > FE4D55B81AE716CA0052E459 /* IterationStatus.h in Headers */, >@@ -9000,7 +8989,6 @@ > BC18C4140E16F5CD00B34460 /* JavaScriptCore.h in Headers */, > BC18C4150E16F5CD00B34460 /* JavaScriptCorePrefix.h in Headers */, > 1429D9300ED22D7000B89619 /* JIT.h in Headers */, >- 0FD2FD9420B52BDE00F09441 /* CompleteSubspaceInlines.h in Headers */, > FE1220271BE7F58C0039E6F2 /* JITAddGenerator.h in Headers */, > 0F75A0662013E4F10038E2CF /* JITAllocator.h in Headers */, > FE3A06B21C10CB8900390FDD /* JITBitAndGenerator.h in Headers */, >diff --git a/Source/JavaScriptCore/Sources.txt b/Source/JavaScriptCore/Sources.txt >index c011ee6c5232cbe46bcc061411bb7b6d4400d2a6..4d180c818f367f82da10e9e9ce5efbb9517c28fb 100644 >--- a/Source/JavaScriptCore/Sources.txt >+++ b/Source/JavaScriptCore/Sources.txt >@@ -597,14 +597,12 @@ jit/HostCallReturnValue.cpp > jit/ICStats.cpp > jit/IntrinsicEmitter.cpp > jit/JIT.cpp >+jit/JIT32_64.cpp >+jit/JIT64.cpp > jit/JITAddGenerator.cpp >-jit/JITArithmetic.cpp >-jit/JITArithmetic32_64.cpp > jit/JITBitAndGenerator.cpp > jit/JITBitOrGenerator.cpp > jit/JITBitXorGenerator.cpp >-jit/JITCall.cpp >-jit/JITCall32_64.cpp > jit/JITCode.cpp > jit/JITDisassembler.cpp > jit/JITDivGenerator.cpp >@@ -613,11 +611,7 @@ jit/JITInlineCacheGenerator.cpp > jit/JITLeftShiftGenerator.cpp > jit/JITMulGenerator.cpp > jit/JITNegGenerator.cpp >-jit/JITOpcodes.cpp >-jit/JITOpcodes32_64.cpp > jit/JITOperations.cpp >-jit/JITPropertyAccess.cpp >-jit/JITPropertyAccess32_64.cpp > jit/JITRightShiftGenerator.cpp > jit/JITStubRoutine.cpp > jit/JITSubGenerator.cpp >diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp >index d9db0629a438c129d57bfa3831669eb8af156998..4a8365b7a9fc00a68a9099db1de90f8043057396 100644 >--- a/Source/JavaScriptCore/jit/JIT.cpp >+++ b/Source/JavaScriptCore/jit/JIT.cpp >@@ -29,14 +29,22 @@ > > #include "JIT.h" > >+#include "ArithProfile.h" > #include "BytecodeGraph.h" > #include "BytecodeLivenessAnalysis.h" > #include "CodeBlock.h" > #include "CodeBlockWithJITType.h" > #include "DFGCapabilities.h" > #include "InterpreterInlines.h" >+#include "JITBitAndGenerator.h" >+#include "JITBitOrGenerator.h" >+#include "JITBitXorGenerator.h" >+#include "JITDivGenerator.h" > #include "JITInlines.h" >+#include "JITLeftShiftGenerator.h" >+#include "JITMathIC.h" > #include "JITOperations.h" >+#include "JITRightShiftGenerator.h" > #include "JSArray.h" > #include "JSCInlines.h" > #include "JSFunction.h" >@@ -47,8 +55,11 @@ > #include "ProfilerDatabase.h" > #include "ProgramCodeBlock.h" > #include "ResultType.h" >+#include "ScopedArguments.h" >+#include "ScopedArgumentsTable.h" > #include "SlowPathCall.h" > #include "StackAlignment.h" >+#include "SuperSampler.h" > #include "ThunkGenerators.h" > #include "TypeProfilerLog.h" > #include <wtf/CryptographicallyRandomNumber.h> >@@ -1020,6 +1031,1485 @@ Seconds JIT::totalCompileTime() > return totalBaselineCompileTime + totalDFGCompileTime + totalFTLCompileTime; > } > >+void JIT::emit_op_loop_hint(Instruction*) >+{ >+ // Emit the JIT optimization check: >+ if (canBeOptimized()) { >+ addSlowCase(branchAdd32(PositiveOrZero, TrustedImm32(Options::executionCounterIncrementForLoop()), >+ 
AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter()))); >+ } >+} >+ >+void JIT::emitSlow_op_loop_hint(Instruction*, Vector<SlowCaseEntry>::iterator& iter) >+{ >+#if ENABLE(DFG_JIT) >+ // Emit the slow path for the JIT optimization check: >+ if (canBeOptimized()) { >+ linkAllSlowCases(iter); >+ >+ copyCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >+ >+ callOperation(operationOptimize, m_bytecodeOffset); >+ Jump noOptimizedEntry = branchTestPtr(Zero, returnValueGPR); >+ if (!ASSERT_DISABLED) { >+ Jump ok = branchPtr(MacroAssembler::Above, returnValueGPR, TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000)))); >+ abortWithReason(JITUnreasonableLoopHintJumpTarget); >+ ok.link(this); >+ } >+ jump(returnValueGPR, GPRInfo::callFrameRegister); >+ noOptimizedEntry.link(this); >+ >+ emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_loop_hint)); >+ } >+#else >+ UNUSED_PARAM(iter); >+#endif >+} >+ >+void JIT::emit_op_check_traps(Instruction*) >+{ >+ addSlowCase(branchTest8(NonZero, AbsoluteAddress(m_vm->needTrapHandlingAddress()))); >+} >+ >+void JIT::emit_op_nop(Instruction*) >+{ >+} >+ >+void JIT::emit_op_super_sampler_begin(Instruction*) >+{ >+ add32(TrustedImm32(1), AbsoluteAddress(bitwise_cast<void*>(&g_superSamplerCount))); >+} >+ >+void JIT::emit_op_super_sampler_end(Instruction*) >+{ >+ sub32(TrustedImm32(1), AbsoluteAddress(bitwise_cast<void*>(&g_superSamplerCount))); >+} >+ >+void JIT::emitSlow_op_check_traps(Instruction*, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ callOperation(operationHandleTraps); >+} >+ >+void JIT::emit_op_new_regexp(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ callOperation(operationNewRegexp, m_codeBlock->regexp(currentInstruction[2].u.operand)); >+ emitStoreCell(dst, returnValueGPR); >+} >+ >+void JIT::emitNewFuncCommon(Instruction* currentInstruction) >+{ >+ Jump lazyJump; >+ int dst = currentInstruction[1].u.operand; >+ >+#if USE(JSVALUE64) >+ emitGetVirtualRegister(currentInstruction[2].u.operand, regT0); >+#else >+ emitLoadPayload(currentInstruction[2].u.operand, regT0); >+#endif >+ FunctionExecutable* funcExec = m_codeBlock->functionDecl(currentInstruction[3].u.operand); >+ >+ OpcodeID opcodeID = Interpreter::getOpcodeID(currentInstruction->u.opcode); >+ if (opcodeID == op_new_func) >+ callOperation(operationNewFunction, dst, regT0, funcExec); >+ else if (opcodeID == op_new_generator_func) >+ callOperation(operationNewGeneratorFunction, dst, regT0, funcExec); >+ else if (opcodeID == op_new_async_func) >+ callOperation(operationNewAsyncFunction, dst, regT0, funcExec); >+ else { >+ ASSERT(opcodeID == op_new_async_generator_func); >+ callOperation(operationNewAsyncGeneratorFunction, dst, regT0, funcExec); >+ } >+} >+ >+void JIT::emit_op_new_func(Instruction* currentInstruction) >+{ >+ emitNewFuncCommon(currentInstruction); >+} >+ >+void JIT::emit_op_new_generator_func(Instruction* currentInstruction) >+{ >+ emitNewFuncCommon(currentInstruction); >+} >+ >+void JIT::emit_op_new_async_generator_func(Instruction* currentInstruction) >+{ >+ emitNewFuncCommon(currentInstruction); >+} >+ >+void JIT::emit_op_new_async_func(Instruction* currentInstruction) >+{ >+ emitNewFuncCommon(currentInstruction); >+} >+ >+void JIT::emitNewFuncExprCommon(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+#if USE(JSVALUE64) >+ emitGetVirtualRegister(currentInstruction[2].u.operand, regT0); >+#else >+ 
emitLoadPayload(currentInstruction[2].u.operand, regT0); >+#endif >+ >+ FunctionExecutable* function = m_codeBlock->functionExpr(currentInstruction[3].u.operand); >+ OpcodeID opcodeID = Interpreter::getOpcodeID(currentInstruction->u.opcode); >+ >+ if (opcodeID == op_new_func_exp) >+ callOperation(operationNewFunction, dst, regT0, function); >+ else if (opcodeID == op_new_generator_func_exp) >+ callOperation(operationNewGeneratorFunction, dst, regT0, function); >+ else if (opcodeID == op_new_async_func_exp) >+ callOperation(operationNewAsyncFunction, dst, regT0, function); >+ else { >+ ASSERT(opcodeID == op_new_async_generator_func_exp); >+ callOperation(operationNewAsyncGeneratorFunction, dst, regT0, function); >+ } >+} >+ >+void JIT::emit_op_new_func_exp(Instruction* currentInstruction) >+{ >+ emitNewFuncExprCommon(currentInstruction); >+} >+ >+void JIT::emit_op_new_generator_func_exp(Instruction* currentInstruction) >+{ >+ emitNewFuncExprCommon(currentInstruction); >+} >+ >+void JIT::emit_op_new_async_func_exp(Instruction* currentInstruction) >+{ >+ emitNewFuncExprCommon(currentInstruction); >+} >+ >+void JIT::emit_op_new_async_generator_func_exp(Instruction* currentInstruction) >+{ >+ emitNewFuncExprCommon(currentInstruction); >+} >+ >+void JIT::emit_op_new_array(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int valuesIndex = currentInstruction[2].u.operand; >+ int size = currentInstruction[3].u.operand; >+ addPtr(TrustedImm32(valuesIndex * sizeof(Register)), callFrameRegister, regT0); >+ callOperation(operationNewArrayWithProfile, dst, >+ currentInstruction[4].u.arrayAllocationProfile, regT0, size); >+} >+ >+void JIT::emit_op_new_array_with_size(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int sizeIndex = currentInstruction[2].u.operand; >+#if USE(JSVALUE64) >+ emitGetVirtualRegister(sizeIndex, regT0); >+ callOperation(operationNewArrayWithSizeAndProfile, dst, >+ currentInstruction[3].u.arrayAllocationProfile, regT0); >+#else >+ emitLoad(sizeIndex, regT1, regT0); >+ callOperation(operationNewArrayWithSizeAndProfile, dst, >+ currentInstruction[3].u.arrayAllocationProfile, JSValueRegs(regT1, regT0)); >+#endif >+} >+ >+void JIT::emit_op_profile_control_flow(Instruction* currentInstruction) >+{ >+ BasicBlockLocation* basicBlockLocation = currentInstruction[1].u.basicBlockLocation; >+#if USE(JSVALUE64) >+ basicBlockLocation->emitExecuteCode(*this); >+#else >+ basicBlockLocation->emitExecuteCode(*this, regT0); >+#endif >+} >+ >+void JIT::emit_op_argument_count(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ load32(payloadFor(CallFrameSlot::argumentCount), regT0); >+ sub32(TrustedImm32(1), regT0); >+ JSValueRegs result = JSValueRegs::withTwoAvailableRegs(regT0, regT1); >+ boxInt32(regT0, result); >+ emitPutVirtualRegister(dst, result); >+} >+ >+void JIT::emit_op_get_rest_length(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ unsigned numParamsToSkip = currentInstruction[2].u.unsignedValue; >+ load32(payloadFor(CallFrameSlot::argumentCount), regT0); >+ sub32(TrustedImm32(1), regT0); >+ Jump zeroLength = branch32(LessThanOrEqual, regT0, Imm32(numParamsToSkip)); >+ sub32(Imm32(numParamsToSkip), regT0); >+#if USE(JSVALUE64) >+ boxInt32(regT0, JSValueRegs(regT0)); >+#endif >+ Jump done = jump(); >+ >+ zeroLength.link(this); >+#if USE(JSVALUE64) >+ move(TrustedImm64(JSValue::encode(jsNumber(0))), regT0); >+#else >+ move(TrustedImm32(0), regT0); 
>+#endif >+ >+ done.link(this); >+#if USE(JSVALUE64) >+ emitPutVirtualRegister(dst, regT0); >+#else >+ move(TrustedImm32(JSValue::Int32Tag), regT1); >+ emitPutVirtualRegister(dst, JSValueRegs(regT1, regT0)); >+#endif >+} >+ >+void JIT::emit_op_get_argument(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int index = currentInstruction[2].u.operand; >+#if USE(JSVALUE64) >+ JSValueRegs resultRegs(regT0); >+#else >+ JSValueRegs resultRegs(regT1, regT0); >+#endif >+ >+ load32(payloadFor(CallFrameSlot::argumentCount), regT2); >+ Jump argumentOutOfBounds = branch32(LessThanOrEqual, regT2, TrustedImm32(index)); >+ loadValue(addressFor(CallFrameSlot::thisArgument + index), resultRegs); >+ Jump done = jump(); >+ >+ argumentOutOfBounds.link(this); >+ moveValue(jsUndefined(), resultRegs); >+ >+ done.link(this); >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(dst, resultRegs); >+} >+ >+void JIT::emit_op_jless(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJump(op_jless, op1, op2, target, LessThan); >+} >+ >+void JIT::emit_op_jlesseq(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJump(op_jlesseq, op1, op2, target, LessThanOrEqual); >+} >+ >+void JIT::emit_op_jgreater(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJump(op_jgreater, op1, op2, target, GreaterThan); >+} >+ >+void JIT::emit_op_jgreatereq(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJump(op_jgreatereq, op1, op2, target, GreaterThanOrEqual); >+} >+ >+void JIT::emit_op_jnless(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJump(op_jnless, op1, op2, target, GreaterThanOrEqual); >+} >+ >+void JIT::emit_op_jnlesseq(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJump(op_jnlesseq, op1, op2, target, GreaterThan); >+} >+ >+void JIT::emit_op_jngreater(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJump(op_jngreater, op1, op2, target, LessThanOrEqual); >+} >+ >+void JIT::emit_op_jngreatereq(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJump(op_jngreatereq, op1, op2, target, LessThan); >+} >+ >+void JIT::emitSlow_op_jless(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJumpSlow(op1, 
op2, target, DoubleLessThan, operationCompareLess, false, iter); >+} >+ >+void JIT::emitSlow_op_jlesseq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJumpSlow(op1, op2, target, DoubleLessThanOrEqual, operationCompareLessEq, false, iter); >+} >+ >+void JIT::emitSlow_op_jgreater(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJumpSlow(op1, op2, target, DoubleGreaterThan, operationCompareGreater, false, iter); >+} >+ >+void JIT::emitSlow_op_jgreatereq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJumpSlow(op1, op2, target, DoubleGreaterThanOrEqual, operationCompareGreaterEq, false, iter); >+} >+ >+void JIT::emitSlow_op_jnless(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJumpSlow(op1, op2, target, DoubleGreaterThanOrEqualOrUnordered, operationCompareLess, true, iter); >+} >+ >+void JIT::emitSlow_op_jnlesseq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJumpSlow(op1, op2, target, DoubleGreaterThanOrUnordered, operationCompareLessEq, true, iter); >+} >+ >+void JIT::emitSlow_op_jngreater(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJumpSlow(op1, op2, target, DoubleLessThanOrEqualOrUnordered, operationCompareGreater, true, iter); >+} >+ >+void JIT::emitSlow_op_jngreatereq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareAndJumpSlow(op1, op2, target, DoubleLessThanOrUnordered, operationCompareGreaterEq, true, iter); >+} >+ >+void JIT::emit_op_below(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ emit_compareUnsigned(dst, op1, op2, Below); >+} >+ >+void JIT::emit_op_beloweq(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ emit_compareUnsigned(dst, op1, op2, BelowOrEqual); >+} >+ >+void JIT::emit_op_jbelow(Instruction* currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareUnsignedAndJump(op1, op2, target, Below); >+} >+ >+void JIT::emit_op_jbeloweq(Instruction* 
currentInstruction) >+{ >+ int op1 = currentInstruction[1].u.operand; >+ int op2 = currentInstruction[2].u.operand; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emit_compareUnsignedAndJump(op1, op2, target, BelowOrEqual); >+} >+ >+void JIT::emit_op_negate(Instruction* currentInstruction) >+{ >+ ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >+ JITNegIC* negateIC = m_codeBlock->addJITNegIC(arithProfile, currentInstruction); >+ m_instructionToMathIC.add(currentInstruction, negateIC); >+ emitMathICFast(negateIC, currentInstruction, operationArithNegateProfiled, operationArithNegate); >+} >+ >+void JIT::emitSlow_op_negate(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ JITNegIC* negIC = bitwise_cast<JITNegIC*>(m_instructionToMathIC.get(currentInstruction)); >+ emitMathICSlow(negIC, currentInstruction, operationArithNegateProfiledOptimize, operationArithNegateProfiled, operationArithNegateOptimize); >+} >+ >+template<typename SnippetGenerator> >+void JIT::emitBitBinaryOpFastPath(Instruction* currentInstruction) >+{ >+ int result = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ >+#if USE(JSVALUE64) >+ JSValueRegs leftRegs = JSValueRegs(regT0); >+ JSValueRegs rightRegs = JSValueRegs(regT1); >+ JSValueRegs resultRegs = leftRegs; >+ GPRReg scratchGPR = regT2; >+#else >+ JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >+ JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >+ JSValueRegs resultRegs = leftRegs; >+ GPRReg scratchGPR = regT4; >+#endif >+ >+ SnippetOperand leftOperand; >+ SnippetOperand rightOperand; >+ >+ if (isOperandConstantInt(op1)) >+ leftOperand.setConstInt32(getOperandConstantInt(op1)); >+ else if (isOperandConstantInt(op2)) >+ rightOperand.setConstInt32(getOperandConstantInt(op2)); >+ >+ RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst()); >+ >+ if (!leftOperand.isConst()) >+ emitGetVirtualRegister(op1, leftRegs); >+ if (!rightOperand.isConst()) >+ emitGetVirtualRegister(op2, rightRegs); >+ >+ SnippetGenerator gen(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, scratchGPR); >+ >+ gen.generateFastPath(*this); >+ >+ ASSERT(gen.didEmitFastPath()); >+ gen.endJumpList().link(this); >+ emitPutVirtualRegister(result, resultRegs); >+ >+ addSlowCase(gen.slowPathJumpList()); >+} >+ >+void JIT::emit_op_bitand(Instruction* currentInstruction) >+{ >+ emitBitBinaryOpFastPath<JITBitAndGenerator>(currentInstruction); >+} >+ >+void JIT::emit_op_bitor(Instruction* currentInstruction) >+{ >+ emitBitBinaryOpFastPath<JITBitOrGenerator>(currentInstruction); >+} >+ >+void JIT::emit_op_bitxor(Instruction* currentInstruction) >+{ >+ emitBitBinaryOpFastPath<JITBitXorGenerator>(currentInstruction); >+} >+ >+void JIT::emit_op_lshift(Instruction* currentInstruction) >+{ >+ emitBitBinaryOpFastPath<JITLeftShiftGenerator>(currentInstruction); >+} >+ >+void JIT::emitRightShiftFastPath(Instruction* currentInstruction, OpcodeID opcodeID) >+{ >+ ASSERT(opcodeID == op_rshift || opcodeID == op_urshift); >+ >+ JITRightShiftGenerator::ShiftType snippetShiftType = opcodeID == op_rshift ? 
>+ JITRightShiftGenerator::SignedShift : JITRightShiftGenerator::UnsignedShift; >+ >+ int result = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ >+#if USE(JSVALUE64) >+ JSValueRegs leftRegs = JSValueRegs(regT0); >+ JSValueRegs rightRegs = JSValueRegs(regT1); >+ JSValueRegs resultRegs = leftRegs; >+ GPRReg scratchGPR = regT2; >+ FPRReg scratchFPR = InvalidFPRReg; >+#else >+ JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >+ JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >+ JSValueRegs resultRegs = leftRegs; >+ GPRReg scratchGPR = regT4; >+ FPRReg scratchFPR = fpRegT2; >+#endif >+ >+ SnippetOperand leftOperand; >+ SnippetOperand rightOperand; >+ >+ if (isOperandConstantInt(op1)) >+ leftOperand.setConstInt32(getOperandConstantInt(op1)); >+ else if (isOperandConstantInt(op2)) >+ rightOperand.setConstInt32(getOperandConstantInt(op2)); >+ >+ RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst()); >+ >+ if (!leftOperand.isConst()) >+ emitGetVirtualRegister(op1, leftRegs); >+ if (!rightOperand.isConst()) >+ emitGetVirtualRegister(op2, rightRegs); >+ >+ JITRightShiftGenerator gen(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, >+ fpRegT0, scratchGPR, scratchFPR, snippetShiftType); >+ >+ gen.generateFastPath(*this); >+ >+ ASSERT(gen.didEmitFastPath()); >+ gen.endJumpList().link(this); >+ emitPutVirtualRegister(result, resultRegs); >+ >+ addSlowCase(gen.slowPathJumpList()); >+} >+ >+void JIT::emit_op_rshift(Instruction* currentInstruction) >+{ >+ emitRightShiftFastPath(currentInstruction, op_rshift); >+} >+ >+void JIT::emit_op_urshift(Instruction* currentInstruction) >+{ >+ emitRightShiftFastPath(currentInstruction, op_urshift); >+} >+ >+ALWAYS_INLINE static OperandTypes getOperandTypes(Instruction* instruction) >+{ >+ return OperandTypes(ArithProfile::fromInt(instruction[4].u.operand).lhsResultType(), ArithProfile::fromInt(instruction[4].u.operand).rhsResultType()); >+} >+ >+void JIT::emit_op_add(Instruction* currentInstruction) >+{ >+ ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >+ JITAddIC* addIC = m_codeBlock->addJITAddIC(arithProfile, currentInstruction); >+ m_instructionToMathIC.add(currentInstruction, addIC); >+ emitMathICFast(addIC, currentInstruction, operationValueAddProfiled, operationValueAdd); >+} >+ >+void JIT::emitSlow_op_add(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ JITAddIC* addIC = bitwise_cast<JITAddIC*>(m_instructionToMathIC.get(currentInstruction)); >+ emitMathICSlow(addIC, currentInstruction, operationValueAddProfiledOptimize, operationValueAddProfiled, operationValueAddOptimize); >+} >+ >+template <typename Generator, typename ProfiledFunction, typename NonProfiledFunction> >+void JIT::emitMathICFast(JITUnaryMathIC<Generator>* mathIC, Instruction* currentInstruction, ProfiledFunction profiledFunction, NonProfiledFunction nonProfiledFunction) >+{ >+ int result = currentInstruction[1].u.operand; >+ int operand = currentInstruction[2].u.operand; >+ >+#if USE(JSVALUE64) >+ // ArithNegate benefits from using the same register as src and dst. >+ // Since regT1==argumentGPR1, using regT1 avoid shuffling register to call the slow path. 
>+ JSValueRegs srcRegs = JSValueRegs(regT1); >+ JSValueRegs resultRegs = JSValueRegs(regT1); >+ GPRReg scratchGPR = regT2; >+#else >+ JSValueRegs srcRegs = JSValueRegs(regT1, regT0); >+ JSValueRegs resultRegs = JSValueRegs(regT3, regT2); >+ GPRReg scratchGPR = regT4; >+#endif >+ >+#if ENABLE(MATH_IC_STATS) >+ auto inlineStart = label(); >+#endif >+ >+ mathIC->m_generator = Generator(resultRegs, srcRegs, scratchGPR); >+ >+ emitGetVirtualRegister(operand, srcRegs); >+ >+ MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.add(currentInstruction, MathICGenerationState()).iterator->value; >+ >+ bool generatedInlineCode = mathIC->generateInline(*this, mathICGenerationState); >+ if (!generatedInlineCode) { >+ ArithProfile* arithProfile = mathIC->arithProfile(); >+ if (arithProfile && shouldEmitProfiling()) >+ callOperationWithResult(profiledFunction, resultRegs, srcRegs, arithProfile); >+ else >+ callOperationWithResult(nonProfiledFunction, resultRegs, srcRegs); >+ } else >+ addSlowCase(mathICGenerationState.slowPathJumps); >+ >+#if ENABLE(MATH_IC_STATS) >+ auto inlineEnd = label(); >+ addLinkTask([=] (LinkBuffer& linkBuffer) { >+ size_t size = linkBuffer.locationOf(inlineEnd).executableAddress<char*>() - linkBuffer.locationOf(inlineStart).executableAddress<char*>(); >+ mathIC->m_generatedCodeSize += size; >+ }); >+#endif >+ >+ emitPutVirtualRegister(result, resultRegs); >+} >+ >+template <typename Generator, typename ProfiledFunction, typename NonProfiledFunction> >+void JIT::emitMathICFast(JITBinaryMathIC<Generator>* mathIC, Instruction* currentInstruction, ProfiledFunction profiledFunction, NonProfiledFunction nonProfiledFunction) >+{ >+ int result = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ >+#if USE(JSVALUE64) >+ OperandTypes types = getOperandTypes(copiedInstruction(currentInstruction)); >+ JSValueRegs leftRegs = JSValueRegs(regT1); >+ JSValueRegs rightRegs = JSValueRegs(regT2); >+ JSValueRegs resultRegs = JSValueRegs(regT0); >+ GPRReg scratchGPR = regT3; >+ FPRReg scratchFPR = fpRegT2; >+#else >+ OperandTypes types = getOperandTypes(currentInstruction); >+ JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >+ JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >+ JSValueRegs resultRegs = leftRegs; >+ GPRReg scratchGPR = regT4; >+ FPRReg scratchFPR = fpRegT2; >+#endif >+ >+ SnippetOperand leftOperand(types.first()); >+ SnippetOperand rightOperand(types.second()); >+ >+ if (isOperandConstantInt(op1)) >+ leftOperand.setConstInt32(getOperandConstantInt(op1)); >+ else if (isOperandConstantInt(op2)) >+ rightOperand.setConstInt32(getOperandConstantInt(op2)); >+ >+ RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst()); >+ >+ mathIC->m_generator = Generator(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, fpRegT0, fpRegT1, scratchGPR, scratchFPR); >+ >+ ASSERT(!(Generator::isLeftOperandValidConstant(leftOperand) && Generator::isRightOperandValidConstant(rightOperand))); >+ >+ if (!Generator::isLeftOperandValidConstant(leftOperand)) >+ emitGetVirtualRegister(op1, leftRegs); >+ if (!Generator::isRightOperandValidConstant(rightOperand)) >+ emitGetVirtualRegister(op2, rightRegs); >+ >+#if ENABLE(MATH_IC_STATS) >+ auto inlineStart = label(); >+#endif >+ >+ MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.add(currentInstruction, MathICGenerationState()).iterator->value; >+ >+ bool generatedInlineCode = mathIC->generateInline(*this, 
mathICGenerationState); >+ if (!generatedInlineCode) { >+ if (leftOperand.isConst()) >+ emitGetVirtualRegister(op1, leftRegs); >+ else if (rightOperand.isConst()) >+ emitGetVirtualRegister(op2, rightRegs); >+ ArithProfile* arithProfile = mathIC->arithProfile(); >+ if (arithProfile && shouldEmitProfiling()) >+ callOperationWithResult(profiledFunction, resultRegs, leftRegs, rightRegs, arithProfile); >+ else >+ callOperationWithResult(nonProfiledFunction, resultRegs, leftRegs, rightRegs); >+ } else >+ addSlowCase(mathICGenerationState.slowPathJumps); >+ >+#if ENABLE(MATH_IC_STATS) >+ auto inlineEnd = label(); >+ addLinkTask([=] (LinkBuffer& linkBuffer) { >+ size_t size = linkBuffer.locationOf(inlineEnd).executableAddress<char*>() - linkBuffer.locationOf(inlineStart).executableAddress<char*>(); >+ mathIC->m_generatedCodeSize += size; >+ }); >+#endif >+ >+ emitPutVirtualRegister(result, resultRegs); >+} >+ >+template <typename Generator, typename ProfiledRepatchFunction, typename ProfiledFunction, typename RepatchFunction> >+void JIT::emitMathICSlow(JITUnaryMathIC<Generator>* mathIC, Instruction* currentInstruction, ProfiledRepatchFunction profiledRepatchFunction, ProfiledFunction profiledFunction, RepatchFunction repatchFunction) >+{ >+ MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.find(currentInstruction)->value; >+ mathICGenerationState.slowPathStart = label(); >+ >+ int result = currentInstruction[1].u.operand; >+ >+#if USE(JSVALUE64) >+ JSValueRegs srcRegs = JSValueRegs(regT1); >+ JSValueRegs resultRegs = JSValueRegs(regT0); >+#else >+ JSValueRegs srcRegs = JSValueRegs(regT1, regT0); >+ JSValueRegs resultRegs = JSValueRegs(regT3, regT2); >+#endif >+ >+#if ENABLE(MATH_IC_STATS) >+ auto slowPathStart = label(); >+#endif >+ >+ ArithProfile* arithProfile = mathIC->arithProfile(); >+ if (arithProfile && shouldEmitProfiling()) { >+ if (mathICGenerationState.shouldSlowPathRepatch) >+ mathICGenerationState.slowPathCall = callOperationWithResult(reinterpret_cast<J_JITOperation_EJMic>(profiledRepatchFunction), resultRegs, srcRegs, TrustedImmPtr(mathIC)); >+ else >+ mathICGenerationState.slowPathCall = callOperationWithResult(profiledFunction, resultRegs, srcRegs, arithProfile); >+ } else >+ mathICGenerationState.slowPathCall = callOperationWithResult(reinterpret_cast<J_JITOperation_EJMic>(repatchFunction), resultRegs, srcRegs, TrustedImmPtr(mathIC)); >+ >+#if ENABLE(MATH_IC_STATS) >+ auto slowPathEnd = label(); >+ addLinkTask([=] (LinkBuffer& linkBuffer) { >+ size_t size = linkBuffer.locationOf(slowPathEnd).executableAddress<char*>() - linkBuffer.locationOf(slowPathStart).executableAddress<char*>(); >+ mathIC->m_generatedCodeSize += size; >+ }); >+#endif >+ >+ emitPutVirtualRegister(result, resultRegs); >+ >+ addLinkTask([=] (LinkBuffer& linkBuffer) { >+ MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.find(currentInstruction)->value; >+ mathIC->finalizeInlineCode(mathICGenerationState, linkBuffer); >+ }); >+} >+ >+template <typename Generator, typename ProfiledRepatchFunction, typename ProfiledFunction, typename RepatchFunction> >+void JIT::emitMathICSlow(JITBinaryMathIC<Generator>* mathIC, Instruction* currentInstruction, ProfiledRepatchFunction profiledRepatchFunction, ProfiledFunction profiledFunction, RepatchFunction repatchFunction) >+{ >+ MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.find(currentInstruction)->value; >+ mathICGenerationState.slowPathStart = label(); >+ >+ int result = 
currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ >+#if USE(JSVALUE64) >+ OperandTypes types = getOperandTypes(copiedInstruction(currentInstruction)); >+ JSValueRegs leftRegs = JSValueRegs(regT1); >+ JSValueRegs rightRegs = JSValueRegs(regT2); >+ JSValueRegs resultRegs = JSValueRegs(regT0); >+#else >+ OperandTypes types = getOperandTypes(currentInstruction); >+ JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >+ JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >+ JSValueRegs resultRegs = leftRegs; >+#endif >+ >+ SnippetOperand leftOperand(types.first()); >+ SnippetOperand rightOperand(types.second()); >+ >+ if (isOperandConstantInt(op1)) >+ leftOperand.setConstInt32(getOperandConstantInt(op1)); >+ else if (isOperandConstantInt(op2)) >+ rightOperand.setConstInt32(getOperandConstantInt(op2)); >+ >+ ASSERT(!(Generator::isLeftOperandValidConstant(leftOperand) && Generator::isRightOperandValidConstant(rightOperand))); >+ >+ if (Generator::isLeftOperandValidConstant(leftOperand)) >+ emitGetVirtualRegister(op1, leftRegs); >+ else if (Generator::isRightOperandValidConstant(rightOperand)) >+ emitGetVirtualRegister(op2, rightRegs); >+ >+#if ENABLE(MATH_IC_STATS) >+ auto slowPathStart = label(); >+#endif >+ >+ ArithProfile* arithProfile = mathIC->arithProfile(); >+ if (arithProfile && shouldEmitProfiling()) { >+ if (mathICGenerationState.shouldSlowPathRepatch) >+ mathICGenerationState.slowPathCall = callOperationWithResult(bitwise_cast<J_JITOperation_EJJMic>(profiledRepatchFunction), resultRegs, leftRegs, rightRegs, TrustedImmPtr(mathIC)); >+ else >+ mathICGenerationState.slowPathCall = callOperationWithResult(profiledFunction, resultRegs, leftRegs, rightRegs, arithProfile); >+ } else >+ mathICGenerationState.slowPathCall = callOperationWithResult(bitwise_cast<J_JITOperation_EJJMic>(repatchFunction), resultRegs, leftRegs, rightRegs, TrustedImmPtr(mathIC)); >+ >+#if ENABLE(MATH_IC_STATS) >+ auto slowPathEnd = label(); >+ addLinkTask([=] (LinkBuffer& linkBuffer) { >+ size_t size = linkBuffer.locationOf(slowPathEnd).executableAddress<char*>() - linkBuffer.locationOf(slowPathStart).executableAddress<char*>(); >+ mathIC->m_generatedCodeSize += size; >+ }); >+#endif >+ >+ emitPutVirtualRegister(result, resultRegs); >+ >+ addLinkTask([=] (LinkBuffer& linkBuffer) { >+ MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.find(currentInstruction)->value; >+ mathIC->finalizeInlineCode(mathICGenerationState, linkBuffer); >+ }); >+} >+ >+void JIT::emit_op_div(Instruction* currentInstruction) >+{ >+ int result = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ >+#if USE(JSVALUE64) >+ OperandTypes types = getOperandTypes(copiedInstruction(currentInstruction)); >+ JSValueRegs leftRegs = JSValueRegs(regT0); >+ JSValueRegs rightRegs = JSValueRegs(regT1); >+ JSValueRegs resultRegs = leftRegs; >+ GPRReg scratchGPR = regT2; >+#else >+ OperandTypes types = getOperandTypes(currentInstruction); >+ JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >+ JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >+ JSValueRegs resultRegs = leftRegs; >+ GPRReg scratchGPR = regT4; >+#endif >+ FPRReg scratchFPR = fpRegT2; >+ >+ ArithProfile* arithProfile = nullptr; >+ if (shouldEmitProfiling()) >+ arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >+ >+ SnippetOperand leftOperand(types.first()); >+ SnippetOperand 
rightOperand(types.second()); >+ >+ if (isOperandConstantInt(op1)) >+ leftOperand.setConstInt32(getOperandConstantInt(op1)); >+#if USE(JSVALUE64) >+ else if (isOperandConstantDouble(op1)) >+ leftOperand.setConstDouble(getOperandConstantDouble(op1)); >+#endif >+ else if (isOperandConstantInt(op2)) >+ rightOperand.setConstInt32(getOperandConstantInt(op2)); >+#if USE(JSVALUE64) >+ else if (isOperandConstantDouble(op2)) >+ rightOperand.setConstDouble(getOperandConstantDouble(op2)); >+#endif >+ >+ RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst()); >+ >+ if (!leftOperand.isConst()) >+ emitGetVirtualRegister(op1, leftRegs); >+ if (!rightOperand.isConst()) >+ emitGetVirtualRegister(op2, rightRegs); >+ >+ JITDivGenerator gen(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, >+ fpRegT0, fpRegT1, scratchGPR, scratchFPR, arithProfile); >+ >+ gen.generateFastPath(*this); >+ >+ if (gen.didEmitFastPath()) { >+ gen.endJumpList().link(this); >+ emitPutVirtualRegister(result, resultRegs); >+ >+ addSlowCase(gen.slowPathJumpList()); >+ } else { >+ ASSERT(gen.endJumpList().empty()); >+ ASSERT(gen.slowPathJumpList().empty()); >+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_div); >+ slowPathCall.call(); >+ } >+} >+ >+void JIT::emit_op_mul(Instruction* currentInstruction) >+{ >+ ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >+ JITMulIC* mulIC = m_codeBlock->addJITMulIC(arithProfile, currentInstruction); >+ m_instructionToMathIC.add(currentInstruction, mulIC); >+ emitMathICFast(mulIC, currentInstruction, operationValueMulProfiled, operationValueMul); >+} >+ >+void JIT::emitSlow_op_mul(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ JITMulIC* mulIC = bitwise_cast<JITMulIC*>(m_instructionToMathIC.get(currentInstruction)); >+ emitMathICSlow(mulIC, currentInstruction, operationValueMulProfiledOptimize, operationValueMulProfiled, operationValueMulOptimize); >+} >+ >+void JIT::emit_op_sub(Instruction* currentInstruction) >+{ >+ ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >+ JITSubIC* subIC = m_codeBlock->addJITSubIC(arithProfile, currentInstruction); >+ m_instructionToMathIC.add(currentInstruction, subIC); >+ emitMathICFast(subIC, currentInstruction, operationValueSubProfiled, operationValueSub); >+} >+ >+void JIT::emitSlow_op_sub(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ JITSubIC* subIC = bitwise_cast<JITSubIC*>(m_instructionToMathIC.get(currentInstruction)); >+ emitMathICSlow(subIC, currentInstruction, operationValueSubProfiledOptimize, operationValueSubProfiled, operationValueSubOptimize); >+} >+ >+/* ------------------------------ END: OP_ADD, OP_SUB, OP_MUL, OP_POW ------------------------------ */ >+ >+void JIT::emitWriteBarrier(JSCell* owner) >+{ >+ Jump ownerIsRememberedOrInEden = barrierBranch(*vm(), owner, regT0); >+ callOperation(operationWriteBarrierSlowPath, owner); >+ ownerIsRememberedOrInEden.link(this); >+} >+ >+void JIT::emitByValIdentifierCheck(ByValInfo* byValInfo, RegisterID cell, RegisterID scratch, const Identifier& propertyName, JumpList& slowCases) >+{ >+ if (propertyName.isSymbol()) >+ slowCases.append(branchPtr(NotEqual, cell, TrustedImmPtr(byValInfo->cachedSymbol.get()))); >+ else { >+ slowCases.append(branchIfNotString(cell)); >+ loadPtr(Address(cell, JSString::offsetOfValue()), scratch); >+ slowCases.append(branchPtr(NotEqual, scratch, 
TrustedImmPtr(propertyName.impl()))); >+ } >+} >+ >+void JIT::privateCompileGetByVal(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode) >+{ >+ Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >+ >+ PatchableJump badType; >+ JumpList slowCases; >+ >+ switch (arrayMode) { >+ case JITInt32: >+ slowCases = emitInt32GetByVal(currentInstruction, badType); >+ break; >+ case JITDouble: >+ slowCases = emitDoubleGetByVal(currentInstruction, badType); >+ break; >+ case JITContiguous: >+ slowCases = emitContiguousGetByVal(currentInstruction, badType); >+ break; >+ case JITArrayStorage: >+ slowCases = emitArrayStorageGetByVal(currentInstruction, badType); >+ break; >+ case JITDirectArguments: >+ slowCases = emitDirectArgumentsGetByVal(currentInstruction, badType); >+ break; >+ case JITScopedArguments: >+ slowCases = emitScopedArgumentsGetByVal(currentInstruction, badType); >+ break; >+ default: >+ TypedArrayType type = typedArrayTypeForJITArrayMode(arrayMode); >+ if (isInt(type)) >+ slowCases = emitIntTypedArrayGetByVal(currentInstruction, badType, type); >+ else >+ slowCases = emitFloatTypedArrayGetByVal(currentInstruction, badType, type); >+ break; >+ } >+ >+ Jump done = jump(); >+ >+ LinkBuffer patchBuffer(*this, m_codeBlock); >+ >+ patchBuffer.link(badType, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ >+ patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >+ >+ byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >+ m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >+ "Baseline get_by_val stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >+ >+ MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >+ MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationGetByValGeneric)); >+} >+ >+void JIT::privateCompileGetByValWithCachedId(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, const Identifier& propertyName) >+{ >+ Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >+ >+ Jump fastDoneCase; >+ Jump slowDoneCase; >+ JumpList slowCases; >+ >+ JITGetByIdGenerator gen = emitGetByValWithCachedId(byValInfo, currentInstruction, propertyName, fastDoneCase, slowDoneCase, slowCases); >+ >+ ConcurrentJSLocker locker(m_codeBlock->m_lock); >+ LinkBuffer patchBuffer(*this, m_codeBlock); >+ patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ patchBuffer.link(fastDoneCase, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >+ patchBuffer.link(slowDoneCase, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToNextHotPath)); >+ if (!m_exceptionChecks.empty()) >+ patchBuffer.link(m_exceptionChecks, byValInfo->exceptionHandler); >+ >+ for (const auto& callSite : m_calls) { >+ if (callSite.callee) >+ patchBuffer.link(callSite.from, callSite.callee); >+ } >+ gen.finalize(patchBuffer, patchBuffer); >+ >+ 
byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >+ m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >+ "Baseline get_by_val with cached property name '%s' stub for %s, return point %p", propertyName.impl()->utf8().data(), toCString(*m_codeBlock).data(), returnAddress.value()); >+ byValInfo->stubInfo = gen.stubInfo(); >+ >+ MacroAssembler::repatchJump(byValInfo->notIndexJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >+ MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationGetByValGeneric)); >+} >+ >+void JIT::privateCompilePutByVal(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode) >+{ >+ Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >+ >+ PatchableJump badType; >+ JumpList slowCases; >+ >+ bool needsLinkForWriteBarrier = false; >+ >+ switch (arrayMode) { >+ case JITInt32: >+ slowCases = emitInt32PutByVal(currentInstruction, badType); >+ break; >+ case JITDouble: >+ slowCases = emitDoublePutByVal(currentInstruction, badType); >+ break; >+ case JITContiguous: >+ slowCases = emitContiguousPutByVal(currentInstruction, badType); >+ needsLinkForWriteBarrier = true; >+ break; >+ case JITArrayStorage: >+ slowCases = emitArrayStoragePutByVal(currentInstruction, badType); >+ needsLinkForWriteBarrier = true; >+ break; >+ default: >+ TypedArrayType type = typedArrayTypeForJITArrayMode(arrayMode); >+ if (isInt(type)) >+ slowCases = emitIntTypedArrayPutByVal(currentInstruction, badType, type); >+ else >+ slowCases = emitFloatTypedArrayPutByVal(currentInstruction, badType, type); >+ break; >+ } >+ >+ Jump done = jump(); >+ >+ LinkBuffer patchBuffer(*this, m_codeBlock); >+ patchBuffer.link(badType, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >+ if (needsLinkForWriteBarrier) { >+ ASSERT(removeCodePtrTag(m_calls.last().callee.executableAddress()) == removeCodePtrTag(operationWriteBarrierSlowPath)); >+ patchBuffer.link(m_calls.last().from, m_calls.last().callee); >+ } >+ >+ bool isDirect = Interpreter::getOpcodeID(currentInstruction->u.opcode) == op_put_by_val_direct; >+ if (!isDirect) { >+ byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >+ m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >+ "Baseline put_by_val stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >+ >+ } else { >+ byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >+ m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >+ "Baseline put_by_val_direct stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >+ } >+ MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >+ MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(isDirect ? 
operationDirectPutByValGeneric : operationPutByValGeneric)); >+} >+ >+void JIT::privateCompilePutByValWithCachedId(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, PutKind putKind, const Identifier& propertyName) >+{ >+ Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >+ >+ JumpList doneCases; >+ JumpList slowCases; >+ >+ JITPutByIdGenerator gen = emitPutByValWithCachedId(byValInfo, currentInstruction, putKind, propertyName, doneCases, slowCases); >+ >+ ConcurrentJSLocker locker(m_codeBlock->m_lock); >+ LinkBuffer patchBuffer(*this, m_codeBlock); >+ patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ patchBuffer.link(doneCases, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >+ if (!m_exceptionChecks.empty()) >+ patchBuffer.link(m_exceptionChecks, byValInfo->exceptionHandler); >+ >+ for (const auto& callSite : m_calls) { >+ if (callSite.callee) >+ patchBuffer.link(callSite.from, callSite.callee); >+ } >+ gen.finalize(patchBuffer, patchBuffer); >+ >+ byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >+ m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >+ "Baseline put_by_val%s with cached property name '%s' stub for %s, return point %p", (putKind == Direct) ? "_direct" : "", propertyName.impl()->utf8().data(), toCString(*m_codeBlock).data(), returnAddress.value()); >+ byValInfo->stubInfo = gen.stubInfo(); >+ >+ MacroAssembler::repatchJump(byValInfo->notIndexJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >+ MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(putKind == Direct ? 
operationDirectPutByValGeneric : operationPutByValGeneric)); >+} >+ >+ >+JIT::JumpList JIT::emitDirectArgumentsGetByVal(Instruction*, PatchableJump& badType) >+{ >+ JumpList slowCases; >+ >+#if USE(JSVALUE64) >+ RegisterID base = regT0; >+ RegisterID property = regT1; >+ JSValueRegs result = JSValueRegs(regT0); >+ RegisterID scratch = regT3; >+ RegisterID scratch2 = regT4; >+#else >+ RegisterID base = regT0; >+ RegisterID property = regT2; >+ JSValueRegs result = JSValueRegs(regT1, regT0); >+ RegisterID scratch = regT3; >+ RegisterID scratch2 = regT4; >+#endif >+ >+ load8(Address(base, JSCell::typeInfoTypeOffset()), scratch); >+ badType = patchableBranch32(NotEqual, scratch, TrustedImm32(DirectArgumentsType)); >+ >+ load32(Address(base, DirectArguments::offsetOfLength()), scratch2); >+ slowCases.append(branch32(AboveOrEqual, property, scratch2)); >+ slowCases.append(branchTestPtr(NonZero, Address(base, DirectArguments::offsetOfMappedArguments()))); >+ >+ loadValue(BaseIndex(base, property, TimesEight, DirectArguments::storageOffset()), result); >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitScopedArgumentsGetByVal(Instruction*, PatchableJump& badType) >+{ >+ JumpList slowCases; >+ >+#if USE(JSVALUE64) >+ RegisterID base = regT0; >+ RegisterID property = regT1; >+ JSValueRegs result = JSValueRegs(regT0); >+ RegisterID scratch = regT3; >+ RegisterID scratch2 = regT4; >+ RegisterID scratch3 = regT5; >+#else >+ RegisterID base = regT0; >+ RegisterID property = regT2; >+ JSValueRegs result = JSValueRegs(regT1, regT0); >+ RegisterID scratch = regT3; >+ RegisterID scratch2 = regT4; >+ RegisterID scratch3 = regT5; >+#endif >+ >+ load8(Address(base, JSCell::typeInfoTypeOffset()), scratch); >+ badType = patchableBranch32(NotEqual, scratch, TrustedImm32(ScopedArgumentsType)); >+ loadPtr(Address(base, ScopedArguments::offsetOfStorage()), scratch3); >+ xorPtr(TrustedImmPtr(ScopedArgumentsPoison::key()), scratch3); >+ slowCases.append(branch32(AboveOrEqual, property, Address(scratch3, ScopedArguments::offsetOfTotalLengthInStorage()))); >+ >+ loadPtr(Address(base, ScopedArguments::offsetOfTable()), scratch); >+ xorPtr(TrustedImmPtr(ScopedArgumentsPoison::key()), scratch); >+ load32(Address(scratch, ScopedArgumentsTable::offsetOfLength()), scratch2); >+ Jump overflowCase = branch32(AboveOrEqual, property, scratch2); >+ loadPtr(Address(base, ScopedArguments::offsetOfScope()), scratch2); >+ xorPtr(TrustedImmPtr(ScopedArgumentsPoison::key()), scratch2); >+ loadPtr(Address(scratch, ScopedArgumentsTable::offsetOfArguments()), scratch); >+ load32(BaseIndex(scratch, property, TimesFour), scratch); >+ slowCases.append(branch32(Equal, scratch, TrustedImm32(ScopeOffset::invalidOffset))); >+ loadValue(BaseIndex(scratch2, scratch, TimesEight, JSLexicalEnvironment::offsetOfVariables()), result); >+ Jump done = jump(); >+ overflowCase.link(this); >+ sub32(property, scratch2); >+ neg32(scratch2); >+ loadValue(BaseIndex(scratch3, scratch2, TimesEight), result); >+ slowCases.append(branchIfEmpty(result)); >+ done.link(this); >+ >+ load32(Address(scratch3, ScopedArguments::offsetOfTotalLengthInStorage()), scratch); >+ emitPreparePreciseIndexMask32(property, scratch, scratch2); >+ andPtr(scratch2, result.payloadGPR()); >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitIntTypedArrayGetByVal(Instruction*, PatchableJump& badType, TypedArrayType type) >+{ >+ ASSERT(isInt(type)); >+ >+ // The best way to test the array type is to use the classInfo. 
We need to do so without >+ // clobbering the register that holds the indexing type, base, and property. >+ >+#if USE(JSVALUE64) >+ RegisterID base = regT0; >+ RegisterID property = regT1; >+ RegisterID resultPayload = regT0; >+ RegisterID scratch = regT3; >+ RegisterID scratch2 = regT4; >+#else >+ RegisterID base = regT0; >+ RegisterID property = regT2; >+ RegisterID resultPayload = regT0; >+ RegisterID resultTag = regT1; >+ RegisterID scratch = regT3; >+ RegisterID scratch2 = regT4; >+#endif >+ >+ JumpList slowCases; >+ >+ load8(Address(base, JSCell::typeInfoTypeOffset()), scratch); >+ badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type))); >+ slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength()))); >+ loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch); >+ cageConditionally(Gigacage::Primitive, scratch, scratch2); >+ >+ switch (elementSize(type)) { >+ case 1: >+ if (JSC::isSigned(type)) >+ load8SignedExtendTo32(BaseIndex(scratch, property, TimesOne), resultPayload); >+ else >+ load8(BaseIndex(scratch, property, TimesOne), resultPayload); >+ break; >+ case 2: >+ if (JSC::isSigned(type)) >+ load16SignedExtendTo32(BaseIndex(scratch, property, TimesTwo), resultPayload); >+ else >+ load16(BaseIndex(scratch, property, TimesTwo), resultPayload); >+ break; >+ case 4: >+ load32(BaseIndex(scratch, property, TimesFour), resultPayload); >+ break; >+ default: >+ CRASH(); >+ } >+ >+ Jump done; >+ if (type == TypeUint32) { >+ Jump canBeInt = branch32(GreaterThanOrEqual, resultPayload, TrustedImm32(0)); >+ >+ convertInt32ToDouble(resultPayload, fpRegT0); >+ addDouble(AbsoluteAddress(&twoToThe32), fpRegT0); >+#if USE(JSVALUE64) >+ moveDoubleTo64(fpRegT0, resultPayload); >+ sub64(tagTypeNumberRegister, resultPayload); >+#else >+ moveDoubleToInts(fpRegT0, resultPayload, resultTag); >+#endif >+ >+ done = jump(); >+ canBeInt.link(this); >+ } >+ >+#if USE(JSVALUE64) >+ or64(tagTypeNumberRegister, resultPayload); >+#else >+ move(TrustedImm32(JSValue::Int32Tag), resultTag); >+#endif >+ if (done.isSet()) >+ done.link(this); >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitFloatTypedArrayGetByVal(Instruction*, PatchableJump& badType, TypedArrayType type) >+{ >+ ASSERT(isFloat(type)); >+ >+#if USE(JSVALUE64) >+ RegisterID base = regT0; >+ RegisterID property = regT1; >+ RegisterID resultPayload = regT0; >+ RegisterID scratch = regT3; >+ RegisterID scratch2 = regT4; >+#else >+ RegisterID base = regT0; >+ RegisterID property = regT2; >+ RegisterID resultPayload = regT0; >+ RegisterID resultTag = regT1; >+ RegisterID scratch = regT3; >+ RegisterID scratch2 = regT4; >+#endif >+ >+ JumpList slowCases; >+ >+ load8(Address(base, JSCell::typeInfoTypeOffset()), scratch); >+ badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type))); >+ slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength()))); >+ loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch); >+ cageConditionally(Gigacage::Primitive, scratch, scratch2); >+ >+ switch (elementSize(type)) { >+ case 4: >+ loadFloat(BaseIndex(scratch, property, TimesFour), fpRegT0); >+ convertFloatToDouble(fpRegT0, fpRegT0); >+ break; >+ case 8: { >+ loadDouble(BaseIndex(scratch, property, TimesEight), fpRegT0); >+ break; >+ } >+ default: >+ CRASH(); >+ } >+ >+ Jump notNaN = branchDouble(DoubleEqual, fpRegT0, fpRegT0); >+ static const double NaN = PNaN; >+ loadDouble(TrustedImmPtr(&NaN), 
fpRegT0); >+ notNaN.link(this); >+ >+#if USE(JSVALUE64) >+ moveDoubleTo64(fpRegT0, resultPayload); >+ sub64(tagTypeNumberRegister, resultPayload); >+#else >+ moveDoubleToInts(fpRegT0, resultPayload, resultTag); >+#endif >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitIntTypedArrayPutByVal(Instruction* currentInstruction, PatchableJump& badType, TypedArrayType type) >+{ >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ASSERT(isInt(type)); >+ >+ int value = currentInstruction[3].u.operand; >+ >+#if USE(JSVALUE64) >+ RegisterID base = regT0; >+ RegisterID property = regT1; >+ RegisterID earlyScratch = regT3; >+ RegisterID lateScratch = regT2; >+ RegisterID lateScratch2 = regT4; >+#else >+ RegisterID base = regT0; >+ RegisterID property = regT2; >+ RegisterID earlyScratch = regT3; >+ RegisterID lateScratch = regT1; >+ RegisterID lateScratch2 = regT4; >+#endif >+ >+ JumpList slowCases; >+ >+ load8(Address(base, JSCell::typeInfoTypeOffset()), earlyScratch); >+ badType = patchableBranch32(NotEqual, earlyScratch, TrustedImm32(typeForTypedArrayType(type))); >+ Jump inBounds = branch32(Below, property, Address(base, JSArrayBufferView::offsetOfLength())); >+ emitArrayProfileOutOfBoundsSpecialCase(profile); >+ slowCases.append(jump()); >+ inBounds.link(this); >+ >+#if USE(JSVALUE64) >+ emitGetVirtualRegister(value, earlyScratch); >+ slowCases.append(branchIfNotInt32(earlyScratch)); >+#else >+ emitLoad(value, lateScratch, earlyScratch); >+ slowCases.append(branchIfNotInt32(lateScratch)); >+#endif >+ >+ // We would be loading this into base as in get_by_val, except that the slow >+ // path expects the base to be unclobbered. >+ loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), lateScratch); >+ cageConditionally(Gigacage::Primitive, lateScratch, lateScratch2); >+ >+ if (isClamped(type)) { >+ ASSERT(elementSize(type) == 1); >+ ASSERT(!JSC::isSigned(type)); >+ Jump inBounds = branch32(BelowOrEqual, earlyScratch, TrustedImm32(0xff)); >+ Jump tooBig = branch32(GreaterThan, earlyScratch, TrustedImm32(0xff)); >+ xor32(earlyScratch, earlyScratch); >+ Jump clamped = jump(); >+ tooBig.link(this); >+ move(TrustedImm32(0xff), earlyScratch); >+ clamped.link(this); >+ inBounds.link(this); >+ } >+ >+ switch (elementSize(type)) { >+ case 1: >+ store8(earlyScratch, BaseIndex(lateScratch, property, TimesOne)); >+ break; >+ case 2: >+ store16(earlyScratch, BaseIndex(lateScratch, property, TimesTwo)); >+ break; >+ case 4: >+ store32(earlyScratch, BaseIndex(lateScratch, property, TimesFour)); >+ break; >+ default: >+ CRASH(); >+ } >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitFloatTypedArrayPutByVal(Instruction* currentInstruction, PatchableJump& badType, TypedArrayType type) >+{ >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ASSERT(isFloat(type)); >+ >+ int value = currentInstruction[3].u.operand; >+ >+#if USE(JSVALUE64) >+ RegisterID base = regT0; >+ RegisterID property = regT1; >+ RegisterID earlyScratch = regT3; >+ RegisterID lateScratch = regT2; >+ RegisterID lateScratch2 = regT4; >+#else >+ RegisterID base = regT0; >+ RegisterID property = regT2; >+ RegisterID earlyScratch = regT3; >+ RegisterID lateScratch = regT1; >+ RegisterID lateScratch2 = regT4; >+#endif >+ >+ JumpList slowCases; >+ >+ load8(Address(base, JSCell::typeInfoTypeOffset()), earlyScratch); >+ badType = patchableBranch32(NotEqual, earlyScratch, TrustedImm32(typeForTypedArrayType(type))); >+ Jump inBounds = branch32(Below, property, Address(base, JSArrayBufferView::offsetOfLength())); >+ 
emitArrayProfileOutOfBoundsSpecialCase(profile); >+ slowCases.append(jump()); >+ inBounds.link(this); >+ >+#if USE(JSVALUE64) >+ emitGetVirtualRegister(value, earlyScratch); >+ Jump doubleCase = branchIfNotInt32(earlyScratch); >+ convertInt32ToDouble(earlyScratch, fpRegT0); >+ Jump ready = jump(); >+ doubleCase.link(this); >+ slowCases.append(branchIfNotNumber(earlyScratch)); >+ add64(tagTypeNumberRegister, earlyScratch); >+ move64ToDouble(earlyScratch, fpRegT0); >+ ready.link(this); >+#else >+ emitLoad(value, lateScratch, earlyScratch); >+ Jump doubleCase = branchIfNotInt32(lateScratch); >+ convertInt32ToDouble(earlyScratch, fpRegT0); >+ Jump ready = jump(); >+ doubleCase.link(this); >+ slowCases.append(branch32(Above, lateScratch, TrustedImm32(JSValue::LowestTag))); >+ moveIntsToDouble(earlyScratch, lateScratch, fpRegT0, fpRegT1); >+ ready.link(this); >+#endif >+ >+ // We would be loading this into base as in get_by_val, except that the slow >+ // path expects the base to be unclobbered. >+ loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), lateScratch); >+ cageConditionally(Gigacage::Primitive, lateScratch, lateScratch2); >+ >+ switch (elementSize(type)) { >+ case 4: >+ convertDoubleToFloat(fpRegT0, fpRegT0); >+ storeFloat(fpRegT0, BaseIndex(lateScratch, property, TimesFour)); >+ break; >+ case 8: >+ storeDouble(fpRegT0, BaseIndex(lateScratch, property, TimesEight)); >+ break; >+ default: >+ CRASH(); >+ } >+ >+ return slowCases; >+} >+ > } // namespace JSC > > #endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JIT32_64.cpp b/Source/JavaScriptCore/jit/JIT32_64.cpp >new file mode 100644 >index 0000000000000000000000000000000000000000..472f717ab302f25aae3a94a3eab504335f83b430 >--- /dev/null >+++ b/Source/JavaScriptCore/jit/JIT32_64.cpp >@@ -0,0 +1,3045 @@ >+/* >+ * Copyright (C) 2008-2018 Apple Inc. All rights reserved. >+ * Copyright (C) 2010 Patrick Gansterer <paroga@paroga.com> >+ * Copyright (C) 2018 Yusuke Suzuki <utatane.tea@gmail.com> >+ * >+ * Redistribution and use in source and binary forms, with or without >+ * modification, are permitted provided that the following conditions >+ * are met: >+ * 1. Redistributions of source code must retain the above copyright >+ * notice, this list of conditions and the following disclaimer. >+ * 2. Redistributions in binary form must reproduce the above copyright >+ * notice, this list of conditions and the following disclaimer in the >+ * documentation and/or other materials provided with the distribution. >+ * >+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
>+ */ >+ >+#include "config.h" >+#include "JIT.h" >+ >+#if ENABLE(JIT) >+ >+#include "BytecodeStructs.h" >+#include "CodeBlock.h" >+#include "DirectArguments.h" >+#include "Exception.h" >+#include "GCAwareJITStubRoutine.h" >+#include "InterpreterInlines.h" >+#include "JITInlines.h" >+#include "JSArray.h" >+#include "JSCast.h" >+#include "JSFunction.h" >+#include "JSLexicalEnvironment.h" >+#include "JSPropertyNameEnumerator.h" >+#include "LinkBuffer.h" >+#include "SetupVarargsFrame.h" >+#include "SlowPathCall.h" >+#include "StackAlignment.h" >+#include "StructureStubInfo.h" >+#include "ThunkGenerators.h" >+#include "TypeLocation.h" >+#include "TypeProfilerLog.h" >+#include "VirtualRegister.h" >+#include <wtf/StringPrintStream.h> >+ >+namespace JSC { >+ >+#if USE(JSVALUE32_64) >+ >+void JIT::emit_op_mov(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ if (m_codeBlock->isConstantRegisterIndex(src)) >+ emitStore(dst, getConstantOperand(src)); >+ else { >+ emitLoad(src, regT1, regT0); >+ emitStore(dst, regT1, regT0); >+ } >+} >+ >+void JIT::emit_op_end(Instruction* currentInstruction) >+{ >+ ASSERT(returnValueGPR != callFrameRegister); >+ emitLoad(currentInstruction[1].u.operand, regT1, returnValueGPR); >+ emitRestoreCalleeSaves(); >+ emitFunctionEpilogue(); >+ ret(); >+} >+ >+void JIT::emit_op_jmp(Instruction* currentInstruction) >+{ >+ unsigned target = currentInstruction[1].u.operand; >+ addJump(jump(), target); >+} >+ >+void JIT::emit_op_new_object(Instruction* currentInstruction) >+{ >+ Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); >+ size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity()); >+ Allocator allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); >+ >+ RegisterID resultReg = returnValueGPR; >+ RegisterID allocatorReg = regT1; >+ RegisterID scratchReg = regT3; >+ >+ if (!allocator) >+ addSlowCase(jump()); >+ else { >+ JumpList slowCases; >+ auto butterfly = TrustedImmPtr(nullptr); >+ emitAllocateJSObject(resultReg, JITAllocator::constant(allocator), allocatorReg, TrustedImmPtr(structure), butterfly, scratchReg, slowCases); >+ emitInitializeInlineStorage(resultReg, structure->inlineCapacity()); >+ addSlowCase(slowCases); >+ emitStoreCell(currentInstruction[1].u.operand, resultReg); >+ } >+} >+ >+void JIT::emitSlow_op_new_object(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int dst = currentInstruction[1].u.operand; >+ Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); >+ callOperation(operationNewObject, structure); >+ emitStoreCell(dst, returnValueGPR); >+} >+ >+void JIT::emit_op_overrides_has_instance(Instruction* currentInstruction) >+{ >+ auto& bytecode = *reinterpret_cast<OpOverridesHasInstance*>(currentInstruction); >+ int dst = bytecode.dst(); >+ int constructor = bytecode.constructor(); >+ int hasInstanceValue = bytecode.hasInstanceValue(); >+ >+ emitLoadPayload(hasInstanceValue, regT0); >+ // We don't jump if we know what Symbol.hasInstance would do. 
>+ Jump hasInstanceValueNotCell = emitJumpIfNotJSCell(hasInstanceValue); >+ Jump customhasInstanceValue = branchPtr(NotEqual, regT0, TrustedImmPtr(m_codeBlock->globalObject()->functionProtoHasInstanceSymbolFunction())); >+ >+ // We know that constructor is an object from the way bytecode is emitted for instanceof expressions. >+ emitLoadPayload(constructor, regT0); >+ >+ // Check that constructor 'ImplementsDefaultHasInstance' i.e. the object is not a C-API user nor a bound function. >+ test8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(ImplementsDefaultHasInstance), regT0); >+ Jump done = jump(); >+ >+ hasInstanceValueNotCell.link(this); >+ customhasInstanceValue.link(this); >+ move(TrustedImm32(1), regT0); >+ >+ done.link(this); >+ emitStoreBool(dst, regT0); >+ >+} >+ >+void JIT::emit_op_instanceof(Instruction* currentInstruction) >+{ >+ auto& bytecode = *reinterpret_cast<OpInstanceof*>(currentInstruction); >+ int dst = bytecode.dst(); >+ int value = bytecode.value(); >+ int proto = bytecode.prototype(); >+ >+ // Load the operands into registers. >+ // We use regT0 for baseVal since we will be done with this first, and we can then use it for the result. >+ emitLoadPayload(value, regT2); >+ emitLoadPayload(proto, regT1); >+ >+ // Check that proto are cells. baseVal must be a cell - this is checked by the get_by_id for Symbol.hasInstance. >+ emitJumpSlowCaseIfNotJSCell(value); >+ emitJumpSlowCaseIfNotJSCell(proto); >+ >+ JITInstanceOfGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), >+ RegisterSet::stubUnavailableRegisters(), >+ regT0, // result >+ regT2, // value >+ regT1, // proto >+ regT3, regT4); // scratch >+ gen.generateFastPath(*this); >+ m_instanceOfs.append(gen); >+ >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_op_instanceof_custom(Instruction*) >+{ >+ // This always goes to slow path since we expect it to be rare. 
>+ addSlowCase(jump()); >+} >+ >+void JIT::emitSlow_op_instanceof(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ auto& bytecode = *reinterpret_cast<OpInstanceof*>(currentInstruction); >+ int dst = bytecode.dst(); >+ int value = bytecode.value(); >+ int proto = bytecode.prototype(); >+ >+ JITInstanceOfGenerator& gen = m_instanceOfs[m_instanceOfIndex++]; >+ >+ Label coldPathBegin = label(); >+ emitLoadTag(value, regT0); >+ emitLoadTag(proto, regT3); >+ Call call = callOperation(operationInstanceOfOptimize, dst, gen.stubInfo(), JSValueRegs(regT0, regT2), JSValueRegs(regT3, regT1)); >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emitSlow_op_instanceof_custom(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ auto& bytecode = *reinterpret_cast<OpInstanceofCustom*>(currentInstruction); >+ int dst = bytecode.dst(); >+ int value = bytecode.value(); >+ int constructor = bytecode.constructor(); >+ int hasInstanceValue = bytecode.hasInstanceValue(); >+ >+ emitLoad(value, regT1, regT0); >+ emitLoadPayload(constructor, regT2); >+ emitLoad(hasInstanceValue, regT4, regT3); >+ callOperation(operationInstanceOfCustom, JSValueRegs(regT1, regT0), regT2, JSValueRegs(regT4, regT3)); >+ emitStoreBool(dst, returnValueGPR); >+} >+ >+void JIT::emit_op_is_empty(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitLoad(value, regT1, regT0); >+ compare32(Equal, regT1, TrustedImm32(JSValue::EmptyValueTag), regT0); >+ >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_op_is_undefined(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitLoad(value, regT1, regT0); >+ Jump isCell = branchIfCell(regT1); >+ >+ compare32(Equal, regT1, TrustedImm32(JSValue::UndefinedTag), regT0); >+ Jump done = jump(); >+ >+ isCell.link(this); >+ Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >+ move(TrustedImm32(0), regT0); >+ Jump notMasqueradesAsUndefined = jump(); >+ >+ isMasqueradesAsUndefined.link(this); >+ loadPtr(Address(regT0, JSCell::structureIDOffset()), regT1); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ loadPtr(Address(regT1, Structure::globalObjectOffset()), regT1); >+ compare32(Equal, regT0, regT1, regT0); >+ >+ notMasqueradesAsUndefined.link(this); >+ done.link(this); >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_op_is_boolean(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitLoadTag(value, regT0); >+ compare32(Equal, regT0, TrustedImm32(JSValue::BooleanTag), regT0); >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_op_is_number(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitLoadTag(value, regT0); >+ add32(TrustedImm32(1), regT0); >+ compare32(Below, regT0, TrustedImm32(JSValue::LowestTag + 1), regT0); >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_op_is_cell_with_type(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ int type = currentInstruction[3].u.operand; >+ >+ emitLoad(value, regT1, regT0); >+ Jump isNotCell = 
branchIfNotCell(regT1); >+ >+ compare8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(type), regT0); >+ Jump done = jump(); >+ >+ isNotCell.link(this); >+ move(TrustedImm32(0), regT0); >+ >+ done.link(this); >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_op_is_object(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitLoad(value, regT1, regT0); >+ Jump isNotCell = branchIfNotCell(regT1); >+ >+ compare8(AboveOrEqual, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType), regT0); >+ Jump done = jump(); >+ >+ isNotCell.link(this); >+ move(TrustedImm32(0), regT0); >+ >+ done.link(this); >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_op_to_primitive(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ >+ Jump isImm = branchIfNotCell(regT1); >+ addSlowCase(branchIfObject(regT0)); >+ isImm.link(this); >+ >+ if (dst != src) >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emit_op_set_function_name(Instruction* currentInstruction) >+{ >+ int func = currentInstruction[1].u.operand; >+ int name = currentInstruction[2].u.operand; >+ emitLoadPayload(func, regT1); >+ emitLoad(name, regT3, regT2); >+ callOperation(operationSetFunctionName, regT1, JSValueRegs(regT3, regT2)); >+} >+ >+void JIT::emit_op_not(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitLoadTag(src, regT0); >+ >+ emitLoad(src, regT1, regT0); >+ addSlowCase(branchIfNotBoolean(regT1, InvalidGPRReg)); >+ xor32(TrustedImm32(1), regT0); >+ >+ emitStoreBool(dst, regT0, (dst == src)); >+} >+ >+void JIT::emit_op_jfalse(Instruction* currentInstruction) >+{ >+ int cond = currentInstruction[1].u.operand; >+ unsigned target = currentInstruction[2].u.operand; >+ >+ emitLoad(cond, regT1, regT0); >+ >+ JSValueRegs value(regT1, regT0); >+ GPRReg scratch = regT2; >+ GPRReg result = regT3; >+ bool shouldCheckMasqueradesAsUndefined = true; >+ emitConvertValueToBoolean(*vm(), value, result, scratch, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()); >+ >+ addJump(branchTest32(Zero, result), target); >+} >+ >+void JIT::emit_op_jtrue(Instruction* currentInstruction) >+{ >+ int cond = currentInstruction[1].u.operand; >+ unsigned target = currentInstruction[2].u.operand; >+ >+ emitLoad(cond, regT1, regT0); >+ bool shouldCheckMasqueradesAsUndefined = true; >+ JSValueRegs value(regT1, regT0); >+ GPRReg scratch = regT2; >+ GPRReg result = regT3; >+ emitConvertValueToBoolean(*vm(), value, result, scratch, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()); >+ >+ addJump(branchTest32(NonZero, result), target); >+} >+ >+void JIT::emit_op_jeq_null(Instruction* currentInstruction) >+{ >+ int src = currentInstruction[1].u.operand; >+ unsigned target = currentInstruction[2].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ >+ Jump isImmediate = branchIfNotCell(regT1); >+ >+ Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >+ loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ addJump(branchPtr(Equal, Address(regT2, Structure::globalObjectOffset()), regT0), target); >+ Jump masqueradesGlobalObjectIsForeign = 
jump(); >+ >+ // Now handle the immediate cases - undefined & null >+ isImmediate.link(this); >+ static_assert((JSValue::UndefinedTag + 1 == JSValue::NullTag) && (JSValue::NullTag & 0x1), ""); >+ or32(TrustedImm32(1), regT1); >+ addJump(branchIfNull(regT1), target); >+ >+ isNotMasqueradesAsUndefined.link(this); >+ masqueradesGlobalObjectIsForeign.link(this); >+} >+ >+void JIT::emit_op_jneq_null(Instruction* currentInstruction) >+{ >+ int src = currentInstruction[1].u.operand; >+ unsigned target = currentInstruction[2].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ >+ Jump isImmediate = branchIfNotCell(regT1); >+ >+ addJump(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target); >+ loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ addJump(branchPtr(NotEqual, Address(regT2, Structure::globalObjectOffset()), regT0), target); >+ Jump wasNotImmediate = jump(); >+ >+ // Now handle the immediate cases - undefined & null >+ isImmediate.link(this); >+ >+ static_assert((JSValue::UndefinedTag + 1 == JSValue::NullTag) && (JSValue::NullTag & 0x1), ""); >+ or32(TrustedImm32(1), regT1); >+ addJump(branchIfNotNull(regT1), target); >+ >+ wasNotImmediate.link(this); >+} >+ >+void JIT::emit_op_jneq_ptr(Instruction* currentInstruction) >+{ >+ int src = currentInstruction[1].u.operand; >+ Special::Pointer ptr = currentInstruction[2].u.specialPointer; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ Jump notCell = branchIfNotCell(regT1); >+ Jump equal = branchPtr(Equal, regT0, TrustedImmPtr(actualPointerFor(m_codeBlock, ptr))); >+ notCell.link(this); >+ store32(TrustedImm32(1), &currentInstruction[4].u.operand); >+ addJump(jump(), target); >+ equal.link(this); >+} >+ >+void JIT::emit_op_eq(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src1 = currentInstruction[2].u.operand; >+ int src2 = currentInstruction[3].u.operand; >+ >+ emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >+ addSlowCase(branch32(NotEqual, regT1, regT3)); >+ addSlowCase(branchIfCell(regT1)); >+ addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >+ >+ compare32(Equal, regT0, regT2, regT0); >+ >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emitSlow_op_eq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int dst = currentInstruction[1].u.operand; >+ >+ JumpList storeResult; >+ JumpList genericCase; >+ >+ genericCase.append(getSlowCase(iter)); // tags not equal >+ >+ linkSlowCase(iter); // tags equal and JSCell >+ genericCase.append(branchIfNotString(regT0)); >+ genericCase.append(branchIfNotString(regT2)); >+ >+ // String case. >+ callOperation(operationCompareStringEq, regT0, regT2); >+ storeResult.append(jump()); >+ >+ // Generic case. 
>+ genericCase.append(getSlowCase(iter)); // doubles >+ genericCase.link(this); >+ callOperation(operationCompareEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >+ >+ storeResult.link(this); >+ emitStoreBool(dst, returnValueGPR); >+} >+ >+void JIT::emit_op_jeq(Instruction* currentInstruction) >+{ >+ int target = currentInstruction[3].u.operand; >+ int src1 = currentInstruction[1].u.operand; >+ int src2 = currentInstruction[2].u.operand; >+ >+ emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >+ addSlowCase(branch32(NotEqual, regT1, regT3)); >+ addSlowCase(branchIfCell(regT1)); >+ addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >+ >+ addJump(branch32(Equal, regT0, regT2), target); >+} >+ >+void JIT::compileOpEqJumpSlow(Vector<SlowCaseEntry>::iterator& iter, CompileOpEqType type, int jumpTarget) >+{ >+ JumpList done; >+ JumpList genericCase; >+ >+ genericCase.append(getSlowCase(iter)); // tags not equal >+ >+ linkSlowCase(iter); // tags equal and JSCell >+ genericCase.append(branchIfNotString(regT0)); >+ genericCase.append(branchIfNotString(regT2)); >+ >+ // String case. >+ callOperation(operationCompareStringEq, regT0, regT2); >+ emitJumpSlowToHot(branchTest32(type == CompileOpEqType::Eq ? NonZero : Zero, returnValueGPR), jumpTarget); >+ done.append(jump()); >+ >+ // Generic case. >+ genericCase.append(getSlowCase(iter)); // doubles >+ genericCase.link(this); >+ callOperation(operationCompareEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >+ emitJumpSlowToHot(branchTest32(type == CompileOpEqType::Eq ? NonZero : Zero, returnValueGPR), jumpTarget); >+ >+ done.link(this); >+} >+ >+void JIT::emitSlow_op_jeq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpEqJumpSlow(iter, CompileOpEqType::Eq, currentInstruction[3].u.operand); >+} >+ >+void JIT::emit_op_neq(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src1 = currentInstruction[2].u.operand; >+ int src2 = currentInstruction[3].u.operand; >+ >+ emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >+ addSlowCase(branch32(NotEqual, regT1, regT3)); >+ addSlowCase(branchIfCell(regT1)); >+ addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >+ >+ compare32(NotEqual, regT0, regT2, regT0); >+ >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emitSlow_op_neq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int dst = currentInstruction[1].u.operand; >+ >+ JumpList storeResult; >+ JumpList genericCase; >+ >+ genericCase.append(getSlowCase(iter)); // tags not equal >+ >+ linkSlowCase(iter); // tags equal and JSCell >+ genericCase.append(branchIfNotString(regT0)); >+ genericCase.append(branchIfNotString(regT2)); >+ >+ // String case. >+ callOperation(operationCompareStringEq, regT0, regT2); >+ storeResult.append(jump()); >+ >+ // Generic case. 
>+ genericCase.append(getSlowCase(iter)); // doubles >+ genericCase.link(this); >+ callOperation(operationCompareEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >+ >+ storeResult.link(this); >+ xor32(TrustedImm32(0x1), returnValueGPR); >+ emitStoreBool(dst, returnValueGPR); >+} >+ >+void JIT::emit_op_jneq(Instruction* currentInstruction) >+{ >+ int target = currentInstruction[3].u.operand; >+ int src1 = currentInstruction[1].u.operand; >+ int src2 = currentInstruction[2].u.operand; >+ >+ emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >+ addSlowCase(branch32(NotEqual, regT1, regT3)); >+ addSlowCase(branchIfCell(regT1)); >+ addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >+ >+ addJump(branch32(NotEqual, regT0, regT2), target); >+} >+ >+void JIT::emitSlow_op_jneq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpEqJumpSlow(iter, CompileOpEqType::NEq, currentInstruction[3].u.operand); >+} >+ >+void JIT::compileOpStrictEq(Instruction* currentInstruction, CompileOpStrictEqType type) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src1 = currentInstruction[2].u.operand; >+ int src2 = currentInstruction[3].u.operand; >+ >+ emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >+ >+ // Bail if the tags differ, or are double. >+ addSlowCase(branch32(NotEqual, regT1, regT3)); >+ addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >+ >+ // Jump to a slow case if both are strings or symbols (non object). >+ Jump notCell = branchIfNotCell(regT1); >+ Jump firstIsObject = branchIfObject(regT0); >+ addSlowCase(branchIfNotObject(regT2)); >+ notCell.link(this); >+ firstIsObject.link(this); >+ >+ // Simply compare the payloads. >+ if (type == CompileOpStrictEqType::StrictEq) >+ compare32(Equal, regT0, regT2, regT0); >+ else >+ compare32(NotEqual, regT0, regT2, regT0); >+ >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_op_stricteq(Instruction* currentInstruction) >+{ >+ compileOpStrictEq(currentInstruction, CompileOpStrictEqType::StrictEq); >+} >+ >+void JIT::emit_op_nstricteq(Instruction* currentInstruction) >+{ >+ compileOpStrictEq(currentInstruction, CompileOpStrictEqType::NStrictEq); >+} >+ >+void JIT::compileOpStrictEqJump(Instruction* currentInstruction, CompileOpStrictEqType type) >+{ >+ int target = currentInstruction[3].u.operand; >+ int src1 = currentInstruction[1].u.operand; >+ int src2 = currentInstruction[2].u.operand; >+ >+ emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >+ >+ // Bail if the tags differ, or are double. >+ addSlowCase(branch32(NotEqual, regT1, regT3)); >+ addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >+ >+ // Jump to a slow case if both are strings or symbols (non object). >+ Jump notCell = branchIfNotCell(regT1); >+ Jump firstIsObject = branchIfObject(regT0); >+ addSlowCase(branchIfNotObject(regT2)); >+ notCell.link(this); >+ firstIsObject.link(this); >+ >+ // Simply compare the payloads. 
>+ if (type == CompileOpStrictEqType::StrictEq) >+ addJump(branch32(Equal, regT0, regT2), target); >+ else >+ addJump(branch32(NotEqual, regT0, regT2), target); >+} >+ >+void JIT::emit_op_jstricteq(Instruction* currentInstruction) >+{ >+ compileOpStrictEqJump(currentInstruction, CompileOpStrictEqType::StrictEq); >+} >+ >+void JIT::emit_op_jnstricteq(Instruction* currentInstruction) >+{ >+ compileOpStrictEqJump(currentInstruction, CompileOpStrictEqType::NStrictEq); >+} >+ >+void JIT::emitSlow_op_jstricteq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ unsigned target = currentInstruction[3].u.operand; >+ callOperation(operationCompareStrictEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >+ emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); >+} >+ >+void JIT::emitSlow_op_jnstricteq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ unsigned target = currentInstruction[3].u.operand; >+ callOperation(operationCompareStrictEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >+ emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); >+} >+ >+void JIT::emit_op_eq_null(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ Jump isImmediate = branchIfNotCell(regT1); >+ >+ Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >+ move(TrustedImm32(0), regT1); >+ Jump wasNotMasqueradesAsUndefined = jump(); >+ >+ isMasqueradesAsUndefined.link(this); >+ loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2); >+ compare32(Equal, regT0, regT2, regT1); >+ Jump wasNotImmediate = jump(); >+ >+ isImmediate.link(this); >+ >+ compare32(Equal, regT1, TrustedImm32(JSValue::NullTag), regT2); >+ compare32(Equal, regT1, TrustedImm32(JSValue::UndefinedTag), regT1); >+ or32(regT2, regT1); >+ >+ wasNotImmediate.link(this); >+ wasNotMasqueradesAsUndefined.link(this); >+ >+ emitStoreBool(dst, regT1); >+} >+ >+void JIT::emit_op_neq_null(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ Jump isImmediate = branchIfNotCell(regT1); >+ >+ Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >+ move(TrustedImm32(1), regT1); >+ Jump wasNotMasqueradesAsUndefined = jump(); >+ >+ isMasqueradesAsUndefined.link(this); >+ loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2); >+ compare32(NotEqual, regT0, regT2, regT1); >+ Jump wasNotImmediate = jump(); >+ >+ isImmediate.link(this); >+ >+ compare32(NotEqual, regT1, TrustedImm32(JSValue::NullTag), regT2); >+ compare32(NotEqual, regT1, TrustedImm32(JSValue::UndefinedTag), regT1); >+ and32(regT2, regT1); >+ >+ wasNotImmediate.link(this); >+ wasNotMasqueradesAsUndefined.link(this); >+ >+ emitStoreBool(dst, regT1); >+} >+ >+void JIT::emit_op_throw(Instruction* currentInstruction) >+{ >+ ASSERT(regT0 == returnValueGPR); >+ 
copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >+ emitLoad(currentInstruction[1].u.operand, regT1, regT0); >+ callOperationNoExceptionCheck(operationThrow, JSValueRegs(regT1, regT0)); >+ jumpToExceptionHandler(*vm()); >+} >+ >+void JIT::emit_op_to_number(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ >+ Jump isInt32 = branchIfInt32(regT1); >+ addSlowCase(branch32(AboveOrEqual, regT1, TrustedImm32(JSValue::LowestTag))); >+ isInt32.link(this); >+ >+ emitValueProfilingSite(); >+ if (src != dst) >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emit_op_to_string(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ >+ addSlowCase(branchIfNotCell(regT1)); >+ addSlowCase(branchIfNotString(regT0)); >+ >+ if (src != dst) >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emit_op_to_object(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitLoad(src, regT1, regT0); >+ >+ addSlowCase(branchIfNotCell(regT1)); >+ addSlowCase(branchIfNotObject(regT0)); >+ >+ emitValueProfilingSite(); >+ if (src != dst) >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emit_op_catch(Instruction* currentInstruction) >+{ >+ restoreCalleeSavesFromEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >+ >+ move(TrustedImmPtr(m_vm), regT3); >+ // operationThrow returns the callFrame for the handler. >+ load32(Address(regT3, VM::callFrameForCatchOffset()), callFrameRegister); >+ storePtr(TrustedImmPtr(nullptr), Address(regT3, VM::callFrameForCatchOffset())); >+ >+ addPtr(TrustedImm32(stackPointerOffsetFor(codeBlock()) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ >+ callOperationNoExceptionCheck(operationCheckIfExceptionIsUncatchableAndNotifyProfiler); >+ Jump isCatchableException = branchTest32(Zero, returnValueGPR); >+ jumpToExceptionHandler(*vm()); >+ isCatchableException.link(this); >+ >+ move(TrustedImmPtr(m_vm), regT3); >+ >+ // Now store the exception returned by operationThrow. 
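>+ // The Exception* cell is stored to operand 1 (tagged CellTag), and the unwrapped thrown
>+ // JSValue is then loaded from Exception::valueOffset() as separate tag/payload words and
>+ // stored to operand 2; VM::exceptionOffset() is cleared so no exception remains pending.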
>+ load32(Address(regT3, VM::exceptionOffset()), regT2); >+ move(TrustedImm32(JSValue::CellTag), regT1); >+ >+ store32(TrustedImm32(0), Address(regT3, VM::exceptionOffset())); >+ >+ unsigned exception = currentInstruction[1].u.operand; >+ emitStore(exception, regT1, regT2); >+ >+ load32(Address(regT2, Exception::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); >+ load32(Address(regT2, Exception::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); >+ >+ unsigned thrownValue = currentInstruction[2].u.operand; >+ emitStore(thrownValue, regT1, regT0); >+ >+#if ENABLE(DFG_JIT) >+ // FIXME: consider inline caching the process of doing OSR entry, including >+ // argument type proofs, storing locals to the buffer, etc >+ // https://bugs.webkit.org/show_bug.cgi?id=175598 >+ >+ ValueProfileAndOperandBuffer* buffer = static_cast<ValueProfileAndOperandBuffer*>(currentInstruction[3].u.pointer); >+ if (buffer || !shouldEmitProfiling()) >+ callOperation(operationTryOSREnterAtCatch, m_bytecodeOffset); >+ else >+ callOperation(operationTryOSREnterAtCatchAndValueProfile, m_bytecodeOffset); >+ auto skipOSREntry = branchTestPtr(Zero, returnValueGPR); >+ emitRestoreCalleeSaves(); >+ jump(returnValueGPR, NoPtrTag); >+ skipOSREntry.link(this); >+ if (buffer && shouldEmitProfiling()) { >+ buffer->forEach([&] (ValueProfileAndOperand& profile) { >+ JSValueRegs regs(regT1, regT0); >+ emitGetVirtualRegister(profile.m_operand, regs); >+ emitValueProfilingSite(profile.m_profile); >+ }); >+ } >+#endif // ENABLE(DFG_JIT) >+} >+ >+void JIT::emit_op_identity_with_profile(Instruction*) >+{ >+ // We don't need to do anything here... >+} >+ >+void JIT::emit_op_get_parent_scope(Instruction* currentInstruction) >+{ >+ int currentScope = currentInstruction[2].u.operand; >+ emitLoadPayload(currentScope, regT0); >+ loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0); >+ emitStoreCell(currentInstruction[1].u.operand, regT0); >+} >+ >+void JIT::emit_op_switch_imm(Instruction* currentInstruction) >+{ >+ size_t tableIndex = currentInstruction[1].u.operand; >+ unsigned defaultOffset = currentInstruction[2].u.operand; >+ unsigned scrutinee = currentInstruction[3].u.operand; >+ >+ // create jump table for switch destinations, track this switch statement. >+ SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex); >+ m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset, SwitchRecord::Immediate)); >+ jumpTable->ensureCTITable(); >+ >+ emitLoad(scrutinee, regT1, regT0); >+ callOperation(operationSwitchImmWithUnknownKeyType, JSValueRegs(regT1, regT0), tableIndex); >+ jump(returnValueGPR, NoPtrTag); >+} >+ >+void JIT::emit_op_switch_char(Instruction* currentInstruction) >+{ >+ size_t tableIndex = currentInstruction[1].u.operand; >+ unsigned defaultOffset = currentInstruction[2].u.operand; >+ unsigned scrutinee = currentInstruction[3].u.operand; >+ >+ // create jump table for switch destinations, track this switch statement. 
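>+ // The SwitchRecord is resolved when this code block is linked; at run time the operation
>+ // below maps the scrutinee to a machine-code destination (or the default offset) and the
>+ // generated code tail-jumps there through returnValueGPR.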
>+ SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex);
>+ m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset, SwitchRecord::Character));
>+ jumpTable->ensureCTITable();
>+
>+ emitLoad(scrutinee, regT1, regT0);
>+ callOperation(operationSwitchCharWithUnknownKeyType, JSValueRegs(regT1, regT0), tableIndex);
>+ jump(returnValueGPR, NoPtrTag);
>+}
>+
>+void JIT::emit_op_switch_string(Instruction* currentInstruction)
>+{
>+ size_t tableIndex = currentInstruction[1].u.operand;
>+ unsigned defaultOffset = currentInstruction[2].u.operand;
>+ unsigned scrutinee = currentInstruction[3].u.operand;
>+
>+ // create jump table for switch destinations, track this switch statement.
>+ StringJumpTable* jumpTable = &m_codeBlock->stringSwitchJumpTable(tableIndex);
>+ m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset));
>+
>+ emitLoad(scrutinee, regT1, regT0);
>+ callOperation(operationSwitchStringWithUnknownKeyType, JSValueRegs(regT1, regT0), tableIndex);
>+ jump(returnValueGPR, NoPtrTag);
>+}
>+
>+void JIT::emit_op_debug(Instruction* currentInstruction)
>+{
>+ load32(codeBlock()->debuggerRequestsAddress(), regT0);
>+ Jump noDebuggerRequests = branchTest32(Zero, regT0);
>+ callOperation(operationDebug, currentInstruction[1].u.operand);
>+ noDebuggerRequests.link(this);
>+}
>+
>+
>+void JIT::emit_op_enter(Instruction* currentInstruction)
>+{
>+ emitEnterOptimizationCheck();
>+
>+ // Even though JIT code doesn't use them, we initialize our constant
>+ // registers to zap stale pointers, to avoid unnecessarily prolonging
>+ // object lifetime and increasing GC pressure.
>+ for (int i = 0; i < m_codeBlock->m_numVars; ++i)
>+ emitStore(virtualRegisterForLocal(i).offset(), jsUndefined());
>+
>+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_enter);
>+ slowPathCall.call();
>+}
>+
>+void JIT::emit_op_get_scope(Instruction* currentInstruction)
>+{
>+ int dst = currentInstruction[1].u.operand;
>+ emitGetFromCallFrameHeaderPtr(CallFrameSlot::callee, regT0);
>+ loadPtr(Address(regT0, JSFunction::offsetOfScopeChain()), regT0);
>+ emitStoreCell(dst, regT0);
>+}
>+
>+void JIT::emit_op_create_this(Instruction* currentInstruction)
>+{
>+ int callee = currentInstruction[2].u.operand;
>+ WriteBarrierBase<JSCell>* cachedFunction = &currentInstruction[4].u.jsCell;
>+ RegisterID calleeReg = regT0;
>+ RegisterID rareDataReg = regT4;
>+ RegisterID resultReg = regT0;
>+ RegisterID allocatorReg = regT1;
>+ RegisterID structureReg = regT2;
>+ RegisterID cachedFunctionReg = regT4;
>+ RegisterID scratchReg = regT3;
>+
>+ emitLoadPayload(callee, calleeReg);
>+ addSlowCase(branchIfNotFunction(calleeReg));
>+ loadPtr(Address(calleeReg, JSFunction::offsetOfRareData()), rareDataReg);
>+ addSlowCase(branchTestPtr(Zero, rareDataReg));
>+ load32(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg);
>+ loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureReg);
>+
>+ loadPtr(cachedFunction, cachedFunctionReg);
>+ Jump hasSeenMultipleCallees = branchPtr(Equal, cachedFunctionReg, TrustedImmPtr(JSCell::seenMultipleCalleeObjects()));
>+ addSlowCase(branchPtr(NotEqual, calleeReg, cachedFunctionReg));
>+ hasSeenMultipleCallees.link(this);
>+
>+ JumpList slowCases;
>+ auto butterfly = TrustedImmPtr(nullptr);
>+ emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, scratchReg, slowCases);
>+ addSlowCase(slowCases);
>+ emitStoreCell(currentInstruction[1].u.operand, resultReg);
>+}
>+
>+void JIT::emit_op_to_this(Instruction* currentInstruction)
>+{
>+ WriteBarrierBase<Structure>* cachedStructure = &currentInstruction[2].u.structure;
>+ int thisRegister = currentInstruction[1].u.operand;
>+
>+ emitLoad(thisRegister, regT3, regT2);
>+
>+ addSlowCase(branchIfNotCell(regT3));
>+ addSlowCase(branchIfNotType(regT2, FinalObjectType));
>+ loadPtr(Address(regT2, JSCell::structureIDOffset()), regT0);
>+ loadPtr(cachedStructure, regT2);
>+ addSlowCase(branchPtr(NotEqual, regT0, regT2));
>+}
>+
>+void JIT::emit_op_check_tdz(Instruction* currentInstruction)
>+{
>+ emitLoadTag(currentInstruction[1].u.operand, regT0);
>+ addSlowCase(branchIfEmpty(regT0));
>+}
>+
>+void JIT::emit_op_has_structure_property(Instruction* currentInstruction)
>+{
>+ int dst = currentInstruction[1].u.operand;
>+ int base = currentInstruction[2].u.operand;
>+ int enumerator = currentInstruction[4].u.operand;
>+
>+ emitLoadPayload(base, regT0);
>+ emitJumpSlowCaseIfNotJSCell(base);
>+
>+ emitLoadPayload(enumerator, regT1);
>+
>+ load32(Address(regT0, JSCell::structureIDOffset()), regT0);
>+ addSlowCase(branch32(NotEqual, regT0, Address(regT1, JSPropertyNameEnumerator::cachedStructureIDOffset())));
>+
>+ move(TrustedImm32(1), regT0);
>+ emitStoreBool(dst, regT0);
>+}
>+
>+void JIT::privateCompileHasIndexedProperty(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode)
>+{
>+ Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex];
>+
>+ PatchableJump badType;
>+
>+ // FIXME: Add support for other types like TypedArrays and Arguments.
>+ // See https://bugs.webkit.org/show_bug.cgi?id=135033 and https://bugs.webkit.org/show_bug.cgi?id=135034.
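>+ // This builds the repatch stub for an op_has_indexed_property site once a concrete
>+ // JITArrayMode is known: the shape check (badType) and the bounds/hole checks (slowCases)
>+ // are linked back to the original slow-path return address, the fast path forces the
>+ // result to true, and the original badType jump is repatched to enter this stub.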
>+ JumpList slowCases = emitLoadForArrayMode(currentInstruction, arrayMode, badType); >+ move(TrustedImm32(1), regT0); >+ Jump done = jump(); >+ >+ LinkBuffer patchBuffer(*this, m_codeBlock); >+ >+ patchBuffer.link(badType, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ >+ patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >+ >+ byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >+ m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >+ "Baseline has_indexed_property stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >+ >+ MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >+ MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationHasIndexedPropertyGeneric)); >+} >+ >+void JIT::emit_op_has_indexed_property(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >+ >+ emitLoadPayload(base, regT0); >+ emitJumpSlowCaseIfNotJSCell(base); >+ >+ emitLoadPayload(property, regT1); >+ >+ // This is technically incorrect - we're zero-extending an int32. On the hot path this doesn't matter. >+ // We check the value as if it was a uint32 against the m_vectorLength - which will always fail if >+ // number was signed since m_vectorLength is always less than intmax (since the total allocation >+ // size is always less than 4Gb). As such zero extending will have been correct (and extending the value >+ // to 64-bits is necessary since it's used in the address calculation. We zero extend rather than sign >+ // extending since it makes it easier to re-tag the value in the slow case. >+ zeroExtend32ToPtr(regT1, regT1); >+ >+ emitArrayProfilingSiteWithCell(regT0, regT2, profile); >+ and32(TrustedImm32(IndexingShapeMask), regT2); >+ >+ JITArrayMode mode = chooseArrayMode(profile); >+ PatchableJump badType; >+ >+ // FIXME: Add support for other types like TypedArrays and Arguments. >+ // See https://bugs.webkit.org/show_bug.cgi?id=135033 and https://bugs.webkit.org/show_bug.cgi?id=135034. 
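>+ // chooseArrayMode() commits to the single indexing shape the ArrayProfile has observed;
>+ // emitLoadForArrayMode() guards on that shape (badType) and performs the bounds and hole
>+ // checks, so any other shape falls back to the ByVal slow path and can later be repatched
>+ // by privateCompileHasIndexedProperty() above.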
>+ JumpList slowCases = emitLoadForArrayMode(currentInstruction, mode, badType); >+ move(TrustedImm32(1), regT0); >+ >+ addSlowCase(badType); >+ addSlowCase(slowCases); >+ >+ Label done = label(); >+ >+ emitStoreBool(dst, regT0); >+ >+ Label nextHotPath = label(); >+ >+ m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, PatchableJump(), badType, mode, profile, done, nextHotPath)); >+} >+ >+void JIT::emitSlow_op_has_indexed_property(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >+ >+ Label slowPath = label(); >+ >+ emitLoad(base, regT1, regT0); >+ emitLoad(property, regT3, regT2); >+ Call call = callOperation(operationHasIndexedPropertyDefault, dst, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2), byValInfo); >+ >+ m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >+ m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >+ m_byValInstructionIndex++; >+} >+ >+void JIT::emit_op_get_direct_pname(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int index = currentInstruction[4].u.operand; >+ int enumerator = currentInstruction[5].u.operand; >+ >+ // Check that base is a cell >+ emitLoadPayload(base, regT0); >+ emitJumpSlowCaseIfNotJSCell(base); >+ >+ // Check the structure >+ emitLoadPayload(enumerator, regT1); >+ load32(Address(regT0, JSCell::structureIDOffset()), regT2); >+ addSlowCase(branch32(NotEqual, regT2, Address(regT1, JSPropertyNameEnumerator::cachedStructureIDOffset()))); >+ >+ // Compute the offset >+ emitLoadPayload(index, regT2); >+ // If index is less than the enumerator's cached inline storage, then it's an inline access >+ Jump outOfLineAccess = branch32(AboveOrEqual, regT2, Address(regT1, JSPropertyNameEnumerator::cachedInlineCapacityOffset())); >+ addPtr(TrustedImm32(JSObject::offsetOfInlineStorage()), regT0); >+ load32(BaseIndex(regT0, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); >+ load32(BaseIndex(regT0, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); >+ >+ Jump done = jump(); >+ >+ // Otherwise it's out of line >+ outOfLineAccess.link(this); >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0); >+ sub32(Address(regT1, JSPropertyNameEnumerator::cachedInlineCapacityOffset()), regT2); >+ neg32(regT2); >+ int32_t offsetOfFirstProperty = static_cast<int32_t>(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue); >+ load32(BaseIndex(regT0, regT2, TimesEight, offsetOfFirstProperty + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); >+ load32(BaseIndex(regT0, regT2, TimesEight, offsetOfFirstProperty + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); >+ >+ done.link(this); >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emit_op_enumerator_structure_pname(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int enumerator = currentInstruction[2].u.operand; >+ int index = currentInstruction[3].u.operand; >+ >+ emitLoadPayload(index, regT0); >+ emitLoadPayload(enumerator, regT1); >+ Jump inBounds = branch32(Below, regT0, Address(regT1, JSPropertyNameEnumerator::endStructurePropertyIndexOffset())); >+ >+ 
move(TrustedImm32(JSValue::NullTag), regT2); >+ move(TrustedImm32(0), regT0); >+ >+ Jump done = jump(); >+ inBounds.link(this); >+ >+ loadPtr(Address(regT1, JSPropertyNameEnumerator::cachedPropertyNamesVectorOffset()), regT1); >+ loadPtr(BaseIndex(regT1, regT0, timesPtr()), regT0); >+ move(TrustedImm32(JSValue::CellTag), regT2); >+ >+ done.link(this); >+ emitStore(dst, regT2, regT0); >+} >+ >+void JIT::emit_op_enumerator_generic_pname(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int enumerator = currentInstruction[2].u.operand; >+ int index = currentInstruction[3].u.operand; >+ >+ emitLoadPayload(index, regT0); >+ emitLoadPayload(enumerator, regT1); >+ Jump inBounds = branch32(Below, regT0, Address(regT1, JSPropertyNameEnumerator::endGenericPropertyIndexOffset())); >+ >+ move(TrustedImm32(JSValue::NullTag), regT2); >+ move(TrustedImm32(0), regT0); >+ >+ Jump done = jump(); >+ inBounds.link(this); >+ >+ loadPtr(Address(regT1, JSPropertyNameEnumerator::cachedPropertyNamesVectorOffset()), regT1); >+ loadPtr(BaseIndex(regT1, regT0, timesPtr()), regT0); >+ move(TrustedImm32(JSValue::CellTag), regT2); >+ >+ done.link(this); >+ emitStore(dst, regT2, regT0); >+} >+ >+void JIT::emit_op_profile_type(Instruction* currentInstruction) >+{ >+ TypeLocation* cachedTypeLocation = currentInstruction[2].u.location; >+ int valueToProfile = currentInstruction[1].u.operand; >+ >+ // Load payload in T0. Load tag in T3. >+ emitLoadPayload(valueToProfile, regT0); >+ emitLoadTag(valueToProfile, regT3); >+ >+ JumpList jumpToEnd; >+ >+ jumpToEnd.append(branchIfEmpty(regT3)); >+ >+ // Compile in a predictive type check, if possible, to see if we can skip writing to the log. >+ // These typechecks are inlined to match those of the 32-bit JSValue type checks. >+ if (cachedTypeLocation->m_lastSeenType == TypeUndefined) >+ jumpToEnd.append(branchIfUndefined(regT3)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeNull) >+ jumpToEnd.append(branchIfNull(regT3)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeBoolean) >+ jumpToEnd.append(branchIfBoolean(regT3, InvalidGPRReg)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeAnyInt) >+ jumpToEnd.append(branchIfInt32(regT3)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeNumber) { >+ jumpToEnd.append(branchIfNumber(JSValueRegs(regT3, regT0), regT1)); >+ } else if (cachedTypeLocation->m_lastSeenType == TypeString) { >+ Jump isNotCell = branchIfNotCell(regT3); >+ jumpToEnd.append(branchIfString(regT0)); >+ isNotCell.link(this); >+ } >+ >+ // Load the type profiling log into T2. >+ TypeProfilerLog* cachedTypeProfilerLog = m_vm->typeProfilerLog(); >+ move(TrustedImmPtr(cachedTypeProfilerLog), regT2); >+ >+ // Load the next log entry into T1. >+ loadPtr(Address(regT2, TypeProfilerLog::currentLogEntryOffset()), regT1); >+ >+ // Store the JSValue onto the log entry. >+ store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload))); >+ store32(regT3, Address(regT1, TypeProfilerLog::LogEntry::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag))); >+ >+ // Store the structureID of the cell if argument is a cell, otherwise, store 0 on the log entry. 
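>+ // A log entry records the value (tag + payload words), its StructureID (or 0 for
>+ // non-cells), and the TypeLocation; the current-entry pointer is then bumped and, once it
>+ // reaches logEndPtr(), operationProcessTypeProfilerLog() is called to drain the log.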
>+ Jump notCell = branchIfNotCell(regT3); >+ load32(Address(regT0, JSCell::structureIDOffset()), regT0); >+ store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); >+ Jump skipNotCell = jump(); >+ notCell.link(this); >+ store32(TrustedImm32(0), Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); >+ skipNotCell.link(this); >+ >+ // Store the typeLocation on the log entry. >+ move(TrustedImmPtr(cachedTypeLocation), regT0); >+ store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::locationOffset())); >+ >+ // Increment the current log entry. >+ addPtr(TrustedImm32(sizeof(TypeProfilerLog::LogEntry)), regT1); >+ store32(regT1, Address(regT2, TypeProfilerLog::currentLogEntryOffset())); >+ jumpToEnd.append(branchPtr(NotEqual, regT1, TrustedImmPtr(cachedTypeProfilerLog->logEndPtr()))); >+ // Clear the log if we're at the end of the log. >+ callOperation(operationProcessTypeProfilerLog); >+ >+ jumpToEnd.link(this); >+} >+ >+void JIT::emit_op_log_shadow_chicken_prologue(Instruction* currentInstruction) >+{ >+ updateTopCallFrame(); >+ static_assert(nonArgGPR0 != regT0 && nonArgGPR0 != regT2, "we will have problems if this is true."); >+ GPRReg shadowPacketReg = regT0; >+ GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register. >+ GPRReg scratch2Reg = regT2; >+ ensureShadowChickenPacket(*vm(), shadowPacketReg, scratch1Reg, scratch2Reg); >+ >+ scratch1Reg = regT4; >+ emitLoadPayload(currentInstruction[1].u.operand, regT3); >+ logShadowChickenProloguePacket(shadowPacketReg, scratch1Reg, regT3); >+} >+ >+void JIT::emit_op_log_shadow_chicken_tail(Instruction* currentInstruction) >+{ >+ updateTopCallFrame(); >+ static_assert(nonArgGPR0 != regT0 && nonArgGPR0 != regT2, "we will have problems if this is true."); >+ GPRReg shadowPacketReg = regT0; >+ GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register. >+ GPRReg scratch2Reg = regT2; >+ ensureShadowChickenPacket(*vm(), shadowPacketReg, scratch1Reg, scratch2Reg); >+ >+ emitLoadPayload(currentInstruction[1].u.operand, regT2); >+ emitLoadTag(currentInstruction[1].u.operand, regT1); >+ JSValueRegs thisRegs(regT1, regT2); >+ emitLoadPayload(currentInstruction[2].u.operand, regT3); >+ logShadowChickenTailPacket(shadowPacketReg, thisRegs, regT3, m_codeBlock, CallSiteIndex(currentInstruction)); >+} >+ >+void JIT::emit_compareAndJump(OpcodeID opcode, int op1, int op2, unsigned target, RelationalCondition condition) >+{ >+ JumpList notInt32Op1; >+ JumpList notInt32Op2; >+ >+ // Character less. 
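>+ // If either operand is a constant single-character string, the other operand is loaded,
>+ // checked to be a cell, and its first character is extracted with emitLoadCharacterString(),
>+ // turning the whole comparison into a 32-bit character compare. When the constant is the
>+ // left operand the condition is commute()d so the loaded value can stay on the left,
>+ // e.g. "const < x" is emitted as branch32(GreaterThan, x, const).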
>+ if (isOperandConstantChar(op1)) { >+ emitLoad(op2, regT1, regT0); >+ addSlowCase(branchIfNotCell(regT1)); >+ JumpList failures; >+ emitLoadCharacterString(regT0, regT0, failures); >+ addSlowCase(failures); >+ addJump(branch32(commute(condition), regT0, Imm32(asString(getConstantOperand(op1))->tryGetValue()[0])), target); >+ return; >+ } >+ if (isOperandConstantChar(op2)) { >+ emitLoad(op1, regT1, regT0); >+ addSlowCase(branchIfNotCell(regT1)); >+ JumpList failures; >+ emitLoadCharacterString(regT0, regT0, failures); >+ addSlowCase(failures); >+ addJump(branch32(condition, regT0, Imm32(asString(getConstantOperand(op2))->tryGetValue()[0])), target); >+ return; >+ } >+ if (isOperandConstantInt(op1)) { >+ emitLoad(op2, regT3, regT2); >+ notInt32Op2.append(branchIfNotInt32(regT3)); >+ addJump(branch32(commute(condition), regT2, Imm32(getConstantOperand(op1).asInt32())), target); >+ } else if (isOperandConstantInt(op2)) { >+ emitLoad(op1, regT1, regT0); >+ notInt32Op1.append(branchIfNotInt32(regT1)); >+ addJump(branch32(condition, regT0, Imm32(getConstantOperand(op2).asInt32())), target); >+ } else { >+ emitLoad2(op1, regT1, regT0, op2, regT3, regT2); >+ notInt32Op1.append(branchIfNotInt32(regT1)); >+ notInt32Op2.append(branchIfNotInt32(regT3)); >+ addJump(branch32(condition, regT0, regT2), target); >+ } >+ >+ if (!supportsFloatingPoint()) { >+ addSlowCase(notInt32Op1); >+ addSlowCase(notInt32Op2); >+ return; >+ } >+ Jump end = jump(); >+ >+ // Double less. >+ emitBinaryDoubleOp(opcode, target, op1, op2, OperandTypes(), notInt32Op1, notInt32Op2, !isOperandConstantInt(op1), isOperandConstantInt(op1) || !isOperandConstantInt(op2)); >+ end.link(this); >+} >+ >+void JIT::emit_compareUnsignedAndJump(int op1, int op2, unsigned target, RelationalCondition condition) >+{ >+ if (isOperandConstantInt(op1)) { >+ emitLoad(op2, regT3, regT2); >+ addJump(branch32(commute(condition), regT2, Imm32(getConstantOperand(op1).asInt32())), target); >+ } else if (isOperandConstantInt(op2)) { >+ emitLoad(op1, regT1, regT0); >+ addJump(branch32(condition, regT0, Imm32(getConstantOperand(op2).asInt32())), target); >+ } else { >+ emitLoad2(op1, regT1, regT0, op2, regT3, regT2); >+ addJump(branch32(condition, regT0, regT2), target); >+ } >+} >+ >+ >+void JIT::emit_compareUnsigned(int dst, int op1, int op2, RelationalCondition condition) >+{ >+ if (isOperandConstantInt(op1)) { >+ emitLoad(op2, regT3, regT2); >+ compare32(commute(condition), regT2, Imm32(getConstantOperand(op1).asInt32()), regT0); >+ } else if (isOperandConstantInt(op2)) { >+ emitLoad(op1, regT1, regT0); >+ compare32(condition, regT0, Imm32(getConstantOperand(op2).asInt32()), regT0); >+ } else { >+ emitLoad2(op1, regT1, regT0, op2, regT3, regT2); >+ compare32(condition, regT0, regT2, regT0); >+ } >+ emitStoreBool(dst, regT0); >+} >+ >+void JIT::emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondition, size_t (JIT_OPERATION *operation)(ExecState*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ emitLoad(op1, regT1, regT0); >+ emitLoad(op2, regT3, regT2); >+ callOperation(operation, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >+ emitJumpSlowToHot(branchTest32(invert ? 
Zero : NonZero, returnValueGPR), target); >+} >+ >+void JIT::emit_op_unsigned(Instruction* currentInstruction) >+{ >+ int result = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ >+ emitLoad(op1, regT1, regT0); >+ >+ addSlowCase(branchIfNotInt32(regT1)); >+ addSlowCase(branch32(LessThan, regT0, TrustedImm32(0))); >+ emitStoreInt32(result, regT0, result == op1); >+} >+ >+void JIT::emit_op_inc(Instruction* currentInstruction) >+{ >+ int srcDst = currentInstruction[1].u.operand; >+ >+ emitLoad(srcDst, regT1, regT0); >+ >+ addSlowCase(branchIfNotInt32(regT1)); >+ addSlowCase(branchAdd32(Overflow, TrustedImm32(1), regT0)); >+ emitStoreInt32(srcDst, regT0, true); >+} >+ >+void JIT::emit_op_dec(Instruction* currentInstruction) >+{ >+ int srcDst = currentInstruction[1].u.operand; >+ >+ emitLoad(srcDst, regT1, regT0); >+ >+ addSlowCase(branchIfNotInt32(regT1)); >+ addSlowCase(branchSub32(Overflow, TrustedImm32(1), regT0)); >+ emitStoreInt32(srcDst, regT0, true); >+} >+ >+void JIT::emitBinaryDoubleOp(OpcodeID opcodeID, int dst, int op1, int op2, OperandTypes types, JumpList& notInt32Op1, JumpList& notInt32Op2, bool op1IsInRegisters, bool op2IsInRegisters) >+{ >+ JumpList end; >+ >+ if (!notInt32Op1.empty()) { >+ // Double case 1: Op1 is not int32; Op2 is unknown. >+ notInt32Op1.link(this); >+ >+ ASSERT(op1IsInRegisters); >+ >+ // Verify Op1 is double. >+ if (!types.first().definitelyIsNumber()) >+ addSlowCase(branch32(Above, regT1, TrustedImm32(JSValue::LowestTag))); >+ >+ if (!op2IsInRegisters) >+ emitLoad(op2, regT3, regT2); >+ >+ Jump doubleOp2 = branch32(Below, regT3, TrustedImm32(JSValue::LowestTag)); >+ >+ if (!types.second().definitelyIsNumber()) >+ addSlowCase(branchIfNotInt32(regT3)); >+ >+ convertInt32ToDouble(regT2, fpRegT0); >+ Jump doTheMath = jump(); >+ >+ // Load Op2 as double into double register. >+ doubleOp2.link(this); >+ emitLoadDouble(op2, fpRegT0); >+ >+ // Do the math. >+ doTheMath.link(this); >+ switch (opcodeID) { >+ case op_jless: >+ emitLoadDouble(op1, fpRegT2); >+ addJump(branchDouble(DoubleLessThan, fpRegT2, fpRegT0), dst); >+ break; >+ case op_jlesseq: >+ emitLoadDouble(op1, fpRegT2); >+ addJump(branchDouble(DoubleLessThanOrEqual, fpRegT2, fpRegT0), dst); >+ break; >+ case op_jgreater: >+ emitLoadDouble(op1, fpRegT2); >+ addJump(branchDouble(DoubleGreaterThan, fpRegT2, fpRegT0), dst); >+ break; >+ case op_jgreatereq: >+ emitLoadDouble(op1, fpRegT2); >+ addJump(branchDouble(DoubleGreaterThanOrEqual, fpRegT2, fpRegT0), dst); >+ break; >+ case op_jnless: >+ emitLoadDouble(op1, fpRegT2); >+ addJump(branchDouble(DoubleLessThanOrEqualOrUnordered, fpRegT0, fpRegT2), dst); >+ break; >+ case op_jnlesseq: >+ emitLoadDouble(op1, fpRegT2); >+ addJump(branchDouble(DoubleLessThanOrUnordered, fpRegT0, fpRegT2), dst); >+ break; >+ case op_jngreater: >+ emitLoadDouble(op1, fpRegT2); >+ addJump(branchDouble(DoubleGreaterThanOrEqualOrUnordered, fpRegT0, fpRegT2), dst); >+ break; >+ case op_jngreatereq: >+ emitLoadDouble(op1, fpRegT2); >+ addJump(branchDouble(DoubleGreaterThanOrUnordered, fpRegT0, fpRegT2), dst); >+ break; >+ default: >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ >+ if (!notInt32Op2.empty()) >+ end.append(jump()); >+ } >+ >+ if (!notInt32Op2.empty()) { >+ // Double case 2: Op1 is int32; Op2 is not int32. >+ notInt32Op2.link(this); >+ >+ ASSERT(op2IsInRegisters); >+ >+ if (!op1IsInRegisters) >+ emitLoadPayload(op1, regT0); >+ >+ convertInt32ToDouble(regT0, fpRegT0); >+ >+ // Verify op2 is double. 
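>+ // On JSVALUE32_64 every non-double value carries a tag at or above JSValue::LowestTag,
>+ // so once the int32 case has been excluded, a tag strictly above LowestTag cannot be a
>+ // boxed double and is sent to the slow path.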
>+ if (!types.second().definitelyIsNumber()) >+ addSlowCase(branch32(Above, regT3, TrustedImm32(JSValue::LowestTag))); >+ >+ // Do the math. >+ switch (opcodeID) { >+ case op_jless: >+ emitLoadDouble(op2, fpRegT1); >+ addJump(branchDouble(DoubleLessThan, fpRegT0, fpRegT1), dst); >+ break; >+ case op_jlesseq: >+ emitLoadDouble(op2, fpRegT1); >+ addJump(branchDouble(DoubleLessThanOrEqual, fpRegT0, fpRegT1), dst); >+ break; >+ case op_jgreater: >+ emitLoadDouble(op2, fpRegT1); >+ addJump(branchDouble(DoubleGreaterThan, fpRegT0, fpRegT1), dst); >+ break; >+ case op_jgreatereq: >+ emitLoadDouble(op2, fpRegT1); >+ addJump(branchDouble(DoubleGreaterThanOrEqual, fpRegT0, fpRegT1), dst); >+ break; >+ case op_jnless: >+ emitLoadDouble(op2, fpRegT1); >+ addJump(branchDouble(DoubleLessThanOrEqualOrUnordered, fpRegT1, fpRegT0), dst); >+ break; >+ case op_jnlesseq: >+ emitLoadDouble(op2, fpRegT1); >+ addJump(branchDouble(DoubleLessThanOrUnordered, fpRegT1, fpRegT0), dst); >+ break; >+ case op_jngreater: >+ emitLoadDouble(op2, fpRegT1); >+ addJump(branchDouble(DoubleGreaterThanOrEqualOrUnordered, fpRegT1, fpRegT0), dst); >+ break; >+ case op_jngreatereq: >+ emitLoadDouble(op2, fpRegT1); >+ addJump(branchDouble(DoubleGreaterThanOrUnordered, fpRegT1, fpRegT0), dst); >+ break; >+ default: >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ } >+ >+ end.link(this); >+} >+ >+// Mod (%) >+ >+/* ------------------------------ BEGIN: OP_MOD ------------------------------ */ >+ >+void JIT::emit_op_mod(Instruction* currentInstruction) >+{ >+#if CPU(X86) >+ int dst = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ >+ // Make sure registers are correct for x86 IDIV instructions. >+ ASSERT(regT0 == X86Registers::eax); >+ ASSERT(regT1 == X86Registers::edx); >+ ASSERT(regT2 == X86Registers::ecx); >+ ASSERT(regT3 == X86Registers::ebx); >+ >+ emitLoad2(op1, regT0, regT3, op2, regT1, regT2); >+ addSlowCase(branchIfNotInt32(regT1)); >+ addSlowCase(branchIfNotInt32(regT0)); >+ >+ move(regT3, regT0); >+ addSlowCase(branchTest32(Zero, regT2)); >+ Jump denominatorNotNeg1 = branch32(NotEqual, regT2, TrustedImm32(-1)); >+ addSlowCase(branch32(Equal, regT0, TrustedImm32(-2147483647-1))); >+ denominatorNotNeg1.link(this); >+ x86ConvertToDoubleWord32(); >+ x86Div32(regT2); >+ Jump numeratorPositive = branch32(GreaterThanOrEqual, regT3, TrustedImm32(0)); >+ addSlowCase(branchTest32(Zero, regT1)); >+ numeratorPositive.link(this); >+ emitStoreInt32(dst, regT1, (op1 == dst || op2 == dst)); >+#else >+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_mod); >+ slowPathCall.call(); >+#endif >+} >+ >+void JIT::emitSlow_op_mod(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+#if CPU(X86) >+ linkAllSlowCases(iter); >+ >+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_mod); >+ slowPathCall.call(); >+#else >+ UNUSED_PARAM(currentInstruction); >+ UNUSED_PARAM(iter); >+ // We would have really useful assertions here if it wasn't for the compiler's >+ // insistence on attribute noreturn. 
>+ // RELEASE_ASSERT_NOT_REACHED(); >+#endif >+} >+ >+/* ------------------------------ END: OP_MOD ------------------------------ */ >+ >+void JIT::emit_op_put_getter_by_id(Instruction* currentInstruction) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ int options = currentInstruction[3].u.operand; >+ int getter = currentInstruction[4].u.operand; >+ >+ emitLoadPayload(base, regT1); >+ emitLoadPayload(getter, regT3); >+ callOperation(operationPutGetterById, regT1, m_codeBlock->identifier(property).impl(), options, regT3); >+} >+ >+void JIT::emit_op_put_setter_by_id(Instruction* currentInstruction) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ int options = currentInstruction[3].u.operand; >+ int setter = currentInstruction[4].u.operand; >+ >+ emitLoadPayload(base, regT1); >+ emitLoadPayload(setter, regT3); >+ callOperation(operationPutSetterById, regT1, m_codeBlock->identifier(property).impl(), options, regT3); >+} >+ >+void JIT::emit_op_put_getter_setter_by_id(Instruction* currentInstruction) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ int attribute = currentInstruction[3].u.operand; >+ int getter = currentInstruction[4].u.operand; >+ int setter = currentInstruction[5].u.operand; >+ >+ emitLoadPayload(base, regT1); >+ emitLoadPayload(getter, regT3); >+ emitLoadPayload(setter, regT4); >+ callOperation(operationPutGetterSetter, regT1, m_codeBlock->identifier(property).impl(), attribute, regT3, regT4); >+} >+ >+void JIT::emit_op_put_getter_by_val(Instruction* currentInstruction) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ int32_t attributes = currentInstruction[3].u.operand; >+ int getter = currentInstruction[4].u.operand; >+ >+ emitLoadPayload(base, regT2); >+ emitLoad(property, regT1, regT0); >+ emitLoadPayload(getter, regT3); >+ callOperation(operationPutGetterByVal, regT2, JSValueRegs(regT1, regT0), attributes, regT3); >+} >+ >+void JIT::emit_op_put_setter_by_val(Instruction* currentInstruction) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ int32_t attributes = currentInstruction[3].u.operand; >+ int getter = currentInstruction[4].u.operand; >+ >+ emitLoadPayload(base, regT2); >+ emitLoad(property, regT1, regT0); >+ emitLoadPayload(getter, regT3); >+ callOperation(operationPutSetterByVal, regT2, JSValueRegs(regT1, regT0), attributes, regT3); >+} >+ >+void JIT::emit_op_del_by_id(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ emitLoad(base, regT1, regT0); >+ callOperation(operationDeleteByIdJSResult, dst, JSValueRegs(regT1, regT0), m_codeBlock->identifier(property).impl()); >+} >+ >+void JIT::emit_op_del_by_val(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ emitLoad2(base, regT1, regT0, property, regT3, regT2); >+ callOperation(operationDeleteByValJSResult, dst, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >+} >+ >+void JIT::emit_op_get_by_val(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = 
currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >+ >+ emitLoad2(base, regT1, regT0, property, regT3, regT2); >+ >+ emitJumpSlowCaseIfNotJSCell(base, regT1); >+ PatchableJump notIndex = patchableBranch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag)); >+ addSlowCase(notIndex); >+ emitArrayProfilingSiteWithCell(regT0, regT1, profile); >+ and32(TrustedImm32(IndexingShapeMask), regT1); >+ >+ PatchableJump badType; >+ JumpList slowCases; >+ >+ JITArrayMode mode = chooseArrayMode(profile); >+ switch (mode) { >+ case JITInt32: >+ slowCases = emitInt32GetByVal(currentInstruction, badType); >+ break; >+ case JITDouble: >+ slowCases = emitDoubleGetByVal(currentInstruction, badType); >+ break; >+ case JITContiguous: >+ slowCases = emitContiguousGetByVal(currentInstruction, badType); >+ break; >+ case JITArrayStorage: >+ slowCases = emitArrayStorageGetByVal(currentInstruction, badType); >+ break; >+ default: >+ CRASH(); >+ } >+ >+ addSlowCase(badType); >+ addSlowCase(slowCases); >+ >+ Label done = label(); >+ >+ if (!ASSERT_DISABLED) { >+ Jump resultOK = branchIfNotEmpty(regT1); >+ abortWithReason(JITGetByValResultIsNotEmpty); >+ resultOK.link(this); >+ } >+ >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+ >+ Label nextHotPath = label(); >+ >+ m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, notIndex, badType, mode, profile, done, nextHotPath)); >+} >+ >+JIT::JumpList JIT::emitContiguousLoad(Instruction*, PatchableJump& badType, IndexingType expectedShape) >+{ >+ JumpList slowCases; >+ >+ badType = patchableBranch32(NotEqual, regT1, TrustedImm32(expectedShape)); >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >+ slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfPublicLength()))); >+ load32(BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); // tag >+ load32(BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); // payload >+ slowCases.append(branchIfEmpty(regT1)); >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitDoubleLoad(Instruction*, PatchableJump& badType) >+{ >+ JumpList slowCases; >+ >+ badType = patchableBranch32(NotEqual, regT1, TrustedImm32(DoubleShape)); >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >+ slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfPublicLength()))); >+ loadDouble(BaseIndex(regT3, regT2, TimesEight), fpRegT0); >+ slowCases.append(branchDouble(DoubleNotEqualOrUnordered, fpRegT0, fpRegT0)); >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitArrayStorageLoad(Instruction*, PatchableJump& badType) >+{ >+ JumpList slowCases; >+ >+ add32(TrustedImm32(-ArrayStorageShape), regT1, regT3); >+ badType = patchableBranch32(Above, regT3, TrustedImm32(SlowPutArrayStorageShape - ArrayStorageShape)); >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >+ slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, ArrayStorage::vectorLengthOffset()))); >+ load32(BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); // tag >+ load32(BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); // payload >+ slowCases.append(branchIfEmpty(regT1)); >+ >+ return slowCases; >+} >+ >+JITGetByIdGenerator 
JIT::emitGetByValWithCachedId(ByValInfo* byValInfo, Instruction* currentInstruction, const Identifier& propertyName, Jump& fastDoneCase, Jump& slowDoneCase, JumpList& slowCases) >+{ >+ int dst = currentInstruction[1].u.operand; >+ >+ // base: tag(regT1), payload(regT0) >+ // property: tag(regT3), payload(regT2) >+ // scratch: regT4 >+ >+ slowCases.append(branchIfNotCell(regT3)); >+ emitByValIdentifierCheck(byValInfo, regT2, regT4, propertyName, slowCases); >+ >+ JITGetByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >+ propertyName.impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get); >+ gen.generateFastPath(*this); >+ >+ fastDoneCase = jump(); >+ >+ Label coldPathBegin = label(); >+ gen.slowPathJump().link(this); >+ >+ Call call = callOperationWithProfile(operationGetByIdOptimize, dst, gen.stubInfo(), JSValueRegs(regT1, regT0), propertyName.impl()); >+ gen.reportSlowPathCall(coldPathBegin, call); >+ slowDoneCase = jump(); >+ >+ return gen; >+} >+ >+void JIT::emitSlow_op_get_by_val(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >+ >+ linkSlowCaseIfNotJSCell(iter, base); // base cell check >+ linkSlowCase(iter); // property int32 check >+ >+ Jump nonCell = jump(); >+ linkSlowCase(iter); // base array check >+ Jump notString = branchIfNotString(regT0); >+ emitNakedCall(CodeLocationLabel<NoPtrTag>(m_vm->getCTIStub(stringGetByValGenerator).retaggedCode<NoPtrTag>())); >+ Jump failed = branchTestPtr(Zero, regT0); >+ emitStoreCell(dst, regT0); >+ emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_get_by_val)); >+ failed.link(this); >+ notString.link(this); >+ nonCell.link(this); >+ >+ linkSlowCase(iter); // vector length check >+ linkSlowCase(iter); // empty value >+ >+ Label slowPath = label(); >+ >+ emitLoad(base, regT1, regT0); >+ emitLoad(property, regT3, regT2); >+ Call call = callOperation(operationGetByValOptimize, dst, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2), byValInfo); >+ >+ m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >+ m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >+ m_byValInstructionIndex++; >+ >+ emitValueProfilingSite(); >+} >+ >+void JIT::emit_op_put_by_val(Instruction* currentInstruction) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >+ >+ emitLoad2(base, regT1, regT0, property, regT3, regT2); >+ >+ emitJumpSlowCaseIfNotJSCell(base, regT1); >+ PatchableJump notIndex = patchableBranch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag)); >+ addSlowCase(notIndex); >+ emitArrayProfilingSiteWithCell(regT0, regT1, profile); >+ and32(TrustedImm32(IndexingShapeMask), regT1); >+ >+ PatchableJump badType; >+ JumpList slowCases; >+ >+ JITArrayMode mode = chooseArrayMode(profile); >+ switch (mode) { >+ case JITInt32: >+ slowCases = emitInt32PutByVal(currentInstruction, badType); >+ break; >+ case JITDouble: >+ slowCases = emitDoublePutByVal(currentInstruction, badType); >+ break; >+ case JITContiguous: >+ slowCases = emitContiguousPutByVal(currentInstruction, badType); >+ break; >+ 
case JITArrayStorage: >+ slowCases = emitArrayStoragePutByVal(currentInstruction, badType); >+ break; >+ default: >+ CRASH(); >+ break; >+ } >+ >+ addSlowCase(badType); >+ addSlowCase(slowCases); >+ >+ Label done = label(); >+ >+ m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, notIndex, badType, mode, profile, done, done)); >+} >+ >+JIT::JumpList JIT::emitGenericContiguousPutByVal(Instruction* currentInstruction, PatchableJump& badType, IndexingType indexingShape) >+{ >+ int base = currentInstruction[1].u.operand; >+ int value = currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ >+ JumpList slowCases; >+ >+ badType = patchableBranch32(NotEqual, regT1, TrustedImm32(ContiguousShape)); >+ >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >+ Jump outOfBounds = branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfPublicLength())); >+ >+ Label storeResult = label(); >+ emitLoad(value, regT1, regT0); >+ switch (indexingShape) { >+ case Int32Shape: >+ slowCases.append(branchIfNotInt32(regT1)); >+ store32(regT0, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload))); >+ store32(regT1, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag))); >+ break; >+ case ContiguousShape: >+ store32(regT0, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload))); >+ store32(regT1, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag))); >+ emitLoad(base, regT2, regT3); >+ emitWriteBarrier(base, value, ShouldFilterValue); >+ break; >+ case DoubleShape: { >+ Jump notInt = branchIfNotInt32(regT1); >+ convertInt32ToDouble(regT0, fpRegT0); >+ Jump ready = jump(); >+ notInt.link(this); >+ moveIntsToDouble(regT0, regT1, fpRegT0, fpRegT1); >+ slowCases.append(branchDouble(DoubleNotEqualOrUnordered, fpRegT0, fpRegT0)); >+ ready.link(this); >+ storeDouble(fpRegT0, BaseIndex(regT3, regT2, TimesEight)); >+ break; >+ } >+ default: >+ CRASH(); >+ break; >+ } >+ >+ Jump done = jump(); >+ >+ outOfBounds.link(this); >+ slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfVectorLength()))); >+ >+ emitArrayProfileStoreToHoleSpecialCase(profile); >+ >+ add32(TrustedImm32(1), regT2, regT1); >+ store32(regT1, Address(regT3, Butterfly::offsetOfPublicLength())); >+ jump().linkTo(storeResult, this); >+ >+ done.link(this); >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitArrayStoragePutByVal(Instruction* currentInstruction, PatchableJump& badType) >+{ >+ int base = currentInstruction[1].u.operand; >+ int value = currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ >+ JumpList slowCases; >+ >+ badType = patchableBranch32(NotEqual, regT1, TrustedImm32(ArrayStorageShape)); >+ >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >+ slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, ArrayStorage::vectorLengthOffset()))); >+ >+ Jump empty = branch32(Equal, BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), TrustedImm32(JSValue::EmptyValueTag)); >+ >+ Label storeResult(this); >+ emitLoad(value, regT1, regT0); >+ store32(regT0, BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload))); // payload >+ store32(regT1, BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag))); // tag >+ Jump 
end = jump(); >+ >+ empty.link(this); >+ emitArrayProfileStoreToHoleSpecialCase(profile); >+ add32(TrustedImm32(1), Address(regT3, OBJECT_OFFSETOF(ArrayStorage, m_numValuesInVector))); >+ branch32(Below, regT2, Address(regT3, ArrayStorage::lengthOffset())).linkTo(storeResult, this); >+ >+ add32(TrustedImm32(1), regT2, regT0); >+ store32(regT0, Address(regT3, ArrayStorage::lengthOffset())); >+ jump().linkTo(storeResult, this); >+ >+ end.link(this); >+ >+ emitWriteBarrier(base, value, ShouldFilterValue); >+ >+ return slowCases; >+} >+ >+JITPutByIdGenerator JIT::emitPutByValWithCachedId(ByValInfo* byValInfo, Instruction* currentInstruction, PutKind putKind, const Identifier& propertyName, JumpList& doneCases, JumpList& slowCases) >+{ >+ // base: tag(regT1), payload(regT0) >+ // property: tag(regT3), payload(regT2) >+ >+ int base = currentInstruction[1].u.operand; >+ int value = currentInstruction[3].u.operand; >+ >+ slowCases.append(branchIfNotCell(regT3)); >+ emitByValIdentifierCheck(byValInfo, regT2, regT2, propertyName, slowCases); >+ >+ // Write barrier breaks the registers. So after issuing the write barrier, >+ // reload the registers. >+ emitWriteBarrier(base, value, ShouldFilterBase); >+ emitLoadPayload(base, regT0); >+ emitLoad(value, regT3, regT2); >+ >+ JITPutByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >+ JSValueRegs::payloadOnly(regT0), JSValueRegs(regT3, regT2), regT1, m_codeBlock->ecmaMode(), putKind); >+ gen.generateFastPath(*this); >+ doneCases.append(jump()); >+ >+ Label coldPathBegin = label(); >+ gen.slowPathJump().link(this); >+ >+ // JITPutByIdGenerator only preserve the value and the base's payload, we have to reload the tag. >+ emitLoadTag(base, regT1); >+ >+ Call call = callOperation(gen.slowPathFunction(), gen.stubInfo(), JSValueRegs(regT3, regT2), JSValueRegs(regT1, regT0), propertyName.impl()); >+ gen.reportSlowPathCall(coldPathBegin, call); >+ doneCases.append(jump()); >+ >+ return gen; >+} >+ >+void JIT::emitSlow_op_put_by_val(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ int value = currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >+ >+ linkSlowCaseIfNotJSCell(iter, base); // base cell check >+ linkSlowCase(iter); // property int32 check >+ linkSlowCase(iter); // base not array check >+ >+ JITArrayMode mode = chooseArrayMode(profile); >+ switch (mode) { >+ case JITInt32: >+ case JITDouble: >+ linkSlowCase(iter); // value type check >+ break; >+ default: >+ break; >+ } >+ >+ Jump skipProfiling = jump(); >+ linkSlowCase(iter); // out of bounds >+ emitArrayProfileOutOfBoundsSpecialCase(profile); >+ skipProfiling.link(this); >+ >+ Label slowPath = label(); >+ >+ bool isDirect = Interpreter::getOpcodeID(currentInstruction->u.opcode) == op_put_by_val_direct; >+ >+#if CPU(X86) >+ // FIXME: We only have 5 temp registers, but need 6 to make this call, therefore we materialize >+ // our own call. When we finish moving JSC to the C call stack, we'll get another register so >+ // we can use the normal case. 
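>+ // The call needs base, property, and value as full JSValues (three tag/payload register
>+ // pairs) plus the byValInfo, which exceeds the five temps available here, so every
>+ // argument is poke()d onto the stack in call order and the operation is invoked manually.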
>+ unsigned pokeOffset = 0; >+ poke(GPRInfo::callFrameRegister, pokeOffset++); >+ emitLoad(base, regT0, regT1); >+ poke(regT1, pokeOffset++); >+ poke(regT0, pokeOffset++); >+ emitLoad(property, regT0, regT1); >+ poke(regT1, pokeOffset++); >+ poke(regT0, pokeOffset++); >+ emitLoad(value, regT0, regT1); >+ poke(regT1, pokeOffset++); >+ poke(regT0, pokeOffset++); >+ poke(TrustedImmPtr(byValInfo), pokeOffset++); >+ Call call = appendCallWithExceptionCheck(isDirect ? operationDirectPutByValOptimize : operationPutByValOptimize); >+#else >+ // The register selection below is chosen to reduce register swapping on ARM. >+ // Swapping shouldn't happen on other platforms. >+ emitLoad(base, regT2, regT1); >+ emitLoad(property, regT3, regT0); >+ emitLoad(value, regT5, regT4); >+ Call call = callOperation(isDirect ? operationDirectPutByValOptimize : operationPutByValOptimize, JSValueRegs(regT2, regT1), JSValueRegs(regT3, regT0), JSValueRegs(regT5, regT4), byValInfo); >+#endif >+ >+ m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >+ m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >+ m_byValInstructionIndex++; >+} >+ >+void JIT::emit_op_try_get_by_id(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ emitLoad(base, regT1, regT0); >+ emitJumpSlowCaseIfNotJSCell(base, regT1); >+ >+ JITGetByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::TryGet); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_getByIds.append(gen); >+ >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emitSlow_op_try_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperation(operationTryGetByIdOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+ >+void JIT::emit_op_get_by_id_direct(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ emitLoad(base, regT1, regT0); >+ emitJumpSlowCaseIfNotJSCell(base, regT1); >+ >+ JITGetByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetDirect); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_getByIds.append(gen); >+ >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emitSlow_op_get_by_id_direct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ 
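>+ // Slow paths consume the JITGetByIdGenerators in the same order the fast paths appended
>+ // them, so m_getByIdIndex advances by one generator per get_by_id-style slow case.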
>+ JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperationWithProfile(operationGetByIdDirectOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+ >+void JIT::emit_op_get_by_id(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ emitLoad(base, regT1, regT0); >+ emitJumpSlowCaseIfNotJSCell(base, regT1); >+ >+ if (*ident == m_vm->propertyNames->length && shouldEmitProfiling()) >+ emitArrayProfilingSiteForBytecodeIndexWithCell(regT0, regT2, m_bytecodeOffset); >+ >+ JITGetByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_getByIds.append(gen); >+ >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emitSlow_op_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperationWithProfile(operationGetByIdOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emit_op_get_by_id_with_this(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int thisVReg = currentInstruction[3].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[4].u.operand)); >+ >+ emitLoad(base, regT1, regT0); >+ emitLoad(thisVReg, regT4, regT3); >+ emitJumpSlowCaseIfNotJSCell(base, regT1); >+ emitJumpSlowCaseIfNotJSCell(thisVReg, regT4); >+ >+ JITGetByIdWithThisGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs(regT1, regT0), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT4, regT3), AccessType::GetWithThis); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_getByIdsWithThis.append(gen); >+ >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emitSlow_op_get_by_id_with_this(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[4].u.operand)); >+ >+ JITGetByIdWithThisGenerator& gen = m_getByIdsWithThis[m_getByIdWithThisIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperationWithProfile(operationGetByIdWithThisOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), JSValueRegs(regT4, regT3), ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emit_op_put_by_id(Instruction* currentInstruction) >+{ >+ // In order to be able to patch both the Structure, and the object offset, we store one pointer, >+ 
// to just after the arguments have been loaded into registers 'hotPathBegin', and we generate code >+ // such that the Structure & offset are always at the same distance from this. >+ >+ int base = currentInstruction[1].u.operand; >+ int value = currentInstruction[3].u.operand; >+ int direct = currentInstruction[8].u.putByIdFlags & PutByIdIsDirect; >+ >+ emitLoad2(base, regT1, regT0, value, regT3, regT2); >+ >+ emitJumpSlowCaseIfNotJSCell(base, regT1); >+ >+ JITPutByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >+ JSValueRegs::payloadOnly(regT0), JSValueRegs(regT3, regT2), >+ regT1, m_codeBlock->ecmaMode(), direct ? Direct : NotDirect); >+ >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ >+ emitWriteBarrier(base, value, ShouldFilterBase); >+ >+ m_putByIds.append(gen); >+} >+ >+void JIT::emitSlow_op_put_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int base = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[2].u.operand)); >+ >+ Label coldPathBegin(this); >+ >+ // JITPutByIdGenerator only preserve the value and the base's payload, we have to reload the tag. >+ emitLoadTag(base, regT1); >+ >+ JITPutByIdGenerator& gen = m_putByIds[m_putByIdIndex++]; >+ >+ Call call = callOperation( >+ gen.slowPathFunction(), gen.stubInfo(), JSValueRegs(regT3, regT2), JSValueRegs(regT1, regT0), ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emit_op_in_by_id(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ emitLoad(base, regT1, regT0); >+ emitJumpSlowCaseIfNotJSCell(base, regT1); >+ >+ JITInByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0)); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_inByIds.append(gen); >+ >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emitSlow_op_in_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ JITInByIdGenerator& gen = m_inByIds[m_inByIdIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperation(operationInByIdOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emitVarInjectionCheck(bool needsVarInjectionChecks) >+{ >+ if (!needsVarInjectionChecks) >+ return; >+ addSlowCase(branch8(Equal, AbsoluteAddress(m_codeBlock->globalObject()->varInjectionWatchpoint()->addressOfState()), TrustedImm32(IsInvalidated))); >+} >+ >+void JIT::emitResolveClosure(int dst, int scope, bool needsVarInjectionChecks, unsigned depth) >+{ >+ emitVarInjectionCheck(needsVarInjectionChecks); >+ move(TrustedImm32(JSValue::CellTag), regT1); >+ emitLoadPayload(scope, regT0); >+ for (unsigned i = 0; i < depth; ++i) >+ loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0); >+ emitStore(dst, regT1, regT0); >+} >+ >+void 
JIT::emit_op_resolve_scope(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int scope = currentInstruction[2].u.operand; >+ ResolveType resolveType = static_cast<ResolveType>(currentInstruction[4].u.operand); >+ unsigned depth = currentInstruction[5].u.operand; >+ auto emitCode = [&] (ResolveType resolveType) { >+ switch (resolveType) { >+ case GlobalProperty: >+ case GlobalVar: >+ case GlobalLexicalVar: >+ case GlobalPropertyWithVarInjectionChecks: >+ case GlobalVarWithVarInjectionChecks: >+ case GlobalLexicalVarWithVarInjectionChecks: { >+ JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_codeBlock); >+ RELEASE_ASSERT(constantScope); >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ move(TrustedImm32(JSValue::CellTag), regT1); >+ move(TrustedImmPtr(constantScope), regT0); >+ emitStore(dst, regT1, regT0); >+ break; >+ } >+ case ClosureVar: >+ case ClosureVarWithVarInjectionChecks: >+ emitResolveClosure(dst, scope, needsVarInjectionChecks(resolveType), depth); >+ break; >+ case ModuleVar: >+ move(TrustedImm32(JSValue::CellTag), regT1); >+ move(TrustedImmPtr(currentInstruction[6].u.jsCell.get()), regT0); >+ emitStore(dst, regT1, regT0); >+ break; >+ case Dynamic: >+ addSlowCase(jump()); >+ break; >+ case LocalClosureVar: >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ }; >+ switch (resolveType) { >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: { >+ JumpList skipToEnd; >+ load32(&currentInstruction[4], regT0); >+ >+ Jump notGlobalProperty = branch32(NotEqual, regT0, TrustedImm32(GlobalProperty)); >+ emitCode(GlobalProperty); >+ skipToEnd.append(jump()); >+ notGlobalProperty.link(this); >+ >+ Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks)); >+ emitCode(GlobalPropertyWithVarInjectionChecks); >+ skipToEnd.append(jump()); >+ notGlobalPropertyWithVarInjections.link(this); >+ >+ Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar)); >+ emitCode(GlobalLexicalVar); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVar.link(this); >+ >+ Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks)); >+ emitCode(GlobalLexicalVarWithVarInjectionChecks); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVarWithVarInjections.link(this); >+ >+ addSlowCase(jump()); >+ skipToEnd.link(this); >+ break; >+ } >+ >+ default: >+ emitCode(resolveType); >+ break; >+ } >+} >+
>+void JIT::emitLoadWithStructureCheck(int scope, Structure** structureSlot) >+{ >+ emitLoad(scope, regT1, regT0); >+ loadPtr(structureSlot, regT2); >+ addSlowCase(branchPtr(NotEqual, Address(regT0, JSCell::structureIDOffset()), regT2)); >+} >+
>+void JIT::emitGetVarFromPointer(JSValue* operand, GPRReg tag, GPRReg payload) >+{ >+ uintptr_t rawAddress = bitwise_cast<uintptr_t>(operand); >+ load32(bitwise_cast<void*>(rawAddress + TagOffset), tag); >+ load32(bitwise_cast<void*>(rawAddress + PayloadOffset), payload); >+} >+void JIT::emitGetVarFromIndirectPointer(JSValue** operand, GPRReg tag, GPRReg payload) >+{ >+ loadPtr(operand, payload); >+ load32(Address(payload, TagOffset), tag); >+ load32(Address(payload, PayloadOffset), payload); >+} >+
>+void JIT::emitGetClosureVar(int scope, uintptr_t operand) >+{ >+ emitLoad(scope, regT1, regT0); >+ load32(Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register) + TagOffset), regT1); >+ load32(Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register) + PayloadOffset), regT0); >+} >+
>+void JIT::emit_op_get_from_scope(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int scope = currentInstruction[2].u.operand; >+ ResolveType resolveType = GetPutInfo(currentInstruction[4].u.operand).resolveType(); >+ Structure** structureSlot = currentInstruction[5].u.structure.slot(); >+ uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&currentInstruction[6].u.pointer); >+ >+ auto emitCode = [&] (ResolveType resolveType, bool indirectLoadForOperand) { >+ switch (resolveType) { >+ case GlobalProperty: >+ case GlobalPropertyWithVarInjectionChecks: { >+ emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection. >+ GPRReg base = regT2; >+ GPRReg resultTag = regT1; >+ GPRReg resultPayload = regT0; >+ GPRReg offset = regT3; >+ >+ move(regT0, base); >+ load32(operandSlot, offset); >+ if (!ASSERT_DISABLED) { >+ Jump isOutOfLine = branch32(GreaterThanOrEqual, offset, TrustedImm32(firstOutOfLineOffset)); >+ abortWithReason(JITOffsetIsNotOutOfLine); >+ isOutOfLine.link(this); >+ } >+ loadPtr(Address(base, JSObject::butterflyOffset()), base); >+ neg32(offset); >+ load32(BaseIndex(base, offset, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload) + (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue)), resultPayload); >+ load32(BaseIndex(base, offset, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag) + (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue)), resultTag); >+ break; >+ } >+ case GlobalVar: >+ case GlobalVarWithVarInjectionChecks: >+ case GlobalLexicalVar: >+ case GlobalLexicalVarWithVarInjectionChecks: >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ if (indirectLoadForOperand) >+ emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT1, regT0); >+ else >+ emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT1, regT0); >+ if (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks) // TDZ check. >+ addSlowCase(branchIfEmpty(regT1)); >+ break; >+ case ClosureVar: >+ case ClosureVarWithVarInjectionChecks: >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ emitGetClosureVar(scope, *operandSlot); >+ break; >+ case Dynamic: >+ addSlowCase(jump()); >+ break; >+ case ModuleVar: >+ case LocalClosureVar: >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ }; >+ >+ switch (resolveType) { >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: { >+ JumpList skipToEnd; >+ load32(&currentInstruction[4], regT0); >+ and32(TrustedImm32(GetPutInfo::typeBits), regT0); // Load ResolveType into T0 >+ >+ Jump isGlobalProperty = branch32(Equal, regT0, TrustedImm32(GlobalProperty)); >+ Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks)); >+ isGlobalProperty.link(this); >+ emitCode(GlobalProperty, false); >+ skipToEnd.append(jump()); >+ notGlobalPropertyWithVarInjections.link(this); >+ >+ Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar)); >+ emitCode(GlobalLexicalVar, true); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVar.link(this); >+ >+ Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks)); >+ emitCode(GlobalLexicalVarWithVarInjectionChecks, true); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVarWithVarInjections.link(this); >+ >+ addSlowCase(jump()); >+ >+ skipToEnd.link(this); >+ break; >+ } >+ >+ default: >+ emitCode(resolveType, false); >+ break; >+ } >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+} >+
>+void JIT::emitSlow_op_get_from_scope(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int dst = currentInstruction[1].u.operand; >+ callOperationWithProfile(operationGetFromScope, dst, currentInstruction); >+} >+
>+void JIT::emitPutGlobalVariable(JSValue* operand, int value, WatchpointSet* set) >+{ >+ emitLoad(value, regT1, regT0); >+ emitNotifyWrite(set); >+ uintptr_t rawAddress = bitwise_cast<uintptr_t>(operand); >+ store32(regT1, bitwise_cast<void*>(rawAddress + TagOffset)); >+ store32(regT0, bitwise_cast<void*>(rawAddress + PayloadOffset)); >+} >+
>+void JIT::emitPutGlobalVariableIndirect(JSValue** addressOfOperand, int value, WatchpointSet** indirectWatchpointSet) >+{ >+ emitLoad(value, regT1, regT0); >+ loadPtr(indirectWatchpointSet, regT2); >+ emitNotifyWrite(regT2); >+ loadPtr(addressOfOperand, regT2); >+ store32(regT1, Address(regT2, TagOffset)); >+ store32(regT0, Address(regT2, PayloadOffset)); >+} >+
>+void JIT::emitPutClosureVar(int scope, uintptr_t operand, int value, WatchpointSet* set) >+{ >+ emitLoad(value, regT3, regT2); >+ emitLoad(scope, regT1, regT0); >+ emitNotifyWrite(set); >+ store32(regT3, Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register) + TagOffset)); >+ store32(regT2, Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register) + PayloadOffset)); >+} >+
>+void JIT::emit_op_put_to_scope(Instruction* currentInstruction) >+{ >+ int scope = currentInstruction[1].u.operand; >+ int value = currentInstruction[3].u.operand; >+ GetPutInfo getPutInfo = GetPutInfo(currentInstruction[4].u.operand); >+ ResolveType resolveType = getPutInfo.resolveType(); >+ Structure** structureSlot = currentInstruction[5].u.structure.slot(); >+ uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&currentInstruction[6].u.pointer); >+ >+ auto emitCode = [&] (ResolveType resolveType, bool indirectLoadForOperand) { >+ switch (resolveType) { >+ case GlobalProperty: >+ case GlobalPropertyWithVarInjectionChecks: { >+ emitWriteBarrier(m_codeBlock->globalObject(), value, ShouldFilterValue); >+ emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection. >+ emitLoad(value, regT3, regT2); >+ >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0); >+ loadPtr(operandSlot, regT1); >+ negPtr(regT1); >+ store32(regT3, BaseIndex(regT0, regT1, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag))); >+ store32(regT2, BaseIndex(regT0, regT1, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload))); >+ break; >+ } >+ case GlobalVar: >+ case GlobalVarWithVarInjectionChecks: >+ case GlobalLexicalVar: >+ case GlobalLexicalVarWithVarInjectionChecks: { >+ JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_codeBlock); >+ RELEASE_ASSERT(constantScope); >+ emitWriteBarrier(constantScope, value, ShouldFilterValue); >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ if (!isInitialization(getPutInfo.initializationMode()) && (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks)) { >+ // We need to do a TDZ check here because we can't always prove we need to emit TDZ checks statically. >+ if (indirectLoadForOperand) >+ emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT1, regT0); >+ else >+ emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT1, regT0); >+ addSlowCase(branchIfEmpty(regT1)); >+ } >+ if (indirectLoadForOperand) >+ emitPutGlobalVariableIndirect(bitwise_cast<JSValue**>(operandSlot), value, bitwise_cast<WatchpointSet**>(&currentInstruction[5])); >+ else >+ emitPutGlobalVariable(bitwise_cast<JSValue*>(*operandSlot), value, currentInstruction[5].u.watchpointSet); >+ break; >+ } >+ case LocalClosureVar: >+ case ClosureVar: >+ case ClosureVarWithVarInjectionChecks: >+ emitWriteBarrier(scope, value, ShouldFilterValue); >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ emitPutClosureVar(scope, *operandSlot, value, currentInstruction[5].u.watchpointSet); >+ break; >+ case ModuleVar: >+ case Dynamic: >+ addSlowCase(jump()); >+ break; >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ }; >+ >+ switch (resolveType) { >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: { >+ JumpList skipToEnd; >+ load32(&currentInstruction[4], regT0); >+ and32(TrustedImm32(GetPutInfo::typeBits), regT0); // Load ResolveType into T0 >+ >+ Jump isGlobalProperty = branch32(Equal, regT0, TrustedImm32(GlobalProperty)); >+ Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks)); >+ isGlobalProperty.link(this); >+ emitCode(GlobalProperty, false); >+ skipToEnd.append(jump()); >+ notGlobalPropertyWithVarInjections.link(this); >+ >+ Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar)); >+ emitCode(GlobalLexicalVar, true); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVar.link(this); >+ >+ Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks)); >+
emitCode(GlobalLexicalVarWithVarInjectionChecks, true); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVarWithVarInjections.link(this); >+ >+ addSlowCase(jump()); >+ >+ skipToEnd.link(this); >+ break; >+ } >+ >+ default: >+ emitCode(resolveType, false); >+ break; >+ } >+} >+ >+void JIT::emitSlow_op_put_to_scope(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ GetPutInfo getPutInfo = GetPutInfo(currentInstruction[4].u.operand); >+ ResolveType resolveType = getPutInfo.resolveType(); >+ if (resolveType == ModuleVar) { >+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_throw_strict_mode_readonly_property_write_error); >+ slowPathCall.call(); >+ } else >+ callOperation(operationPutToScope, currentInstruction); >+} >+ >+void JIT::emit_op_get_from_arguments(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int arguments = currentInstruction[2].u.operand; >+ int index = currentInstruction[3].u.operand; >+ >+ emitLoadPayload(arguments, regT0); >+ load32(Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>) + TagOffset), regT1); >+ load32(Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>) + PayloadOffset), regT0); >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emit_op_put_to_arguments(Instruction* currentInstruction) >+{ >+ int arguments = currentInstruction[1].u.operand; >+ int index = currentInstruction[2].u.operand; >+ int value = currentInstruction[3].u.operand; >+ >+ emitWriteBarrier(arguments, value, ShouldFilterValue); >+ >+ emitLoadPayload(arguments, regT0); >+ emitLoad(value, regT1, regT2); >+ store32(regT1, Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>) + TagOffset)); >+ store32(regT2, Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>) + PayloadOffset)); >+} >+ >+void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode) >+{ >+ Jump valueNotCell; >+ if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) { >+ emitLoadTag(value, regT0); >+ valueNotCell = branchIfNotCell(regT0); >+ } >+ >+ emitLoad(owner, regT0, regT1); >+ Jump ownerNotCell; >+ if (mode == ShouldFilterBase || mode == ShouldFilterBaseAndValue) >+ ownerNotCell = branchIfNotCell(regT0); >+ >+ Jump ownerIsRememberedOrInEden = barrierBranch(*vm(), regT1, regT2); >+ callOperation(operationWriteBarrierSlowPath, regT1); >+ ownerIsRememberedOrInEden.link(this); >+ >+ if (mode == ShouldFilterBase || mode == ShouldFilterBaseAndValue) >+ ownerNotCell.link(this); >+ if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) >+ valueNotCell.link(this); >+} >+ >+void JIT::emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode mode) >+{ >+ Jump valueNotCell; >+ if (mode == ShouldFilterValue) { >+ emitLoadTag(value, regT0); >+ valueNotCell = branchIfNotCell(regT0); >+ } >+ >+ emitWriteBarrier(owner); >+ >+ if (mode == ShouldFilterValue) >+ valueNotCell.link(this); >+} >+ >+void JIT::emitPutCallResult(Instruction* instruction) >+{ >+ int dst = instruction[1].u.operand; >+ emitValueProfilingSite(); >+ emitStore(dst, regT1, regT0); >+} >+ >+void JIT::emit_op_ret(Instruction* currentInstruction) >+{ >+ unsigned dst = currentInstruction[1].u.operand; >+ >+ emitLoad(dst, regT1, regT0); >+ >+ checkStackPointerAlignment(); >+ emitRestoreCalleeSaves(); >+ emitFunctionEpilogue(); >+ ret(); >+} >+ >+void 
JIT::emitSlow_op_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_call, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_tail_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_tail_call, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_call_eval(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_call_eval, currentInstruction, iter, m_callLinkInfoIndex); >+} >+ >+void JIT::emitSlow_op_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_tail_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_tail_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_tail_call_forward_arguments(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_tail_call_forward_arguments, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_construct_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_construct_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_construct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_construct, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_call(Instruction* currentInstruction) >+{ >+ compileOpCall(op_call, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_tail_call(Instruction* currentInstruction) >+{ >+ compileOpCall(op_tail_call, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_call_eval(Instruction* currentInstruction) >+{ >+ compileOpCall(op_call_eval, currentInstruction, m_callLinkInfoIndex); >+} >+ >+void JIT::emit_op_call_varargs(Instruction* currentInstruction) >+{ >+ compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_tail_call_varargs(Instruction* currentInstruction) >+{ >+ compileOpCall(op_tail_call_varargs, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_tail_call_forward_arguments(Instruction* currentInstruction) >+{ >+ compileOpCall(op_tail_call_forward_arguments, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_construct_varargs(Instruction* currentInstruction) >+{ >+ compileOpCall(op_construct_varargs, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_construct(Instruction* currentInstruction) >+{ >+ compileOpCall(op_construct, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::compileSetupVarargsFrame(OpcodeID opcode, Instruction* instruction, CallLinkInfo* info) >+{ >+ int thisValue = instruction[3].u.operand; >+ int arguments = instruction[4].u.operand; >+ int firstFreeRegister = instruction[5].u.operand; >+ int firstVarArgOffset = instruction[6].u.operand; >+ >+ emitLoad(arguments, regT1, regT0); >+ Z_JITOperation_EJZZ sizeOperation; >+ if (opcode == op_tail_call_forward_arguments) >+ sizeOperation = operationSizeFrameForForwardArguments; >+ else >+ sizeOperation = operationSizeFrameForVarargs; >+ callOperation(sizeOperation, JSValueRegs(regT1, 
regT0), -firstFreeRegister, firstVarArgOffset); >+ move(TrustedImm32(-firstFreeRegister), regT1); >+ emitSetVarargsFrame(*this, returnValueGPR, false, regT1, regT1); >+ addPtr(TrustedImm32(-(sizeof(CallerFrameAndPC) + WTF::roundUpToMultipleOf(stackAlignmentBytes(), 6 * sizeof(void*)))), regT1, stackPointerRegister); >+ emitLoad(arguments, regT2, regT4); >+ F_JITOperation_EFJZZ setupOperation; >+ if (opcode == op_tail_call_forward_arguments) >+ setupOperation = operationSetupForwardArgumentsFrame; >+ else >+ setupOperation = operationSetupVarargsFrame; >+ callOperation(setupOperation, regT1, JSValueRegs(regT2, regT4), firstVarArgOffset, regT0); >+ move(returnValueGPR, regT1); >+ >+ // Profile the argument count. >+ load32(Address(regT1, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), regT2); >+ load32(info->addressOfMaxNumArguments(), regT0); >+ Jump notBiggest = branch32(Above, regT0, regT2); >+ store32(regT2, info->addressOfMaxNumArguments()); >+ notBiggest.link(this); >+ >+ // Initialize 'this'. >+ emitLoad(thisValue, regT2, regT0); >+ store32(regT0, Address(regT1, PayloadOffset + (CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))))); >+ store32(regT2, Address(regT1, TagOffset + (CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))))); >+ >+ addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), regT1, stackPointerRegister); >+} >+ >+void JIT::compileCallEval(Instruction* instruction) >+{ >+ addPtr(TrustedImm32(-static_cast<ptrdiff_t>(sizeof(CallerFrameAndPC))), stackPointerRegister, regT1); >+ storePtr(callFrameRegister, Address(regT1, CallFrame::callerFrameOffset())); >+ >+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ >+ callOperation(operationCallEval, regT1); >+ >+ addSlowCase(branchIfEmpty(regT1)); >+ >+ sampleCodeBlock(m_codeBlock); >+ >+ emitPutCallResult(instruction); >+} >+ >+void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ CallLinkInfo* info = m_codeBlock->addCallLinkInfo(); >+ info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0); >+ >+ int registerOffset = -instruction[4].u.operand; >+ int callee = instruction[2].u.operand; >+ >+ addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); >+ >+ emitLoad(callee, regT1, regT0); >+ emitDumbVirtualCall(*vm(), info); >+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ checkStackPointerAlignment(); >+ >+ sampleCodeBlock(m_codeBlock); >+ >+ emitPutCallResult(instruction); >+} >+ >+void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex) >+{ >+ int callee = instruction[2].u.operand; >+ >+ /* Caller always: >+ - Updates callFrameRegister to callee callFrame. >+ - Initializes ArgumentCount; CallerFrame; Callee. >+ >+ For a JS call: >+ - Callee initializes ReturnPC; CodeBlock. >+ - Callee restores callFrameRegister before return. >+ >+ For a non-JS call: >+ - Caller initializes ReturnPC; CodeBlock. >+ - Caller restores callFrameRegister after return. 
>+ */ >+ CallLinkInfo* info = nullptr; >+ if (opcodeID != op_call_eval) >+ info = m_codeBlock->addCallLinkInfo(); >+ if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) >+ compileSetupVarargsFrame(opcodeID, instruction, info); >+ else { >+ int argCount = instruction[3].u.operand; >+ int registerOffset = -instruction[4].u.operand; >+ >+ if (opcodeID == op_call && shouldEmitProfiling()) { >+ emitLoad(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0, regT1); >+ Jump done = branchIfNotCell(regT0); >+ loadPtr(Address(regT1, JSCell::structureIDOffset()), regT1); >+ storePtr(regT1, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructureID()); >+ done.link(this); >+ } >+ >+ addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); >+ >+ store32(TrustedImm32(argCount), Address(stackPointerRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC))); >+ } // SP holds newCallFrame + sizeof(CallerFrameAndPC), with ArgumentCount initialized. >+ >+ uint32_t locationBits = CallSiteIndex(instruction).bits(); >+ store32(TrustedImm32(locationBits), tagFor(CallFrameSlot::argumentCount)); >+ emitLoad(callee, regT1, regT0); // regT1, regT0 holds callee. >+ >+ store32(regT0, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC))); >+ store32(regT1, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) + TagOffset - sizeof(CallerFrameAndPC))); >+ >+ if (opcodeID == op_call_eval) { >+ compileCallEval(instruction); >+ return; >+ } >+ >+ if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) >+ emitRestoreCalleeSaves(); >+ >+ addSlowCase(branchIfNotCell(regT1)); >+ >+ DataLabelPtr addressOfLinkedFunctionCheck; >+ Jump slowCase = branchPtrWithPatch(NotEqual, regT0, addressOfLinkedFunctionCheck, TrustedImmPtr(nullptr)); >+ >+ addSlowCase(slowCase); >+ >+ ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex); >+ info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0); >+ m_callCompilationInfo.append(CallCompilationInfo()); >+ m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck; >+ m_callCompilationInfo[callLinkInfoIndex].callLinkInfo = info; >+ >+ checkStackPointerAlignment(); >+ if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) { >+ prepareForTailCallSlow(); >+ m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall(); >+ return; >+ } >+ >+ m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(); >+ >+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ checkStackPointerAlignment(); >+ >+ sampleCodeBlock(m_codeBlock); >+ emitPutCallResult(instruction); >+} >+ >+void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter, unsigned callLinkInfoIndex) >+{ >+ if (opcodeID == op_call_eval) { >+ compileCallEvalSlowCase(instruction, iter); >+ return; >+ } >+ >+ linkAllSlowCases(iter); >+ >+ move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); >+ >+ if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) >+ 
emitRestoreCalleeSaves(); >+ >+ m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).retaggedCode<NoPtrTag>()); >+ >+ if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) { >+ abortWithReason(JITDidReturnFromTailCall); >+ return; >+ } >+ >+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ checkStackPointerAlignment(); >+ >+ sampleCodeBlock(m_codeBlock); >+ emitPutCallResult(instruction); >+} >+ >+#endif // USE(JSVALUE32_64) >+ >+} // namespace JSC >+ >+#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JIT64.cpp b/Source/JavaScriptCore/jit/JIT64.cpp >new file mode 100644 >index 0000000000000000000000000000000000000000..f0f5a8acd5d7208a990c396fcd15d1f978450ec7 >--- /dev/null >+++ b/Source/JavaScriptCore/jit/JIT64.cpp >@@ -0,0 +1,2893 @@ >+/* >+ * Copyright (C) 2008-2018 Apple Inc. All rights reserved. >+ * Copyright (C) 2010 Patrick Gansterer <paroga@paroga.com> >+ * Copyright (C) 2018 Yusuke Suzuki <utatane.tea@gmail.com> >+ * >+ * Redistribution and use in source and binary forms, with or without >+ * modification, are permitted provided that the following conditions >+ * are met: >+ * 1. Redistributions of source code must retain the above copyright >+ * notice, this list of conditions and the following disclaimer. >+ * 2. Redistributions in binary form must reproduce the above copyright >+ * notice, this list of conditions and the following disclaimer in the >+ * documentation and/or other materials provided with the distribution. >+ * >+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
>+ */ >+ >+#include "config.h" >+#include "JIT.h" >+ >+#if ENABLE(JIT) >+ >+#include "BytecodeStructs.h" >+#include "CallFrameShuffler.h" >+#include "CodeBlock.h" >+#include "DirectArguments.h" >+#include "Exception.h" >+#include "GCAwareJITStubRoutine.h" >+#include "InterpreterInlines.h" >+#include "JITInlines.h" >+#include "JSArray.h" >+#include "JSCast.h" >+#include "JSFunction.h" >+#include "JSLexicalEnvironment.h" >+#include "JSPropertyNameEnumerator.h" >+#include "LinkBuffer.h" >+#include "SetupVarargsFrame.h" >+#include "SlowPathCall.h" >+#include "StackAlignment.h" >+#include "StructureStubInfo.h" >+#include "ThunkGenerators.h" >+#include "TypeLocation.h" >+#include "TypeProfilerLog.h" >+#include "VirtualRegister.h" >+#include <wtf/ScopedLambda.h> >+#include <wtf/StringPrintStream.h> >+ >+namespace JSC { >+ >+#if USE(JSVALUE64) >+ >+void JIT::emit_op_mov(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(src, regT0); >+ emitPutVirtualRegister(dst); >+} >+ >+ >+void JIT::emit_op_end(Instruction* currentInstruction) >+{ >+ RELEASE_ASSERT(returnValueGPR != callFrameRegister); >+ emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); >+ emitRestoreCalleeSaves(); >+ emitFunctionEpilogue(); >+ ret(); >+} >+ >+void JIT::emit_op_jmp(Instruction* currentInstruction) >+{ >+ unsigned target = currentInstruction[1].u.operand; >+ addJump(jump(), target); >+} >+ >+void JIT::emit_op_new_object(Instruction* currentInstruction) >+{ >+ Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); >+ size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity()); >+ Allocator allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); >+ >+ RegisterID resultReg = regT0; >+ RegisterID allocatorReg = regT1; >+ RegisterID scratchReg = regT2; >+ >+ if (!allocator) >+ addSlowCase(jump()); >+ else { >+ JumpList slowCases; >+ auto butterfly = TrustedImmPtr(nullptr); >+ emitAllocateJSObject(resultReg, JITAllocator::constant(allocator), allocatorReg, TrustedImmPtr(structure), butterfly, scratchReg, slowCases); >+ emitInitializeInlineStorage(resultReg, structure->inlineCapacity()); >+ addSlowCase(slowCases); >+ emitPutVirtualRegister(currentInstruction[1].u.operand); >+ } >+} >+ >+void JIT::emitSlow_op_new_object(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int dst = currentInstruction[1].u.operand; >+ Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); >+ callOperation(operationNewObject, structure); >+ emitStoreCell(dst, returnValueGPR); >+} >+ >+void JIT::emit_op_overrides_has_instance(Instruction* currentInstruction) >+{ >+ auto& bytecode = *reinterpret_cast<OpOverridesHasInstance*>(currentInstruction); >+ int dst = bytecode.dst(); >+ int constructor = bytecode.constructor(); >+ int hasInstanceValue = bytecode.hasInstanceValue(); >+ >+ emitGetVirtualRegister(hasInstanceValue, regT0); >+ >+ // We don't jump if we know what Symbol.hasInstance would do. >+ Jump customhasInstanceValue = branchPtr(NotEqual, regT0, TrustedImmPtr(m_codeBlock->globalObject()->functionProtoHasInstanceSymbolFunction())); >+ >+ emitGetVirtualRegister(constructor, regT0); >+ >+ // Check that constructor 'ImplementsDefaultHasInstance' i.e. the object is not a C-API user nor a bound function. 
>+ test8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(ImplementsDefaultHasInstance), regT0); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ Jump done = jump(); >+ >+ customhasInstanceValue.link(this); >+ move(TrustedImm32(ValueTrue), regT0); >+ >+ done.link(this); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_instanceof(Instruction* currentInstruction) >+{ >+ auto& bytecode = *reinterpret_cast<OpInstanceof*>(currentInstruction); >+ int dst = bytecode.dst(); >+ int value = bytecode.value(); >+ int proto = bytecode.prototype(); >+ >+ // Load the operands (baseVal, proto, and value respectively) into registers. >+ // We use regT0 for baseVal since we will be done with this first, and we can then use it for the result. >+ emitGetVirtualRegister(value, regT2); >+ emitGetVirtualRegister(proto, regT1); >+ >+ // Check that proto are cells. baseVal must be a cell - this is checked by the get_by_id for Symbol.hasInstance. >+ emitJumpSlowCaseIfNotJSCell(regT2, value); >+ emitJumpSlowCaseIfNotJSCell(regT1, proto); >+ >+ JITInstanceOfGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), >+ RegisterSet::stubUnavailableRegisters(), >+ regT0, // result >+ regT2, // value >+ regT1, // proto >+ regT3, regT4); // scratch >+ gen.generateFastPath(*this); >+ m_instanceOfs.append(gen); >+ >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emitSlow_op_instanceof(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ >+ JITInstanceOfGenerator& gen = m_instanceOfs[m_instanceOfIndex++]; >+ >+ Label coldPathBegin = label(); >+ Call call = callOperation(operationInstanceOfOptimize, resultVReg, gen.stubInfo(), regT2, regT1); >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emit_op_instanceof_custom(Instruction*) >+{ >+ // This always goes to slow path since we expect it to be rare. 
>+ addSlowCase(jump()); >+} >+ >+void JIT::emit_op_is_empty(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(value, regT0); >+ compare64(Equal, regT0, TrustedImm32(JSValue::encode(JSValue())), regT0); >+ >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_is_undefined(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(value, regT0); >+ Jump isCell = branchIfCell(regT0); >+ >+ compare64(Equal, regT0, TrustedImm32(ValueUndefined), regT0); >+ Jump done = jump(); >+ >+ isCell.link(this); >+ Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >+ move(TrustedImm32(0), regT0); >+ Jump notMasqueradesAsUndefined = jump(); >+ >+ isMasqueradesAsUndefined.link(this); >+ emitLoadStructure(*vm(), regT0, regT1, regT2); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ loadPtr(Address(regT1, Structure::globalObjectOffset()), regT1); >+ comparePtr(Equal, regT0, regT1, regT0); >+ >+ notMasqueradesAsUndefined.link(this); >+ done.link(this); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_is_boolean(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(value, regT0); >+ xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), regT0); >+ test64(Zero, regT0, TrustedImm32(static_cast<int32_t>(~1)), regT0); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_is_number(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(value, regT0); >+ test64(NonZero, regT0, tagTypeNumberRegister, regT0); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_is_cell_with_type(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ int type = currentInstruction[3].u.operand; >+ >+ emitGetVirtualRegister(value, regT0); >+ Jump isNotCell = branchIfNotCell(regT0); >+ >+ compare8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(type), regT0); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ Jump done = jump(); >+ >+ isNotCell.link(this); >+ move(TrustedImm32(ValueFalse), regT0); >+ >+ done.link(this); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_is_object(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int value = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(value, regT0); >+ Jump isNotCell = branchIfNotCell(regT0); >+ >+ compare8(AboveOrEqual, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType), regT0); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ Jump done = jump(); >+ >+ isNotCell.link(this); >+ move(TrustedImm32(ValueFalse), regT0); >+ >+ done.link(this); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_ret(Instruction* currentInstruction) >+{ >+ ASSERT(callFrameRegister != regT1); >+ ASSERT(regT1 != returnValueGPR); >+ ASSERT(returnValueGPR != callFrameRegister); >+ >+ // Return the result in %eax. 
>+ emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); >+ >+ checkStackPointerAlignment(); >+ emitRestoreCalleeSaves(); >+ emitFunctionEpilogue(); >+ ret(); >+} >+ >+void JIT::emit_op_to_primitive(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(src, regT0); >+ >+ Jump isImm = branchIfNotCell(regT0); >+ addSlowCase(branchIfObject(regT0)); >+ isImm.link(this); >+ >+ if (dst != src) >+ emitPutVirtualRegister(dst); >+ >+} >+ >+void JIT::emit_op_set_function_name(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >+ emitGetVirtualRegister(currentInstruction[2].u.operand, regT1); >+ callOperation(operationSetFunctionName, regT0, regT1); >+} >+ >+void JIT::emit_op_not(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegister(currentInstruction[2].u.operand, regT0); >+ >+ // Invert against JSValue(false); if the value was tagged as a boolean, then all bits will be >+ // clear other than the low bit (which will be 0 or 1 for false or true inputs respectively). >+ // Then invert against JSValue(true), which will add the tag back in, and flip the low bit. >+ xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), regT0); >+ addSlowCase(branchTestPtr(NonZero, regT0, TrustedImm32(static_cast<int32_t>(~1)))); >+ xor64(TrustedImm32(static_cast<int32_t>(ValueTrue)), regT0); >+ >+ emitPutVirtualRegister(currentInstruction[1].u.operand); >+} >+ >+void JIT::emit_op_jfalse(Instruction* currentInstruction) >+{ >+ unsigned target = currentInstruction[2].u.operand; >+ >+ GPRReg value = regT0; >+ GPRReg result = regT1; >+ GPRReg scratch = regT2; >+ bool shouldCheckMasqueradesAsUndefined = true; >+ >+ emitGetVirtualRegister(currentInstruction[1].u.operand, value); >+ emitConvertValueToBoolean(*vm(), JSValueRegs(value), result, scratch, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()); >+ >+ addJump(branchTest32(Zero, result), target); >+} >+ >+void JIT::emit_op_jeq_null(Instruction* currentInstruction) >+{ >+ int src = currentInstruction[1].u.operand; >+ unsigned target = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(src, regT0); >+ Jump isImmediate = branchIfNotCell(regT0); >+ >+ // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure. >+ Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >+ emitLoadStructure(*vm(), regT0, regT2, regT1); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ addJump(branchPtr(Equal, Address(regT2, Structure::globalObjectOffset()), regT0), target); >+ Jump masqueradesGlobalObjectIsForeign = jump(); >+ >+ // Now handle the immediate cases - undefined & null >+ isImmediate.link(this); >+ and64(TrustedImm32(~TagBitUndefined), regT0); >+ addJump(branch64(Equal, regT0, TrustedImm64(JSValue::encode(jsNull()))), target); >+ >+ isNotMasqueradesAsUndefined.link(this); >+ masqueradesGlobalObjectIsForeign.link(this); >+}; >+void JIT::emit_op_jneq_null(Instruction* currentInstruction) >+{ >+ int src = currentInstruction[1].u.operand; >+ unsigned target = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(src, regT0); >+ Jump isImmediate = branchIfNotCell(regT0); >+ >+ // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure. 
>+ addJump(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target); >+ emitLoadStructure(*vm(), regT0, regT2, regT1); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ addJump(branchPtr(NotEqual, Address(regT2, Structure::globalObjectOffset()), regT0), target); >+ Jump wasNotImmediate = jump(); >+ >+ // Now handle the immediate cases - undefined & null >+ isImmediate.link(this); >+ and64(TrustedImm32(~TagBitUndefined), regT0); >+ addJump(branch64(NotEqual, regT0, TrustedImm64(JSValue::encode(jsNull()))), target); >+ >+ wasNotImmediate.link(this); >+} >+
>+void JIT::emit_op_jneq_ptr(Instruction* currentInstruction) >+{ >+ int src = currentInstruction[1].u.operand; >+ Special::Pointer ptr = currentInstruction[2].u.specialPointer; >+ unsigned target = currentInstruction[3].u.operand; >+ >+ emitGetVirtualRegister(src, regT0); >+ CCallHelpers::Jump equal = branchPtr(Equal, regT0, TrustedImmPtr(actualPointerFor(m_codeBlock, ptr))); >+ store32(TrustedImm32(1), &currentInstruction[4].u.operand); >+ addJump(jump(), target); >+ equal.link(this); >+} >+
>+void JIT::emit_op_eq(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegisters(currentInstruction[2].u.operand, regT0, currentInstruction[3].u.operand, regT1); >+ emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); >+ compare32(Equal, regT1, regT0, regT0); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(currentInstruction[1].u.operand); >+} >+
>+void JIT::emit_op_jeq(Instruction* currentInstruction) >+{ >+ unsigned target = currentInstruction[3].u.operand; >+ emitGetVirtualRegisters(currentInstruction[1].u.operand, regT0, currentInstruction[2].u.operand, regT1); >+ emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); >+ addJump(branch32(Equal, regT0, regT1), target); >+} >+
>+void JIT::emit_op_jtrue(Instruction* currentInstruction) >+{ >+ unsigned target = currentInstruction[2].u.operand; >+ >+ GPRReg value = regT0; >+ GPRReg result = regT1; >+ GPRReg scratch = regT2; >+ bool shouldCheckMasqueradesAsUndefined = true; >+ emitGetVirtualRegister(currentInstruction[1].u.operand, value); >+ emitConvertValueToBoolean(*vm(), JSValueRegs(value), result, scratch, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()); >+ addJump(branchTest32(NonZero, result), target); >+} >+
>+void JIT::emit_op_neq(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegisters(currentInstruction[2].u.operand, regT0, currentInstruction[3].u.operand, regT1); >+ emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); >+ compare32(NotEqual, regT1, regT0, regT0); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ >+ emitPutVirtualRegister(currentInstruction[1].u.operand); >+} >+
>+void JIT::emit_op_jneq(Instruction* currentInstruction) >+{ >+ unsigned target = currentInstruction[3].u.operand; >+ emitGetVirtualRegisters(currentInstruction[1].u.operand, regT0, currentInstruction[2].u.operand, regT1); >+ emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); >+ addJump(branch32(NotEqual, regT0, regT1), target); >+} >+
>+void JIT::emit_op_throw(Instruction* currentInstruction) >+{ >+ ASSERT(regT0 == returnValueGPR); >+ copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >+ callOperationNoExceptionCheck(operationThrow, regT0); >+ jumpToExceptionHandler(*vm()); >+} >+
>+void JIT::compileOpStrictEq(Instruction* currentInstruction, CompileOpStrictEqType type) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src1
= currentInstruction[2].u.operand; >+ int src2 = currentInstruction[3].u.operand; >+ >+ emitGetVirtualRegisters(src1, regT0, src2, regT1); >+ >+ // Jump slow if both are cells (to cover strings). >+ move(regT0, regT2); >+ or64(regT1, regT2); >+ addSlowCase(branchIfCell(regT2)); >+ >+ // Jump slow if either is a double. First test if it's an integer, which is fine, and then test >+ // if it's a double. >+ Jump leftOK = branchIfInt32(regT0); >+ addSlowCase(branchIfNumber(regT0)); >+ leftOK.link(this); >+ Jump rightOK = branchIfInt32(regT1); >+ addSlowCase(branchIfNumber(regT1)); >+ rightOK.link(this); >+ >+ if (type == CompileOpStrictEqType::StrictEq) >+ compare64(Equal, regT1, regT0, regT0); >+ else >+ compare64(NotEqual, regT1, regT0, regT0); >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_stricteq(Instruction* currentInstruction) >+{ >+ compileOpStrictEq(currentInstruction, CompileOpStrictEqType::StrictEq); >+} >+ >+void JIT::emit_op_nstricteq(Instruction* currentInstruction) >+{ >+ compileOpStrictEq(currentInstruction, CompileOpStrictEqType::NStrictEq); >+} >+ >+void JIT::compileOpStrictEqJump(Instruction* currentInstruction, CompileOpStrictEqType type) >+{ >+ int target = currentInstruction[3].u.operand; >+ int src1 = currentInstruction[1].u.operand; >+ int src2 = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegisters(src1, regT0, src2, regT1); >+ >+ // Jump slow if both are cells (to cover strings). >+ move(regT0, regT2); >+ or64(regT1, regT2); >+ addSlowCase(branchIfCell(regT2)); >+ >+ // Jump slow if either is a double. First test if it's an integer, which is fine, and then test >+ // if it's a double. >+ Jump leftOK = branchIfInt32(regT0); >+ addSlowCase(branchIfNumber(regT0)); >+ leftOK.link(this); >+ Jump rightOK = branchIfInt32(regT1); >+ addSlowCase(branchIfNumber(regT1)); >+ rightOK.link(this); >+ >+ if (type == CompileOpStrictEqType::StrictEq) >+ addJump(branch64(Equal, regT1, regT0), target); >+ else >+ addJump(branch64(NotEqual, regT1, regT0), target); >+} >+ >+void JIT::emit_op_jstricteq(Instruction* currentInstruction) >+{ >+ compileOpStrictEqJump(currentInstruction, CompileOpStrictEqType::StrictEq); >+} >+ >+void JIT::emit_op_jnstricteq(Instruction* currentInstruction) >+{ >+ compileOpStrictEqJump(currentInstruction, CompileOpStrictEqType::NStrictEq); >+} >+ >+void JIT::emitSlow_op_jstricteq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ unsigned target = currentInstruction[3].u.operand; >+ callOperation(operationCompareStrictEq, regT0, regT1); >+ emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); >+} >+ >+void JIT::emitSlow_op_jnstricteq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ unsigned target = currentInstruction[3].u.operand; >+ callOperation(operationCompareStrictEq, regT0, regT1); >+ emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); >+} >+ >+void JIT::emit_op_to_number(Instruction* currentInstruction) >+{ >+ int dstVReg = currentInstruction[1].u.operand; >+ int srcVReg = currentInstruction[2].u.operand; >+ emitGetVirtualRegister(srcVReg, regT0); >+ >+ addSlowCase(branchIfNotNumber(regT0)); >+ >+ emitValueProfilingSite(); >+ if (srcVReg != dstVReg) >+ emitPutVirtualRegister(dstVReg); >+} >+ >+void JIT::emit_op_to_string(Instruction* currentInstruction) >+{ >+ int srcVReg = currentInstruction[2].u.operand; >+ emitGetVirtualRegister(srcVReg, regT0); 
>+ >+ addSlowCase(branchIfNotCell(regT0)); >+ addSlowCase(branchIfNotString(regT0)); >+ >+ emitPutVirtualRegister(currentInstruction[1].u.operand); >+} >+ >+void JIT::emit_op_to_object(Instruction* currentInstruction) >+{ >+ int dstVReg = currentInstruction[1].u.operand; >+ int srcVReg = currentInstruction[2].u.operand; >+ emitGetVirtualRegister(srcVReg, regT0); >+ >+ addSlowCase(branchIfNotCell(regT0)); >+ addSlowCase(branchIfNotObject(regT0)); >+ >+ emitValueProfilingSite(); >+ if (srcVReg != dstVReg) >+ emitPutVirtualRegister(dstVReg); >+} >+ >+void JIT::emit_op_catch(Instruction* currentInstruction) >+{ >+ restoreCalleeSavesFromEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >+ >+ move(TrustedImmPtr(m_vm), regT3); >+ load64(Address(regT3, VM::callFrameForCatchOffset()), callFrameRegister); >+ storePtr(TrustedImmPtr(nullptr), Address(regT3, VM::callFrameForCatchOffset())); >+ >+ addPtr(TrustedImm32(stackPointerOffsetFor(codeBlock()) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ >+ callOperationNoExceptionCheck(operationCheckIfExceptionIsUncatchableAndNotifyProfiler); >+ Jump isCatchableException = branchTest32(Zero, returnValueGPR); >+ jumpToExceptionHandler(*vm()); >+ isCatchableException.link(this); >+ >+ move(TrustedImmPtr(m_vm), regT3); >+ load64(Address(regT3, VM::exceptionOffset()), regT0); >+ store64(TrustedImm64(JSValue::encode(JSValue())), Address(regT3, VM::exceptionOffset())); >+ emitPutVirtualRegister(currentInstruction[1].u.operand); >+ >+ load64(Address(regT0, Exception::valueOffset()), regT0); >+ emitPutVirtualRegister(currentInstruction[2].u.operand); >+ >+#if ENABLE(DFG_JIT) >+ // FIXME: consider inline caching the process of doing OSR entry, including >+ // argument type proofs, storing locals to the buffer, etc >+ // https://bugs.webkit.org/show_bug.cgi?id=175598 >+ >+ ValueProfileAndOperandBuffer* buffer = static_cast<ValueProfileAndOperandBuffer*>(currentInstruction[3].u.pointer); >+ if (buffer || !shouldEmitProfiling()) >+ callOperation(operationTryOSREnterAtCatch, m_bytecodeOffset); >+ else >+ callOperation(operationTryOSREnterAtCatchAndValueProfile, m_bytecodeOffset); >+ auto skipOSREntry = branchTestPtr(Zero, returnValueGPR); >+ emitRestoreCalleeSaves(); >+ jump(returnValueGPR, ExceptionHandlerPtrTag); >+ skipOSREntry.link(this); >+ if (buffer && shouldEmitProfiling()) { >+ buffer->forEach([&] (ValueProfileAndOperand& profile) { >+ JSValueRegs regs(regT0); >+ emitGetVirtualRegister(profile.m_operand, regs); >+ emitValueProfilingSite(profile.m_profile); >+ }); >+ } >+#endif // ENABLE(DFG_JIT) >+} >+ >+void JIT::emit_op_identity_with_profile(Instruction*) >+{ >+ // We don't need to do anything here... >+} >+ >+void JIT::emit_op_get_parent_scope(Instruction* currentInstruction) >+{ >+ int currentScope = currentInstruction[2].u.operand; >+ emitGetVirtualRegister(currentScope, regT0); >+ loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0); >+ emitStoreCell(currentInstruction[1].u.operand, regT0); >+} >+ >+void JIT::emit_op_switch_imm(Instruction* currentInstruction) >+{ >+ size_t tableIndex = currentInstruction[1].u.operand; >+ unsigned defaultOffset = currentInstruction[2].u.operand; >+ unsigned scrutinee = currentInstruction[3].u.operand; >+ >+ // create jump table for switch destinations, track this switch statement. 
>+ SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex); >+ m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset, SwitchRecord::Immediate)); >+ jumpTable->ensureCTITable(); >+ >+ emitGetVirtualRegister(scrutinee, regT0); >+ callOperation(operationSwitchImmWithUnknownKeyType, regT0, tableIndex); >+ jump(returnValueGPR, JSSwitchPtrTag); >+} >+ >+void JIT::emit_op_switch_char(Instruction* currentInstruction) >+{ >+ size_t tableIndex = currentInstruction[1].u.operand; >+ unsigned defaultOffset = currentInstruction[2].u.operand; >+ unsigned scrutinee = currentInstruction[3].u.operand; >+ >+ // create jump table for switch destinations, track this switch statement. >+ SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex); >+ m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset, SwitchRecord::Character)); >+ jumpTable->ensureCTITable(); >+ >+ emitGetVirtualRegister(scrutinee, regT0); >+ callOperation(operationSwitchCharWithUnknownKeyType, regT0, tableIndex); >+ jump(returnValueGPR, JSSwitchPtrTag); >+} >+ >+void JIT::emit_op_switch_string(Instruction* currentInstruction) >+{ >+ size_t tableIndex = currentInstruction[1].u.operand; >+ unsigned defaultOffset = currentInstruction[2].u.operand; >+ unsigned scrutinee = currentInstruction[3].u.operand; >+ >+ // create jump table for switch destinations, track this switch statement. >+ StringJumpTable* jumpTable = &m_codeBlock->stringSwitchJumpTable(tableIndex); >+ m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset)); >+ >+ emitGetVirtualRegister(scrutinee, regT0); >+ callOperation(operationSwitchStringWithUnknownKeyType, regT0, tableIndex); >+ jump(returnValueGPR, JSSwitchPtrTag); >+} >+ >+void JIT::emit_op_debug(Instruction* currentInstruction) >+{ >+ load32(codeBlock()->debuggerRequestsAddress(), regT0); >+ Jump noDebuggerRequests = branchTest32(Zero, regT0); >+ callOperation(operationDebug, currentInstruction[1].u.operand); >+ noDebuggerRequests.link(this); >+} >+ >+void JIT::emit_op_eq_null(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src1 = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(src1, regT0); >+ Jump isImmediate = branchIfNotCell(regT0); >+ >+ Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >+ move(TrustedImm32(0), regT0); >+ Jump wasNotMasqueradesAsUndefined = jump(); >+ >+ isMasqueradesAsUndefined.link(this); >+ emitLoadStructure(*vm(), regT0, regT2, regT1); >+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >+ loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2); >+ comparePtr(Equal, regT0, regT2, regT0); >+ Jump wasNotImmediate = jump(); >+ >+ isImmediate.link(this); >+ >+ and64(TrustedImm32(~TagBitUndefined), regT0); >+ compare64(Equal, regT0, TrustedImm32(ValueNull), regT0); >+ >+ wasNotImmediate.link(this); >+ wasNotMasqueradesAsUndefined.link(this); >+ >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(dst); >+ >+} >+ >+void JIT::emit_op_neq_null(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int src1 = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(src1, regT0); >+ Jump isImmediate = branchIfNotCell(regT0); >+ >+ Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >+ move(TrustedImm32(1), regT0); >+ Jump 
wasNotMasqueradesAsUndefined = jump();
>+
>+ isMasqueradesAsUndefined.link(this);
>+ emitLoadStructure(*vm(), regT0, regT2, regT1);
>+ move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
>+ loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2);
>+ comparePtr(NotEqual, regT0, regT2, regT0);
>+ Jump wasNotImmediate = jump();
>+
>+ isImmediate.link(this);
>+
>+ and64(TrustedImm32(~TagBitUndefined), regT0);
>+ compare64(NotEqual, regT0, TrustedImm32(ValueNull), regT0);
>+
>+ wasNotImmediate.link(this);
>+ wasNotMasqueradesAsUndefined.link(this);
>+
>+ boxBoolean(regT0, JSValueRegs { regT0 });
>+ emitPutVirtualRegister(dst);
>+}
>+
>+void JIT::emit_op_enter(Instruction*)
>+{
>+ // Even though CTI doesn't use them, we initialize our constant
>+ // registers to zap stale pointers, to avoid unnecessarily prolonging
>+ // object lifetime and increasing GC pressure.
>+ size_t count = m_codeBlock->m_numVars;
>+ for (size_t j = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); j < count; ++j)
>+ emitInitRegister(virtualRegisterForLocal(j).offset());
>+
>+ emitWriteBarrier(m_codeBlock);
>+
>+ emitEnterOptimizationCheck();
>+}
>+
>+void JIT::emit_op_get_scope(Instruction* currentInstruction)
>+{
>+ int dst = currentInstruction[1].u.operand;
>+ emitGetFromCallFrameHeaderPtr(CallFrameSlot::callee, regT0);
>+ loadPtr(Address(regT0, JSFunction::offsetOfScopeChain()), regT0);
>+ emitStoreCell(dst, regT0);
>+}
>+
>+void JIT::emit_op_to_this(Instruction* currentInstruction)
>+{
>+ WriteBarrierBase<Structure>* cachedStructure = &currentInstruction[2].u.structure;
>+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT1);
>+
>+ emitJumpSlowCaseIfNotJSCell(regT1);
>+
>+ addSlowCase(branchIfNotType(regT1, FinalObjectType));
>+ loadPtr(cachedStructure, regT2);
>+ addSlowCase(branchTestPtr(Zero, regT2));
>+ load32(Address(regT2, Structure::structureIDOffset()), regT2);
>+ addSlowCase(branch32(NotEqual, Address(regT1, JSCell::structureIDOffset()), regT2));
>+}
>+
>+void JIT::emit_op_create_this(Instruction* currentInstruction)
>+{
>+ int callee = currentInstruction[2].u.operand;
>+ WriteBarrierBase<JSCell>* cachedFunction = &currentInstruction[4].u.jsCell;
>+ RegisterID calleeReg = regT0;
>+ RegisterID rareDataReg = regT4;
>+ RegisterID resultReg = regT0;
>+ RegisterID allocatorReg = regT1;
>+ RegisterID structureReg = regT2;
>+ RegisterID cachedFunctionReg = regT4;
>+ RegisterID scratchReg = regT3;
>+
>+ emitGetVirtualRegister(callee, calleeReg);
>+ addSlowCase(branchIfNotFunction(calleeReg));
>+ loadPtr(Address(calleeReg, JSFunction::offsetOfRareData()), rareDataReg);
>+ addSlowCase(branchTestPtr(Zero, rareDataReg));
>+ xorPtr(TrustedImmPtr(JSFunctionPoison::key()), rareDataReg);
>+ loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg);
>+ loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureReg);
>+
>+ loadPtr(cachedFunction, cachedFunctionReg);
>+ Jump hasSeenMultipleCallees = branchPtr(Equal, cachedFunctionReg, TrustedImmPtr(JSCell::seenMultipleCalleeObjects()));
>+ addSlowCase(branchPtr(NotEqual, calleeReg, cachedFunctionReg));
>+ hasSeenMultipleCallees.link(this);
>+
>+ JumpList slowCases;
>+ auto butterfly = TrustedImmPtr(nullptr);
>+ emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, scratchReg, slowCases);
>+ emitGetVirtualRegister(callee, scratchReg);
>+
loadPtr(Address(scratchReg, JSFunction::offsetOfRareData()), scratchReg); >+ xorPtr(TrustedImmPtr(JSFunctionPoison::key()), scratchReg); >+ load32(Address(scratchReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfInlineCapacity()), scratchReg); >+ emitInitializeInlineStorage(resultReg, scratchReg); >+ addSlowCase(slowCases); >+ emitPutVirtualRegister(currentInstruction[1].u.operand); >+} >+ >+void JIT::emit_op_check_tdz(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >+ addSlowCase(branchIfEmpty(regT0)); >+} >+ >+ >+// Slow cases >+ >+void JIT::emitSlow_op_eq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ callOperation(operationCompareEq, regT0, regT1); >+ boxBoolean(returnValueGPR, JSValueRegs { returnValueGPR }); >+ emitPutVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); >+} >+ >+void JIT::emitSlow_op_neq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ callOperation(operationCompareEq, regT0, regT1); >+ xor32(TrustedImm32(0x1), regT0); >+ boxBoolean(returnValueGPR, JSValueRegs { returnValueGPR }); >+ emitPutVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); >+} >+ >+void JIT::emitSlow_op_jeq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ unsigned target = currentInstruction[3].u.operand; >+ callOperation(operationCompareEq, regT0, regT1); >+ emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); >+} >+ >+void JIT::emitSlow_op_jneq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ unsigned target = currentInstruction[3].u.operand; >+ callOperation(operationCompareEq, regT0, regT1); >+ emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); >+} >+ >+void JIT::emitSlow_op_instanceof_custom(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ auto& bytecode = *reinterpret_cast<OpInstanceofCustom*>(currentInstruction); >+ int dst = bytecode.dst(); >+ int value = bytecode.value(); >+ int constructor = bytecode.constructor(); >+ int hasInstanceValue = bytecode.hasInstanceValue(); >+ >+ emitGetVirtualRegister(value, regT0); >+ emitGetVirtualRegister(constructor, regT1); >+ emitGetVirtualRegister(hasInstanceValue, regT2); >+ callOperation(operationInstanceOfCustom, regT0, regT1, regT2); >+ boxBoolean(returnValueGPR, JSValueRegs { returnValueGPR }); >+ emitPutVirtualRegister(dst, returnValueGPR); >+} >+ >+void JIT::emit_op_has_structure_property(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int enumerator = currentInstruction[4].u.operand; >+ >+ emitGetVirtualRegister(base, regT0); >+ emitGetVirtualRegister(enumerator, regT1); >+ emitJumpSlowCaseIfNotJSCell(regT0, base); >+ >+ load32(Address(regT0, JSCell::structureIDOffset()), regT0); >+ addSlowCase(branch32(NotEqual, regT0, Address(regT1, JSPropertyNameEnumerator::cachedStructureIDOffset()))); >+ >+ move(TrustedImm64(JSValue::encode(jsBoolean(true))), regT0); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::privateCompileHasIndexedProperty(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode) >+{ >+ Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >+ >+ 
PatchableJump badType; >+ >+ // FIXME: Add support for other types like TypedArrays and Arguments. >+ // See https://bugs.webkit.org/show_bug.cgi?id=135033 and https://bugs.webkit.org/show_bug.cgi?id=135034. >+ JumpList slowCases = emitLoadForArrayMode(currentInstruction, arrayMode, badType); >+ move(TrustedImm64(JSValue::encode(jsBoolean(true))), regT0); >+ Jump done = jump(); >+ >+ LinkBuffer patchBuffer(*this, m_codeBlock); >+ >+ patchBuffer.link(badType, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >+ >+ patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >+ >+ byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >+ m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >+ "Baseline has_indexed_property stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >+ >+ MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >+ MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationHasIndexedPropertyGeneric)); >+} >+ >+void JIT::emit_op_has_indexed_property(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >+ >+ emitGetVirtualRegisters(base, regT0, property, regT1); >+ >+ // This is technically incorrect - we're zero-extending an int32. On the hot path this doesn't matter. >+ // We check the value as if it was a uint32 against the m_vectorLength - which will always fail if >+ // number was signed since m_vectorLength is always less than intmax (since the total allocation >+ // size is always less than 4Gb). As such zero extending will have been correct (and extending the value >+ // to 64-bits is necessary since it's used in the address calculation. We zero extend rather than sign >+ // extending since it makes it easier to re-tag the value in the slow case. >+ zeroExtend32ToPtr(regT1, regT1); >+ >+ emitJumpSlowCaseIfNotJSCell(regT0, base); >+ emitArrayProfilingSiteWithCell(regT0, regT2, profile); >+ and32(TrustedImm32(IndexingShapeMask), regT2); >+ >+ JITArrayMode mode = chooseArrayMode(profile); >+ PatchableJump badType; >+ >+ // FIXME: Add support for other types like TypedArrays and Arguments. >+ // See https://bugs.webkit.org/show_bug.cgi?id=135033 and https://bugs.webkit.org/show_bug.cgi?id=135034. 
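>+ // The fast path only ever produces 'true': a mismatched indexing shape, an
>+ // out-of-bounds index, or a hole all take the slow path, so surviving the inline
>+ // load means the indexed property exists.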
>+ JumpList slowCases = emitLoadForArrayMode(currentInstruction, mode, badType); >+ >+ move(TrustedImm64(JSValue::encode(jsBoolean(true))), regT0); >+ >+ addSlowCase(badType); >+ addSlowCase(slowCases); >+ >+ Label done = label(); >+ >+ emitPutVirtualRegister(dst); >+ >+ Label nextHotPath = label(); >+ >+ m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, PatchableJump(), badType, mode, profile, done, nextHotPath)); >+} >+ >+void JIT::emitSlow_op_has_indexed_property(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >+ >+ Label slowPath = label(); >+ >+ emitGetVirtualRegister(base, regT0); >+ emitGetVirtualRegister(property, regT1); >+ Call call = callOperation(operationHasIndexedPropertyDefault, dst, regT0, regT1, byValInfo); >+ >+ m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >+ m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >+ m_byValInstructionIndex++; >+} >+ >+void JIT::emit_op_get_direct_pname(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int index = currentInstruction[4].u.operand; >+ int enumerator = currentInstruction[5].u.operand; >+ >+ // Check that base is a cell >+ emitGetVirtualRegister(base, regT0); >+ emitJumpSlowCaseIfNotJSCell(regT0, base); >+ >+ // Check the structure >+ emitGetVirtualRegister(enumerator, regT2); >+ load32(Address(regT0, JSCell::structureIDOffset()), regT1); >+ addSlowCase(branch32(NotEqual, regT1, Address(regT2, JSPropertyNameEnumerator::cachedStructureIDOffset()))); >+ >+ // Compute the offset >+ emitGetVirtualRegister(index, regT1); >+ // If index is less than the enumerator's cached inline storage, then it's an inline access >+ Jump outOfLineAccess = branch32(AboveOrEqual, regT1, Address(regT2, JSPropertyNameEnumerator::cachedInlineCapacityOffset())); >+ addPtr(TrustedImm32(JSObject::offsetOfInlineStorage()), regT0); >+ signExtend32ToPtr(regT1, regT1); >+ load64(BaseIndex(regT0, regT1, TimesEight), regT0); >+ >+ Jump done = jump(); >+ >+ // Otherwise it's out of line >+ outOfLineAccess.link(this); >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0); >+ sub32(Address(regT2, JSPropertyNameEnumerator::cachedInlineCapacityOffset()), regT1); >+ neg32(regT1); >+ signExtend32ToPtr(regT1, regT1); >+ int32_t offsetOfFirstProperty = static_cast<int32_t>(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue); >+ load64(BaseIndex(regT0, regT1, TimesEight, offsetOfFirstProperty), regT0); >+ >+ done.link(this); >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(dst, regT0); >+} >+ >+void JIT::emit_op_enumerator_structure_pname(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int enumerator = currentInstruction[2].u.operand; >+ int index = currentInstruction[3].u.operand; >+ >+ emitGetVirtualRegister(index, regT0); >+ emitGetVirtualRegister(enumerator, regT1); >+ Jump inBounds = branch32(Below, regT0, Address(regT1, JSPropertyNameEnumerator::endStructurePropertyIndexOffset())); >+ >+ move(TrustedImm64(JSValue::encode(jsNull())), regT0); >+ >+ Jump done = jump(); >+ inBounds.link(this); >+ >+ loadPtr(Address(regT1, 
JSPropertyNameEnumerator::cachedPropertyNamesVectorOffset()), regT1); >+ signExtend32ToPtr(regT0, regT0); >+ load64(BaseIndex(regT1, regT0, TimesEight), regT0); >+ >+ done.link(this); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_enumerator_generic_pname(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int enumerator = currentInstruction[2].u.operand; >+ int index = currentInstruction[3].u.operand; >+ >+ emitGetVirtualRegister(index, regT0); >+ emitGetVirtualRegister(enumerator, regT1); >+ Jump inBounds = branch32(Below, regT0, Address(regT1, JSPropertyNameEnumerator::endGenericPropertyIndexOffset())); >+ >+ move(TrustedImm64(JSValue::encode(jsNull())), regT0); >+ >+ Jump done = jump(); >+ inBounds.link(this); >+ >+ loadPtr(Address(regT1, JSPropertyNameEnumerator::cachedPropertyNamesVectorOffset()), regT1); >+ signExtend32ToPtr(regT0, regT0); >+ load64(BaseIndex(regT1, regT0, TimesEight), regT0); >+ >+ done.link(this); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_profile_type(Instruction* currentInstruction) >+{ >+ TypeLocation* cachedTypeLocation = currentInstruction[2].u.location; >+ int valueToProfile = currentInstruction[1].u.operand; >+ >+ emitGetVirtualRegister(valueToProfile, regT0); >+ >+ JumpList jumpToEnd; >+ >+ jumpToEnd.append(branchIfEmpty(regT0)); >+ >+ // Compile in a predictive type check, if possible, to see if we can skip writing to the log. >+ // These typechecks are inlined to match those of the 64-bit JSValue type checks. >+ if (cachedTypeLocation->m_lastSeenType == TypeUndefined) >+ jumpToEnd.append(branchIfUndefined(regT0)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeNull) >+ jumpToEnd.append(branchIfNull(regT0)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeBoolean) >+ jumpToEnd.append(branchIfBoolean(regT0, regT1)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeAnyInt) >+ jumpToEnd.append(branchIfInt32(regT0)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeNumber) >+ jumpToEnd.append(branchIfNumber(regT0)); >+ else if (cachedTypeLocation->m_lastSeenType == TypeString) { >+ Jump isNotCell = branchIfNotCell(regT0); >+ jumpToEnd.append(branchIfString(regT0)); >+ isNotCell.link(this); >+ } >+ >+ // Load the type profiling log into T2. >+ TypeProfilerLog* cachedTypeProfilerLog = m_vm->typeProfilerLog(); >+ move(TrustedImmPtr(cachedTypeProfilerLog), regT2); >+ // Load the next log entry into T1. >+ loadPtr(Address(regT2, TypeProfilerLog::currentLogEntryOffset()), regT1); >+ >+ // Store the JSValue onto the log entry. >+ store64(regT0, Address(regT1, TypeProfilerLog::LogEntry::valueOffset())); >+ >+ // Store the structureID of the cell if T0 is a cell, otherwise, store 0 on the log entry. >+ Jump notCell = branchIfNotCell(regT0); >+ load32(Address(regT0, JSCell::structureIDOffset()), regT0); >+ store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); >+ Jump skipIsCell = jump(); >+ notCell.link(this); >+ store32(TrustedImm32(0), Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); >+ skipIsCell.link(this); >+ >+ // Store the typeLocation on the log entry. >+ move(TrustedImmPtr(cachedTypeLocation), regT0); >+ store64(regT0, Address(regT1, TypeProfilerLog::LogEntry::locationOffset())); >+ >+ // Increment the current log entry. 
>+ addPtr(TrustedImm32(sizeof(TypeProfilerLog::LogEntry)), regT1); >+ store64(regT1, Address(regT2, TypeProfilerLog::currentLogEntryOffset())); >+ Jump skipClearLog = branchPtr(NotEqual, regT1, TrustedImmPtr(cachedTypeProfilerLog->logEndPtr())); >+ // Clear the log if we're at the end of the log. >+ callOperation(operationProcessTypeProfilerLog); >+ skipClearLog.link(this); >+ >+ jumpToEnd.link(this); >+} >+ >+void JIT::emit_op_log_shadow_chicken_prologue(Instruction* currentInstruction) >+{ >+ updateTopCallFrame(); >+ static_assert(nonArgGPR0 != regT0 && nonArgGPR0 != regT2, "we will have problems if this is true."); >+ GPRReg shadowPacketReg = regT0; >+ GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register. >+ GPRReg scratch2Reg = regT2; >+ ensureShadowChickenPacket(*vm(), shadowPacketReg, scratch1Reg, scratch2Reg); >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT3); >+ logShadowChickenProloguePacket(shadowPacketReg, scratch1Reg, regT3); >+} >+ >+void JIT::emit_op_log_shadow_chicken_tail(Instruction* currentInstruction) >+{ >+ updateTopCallFrame(); >+ static_assert(nonArgGPR0 != regT0 && nonArgGPR0 != regT2, "we will have problems if this is true."); >+ GPRReg shadowPacketReg = regT0; >+ GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register. >+ GPRReg scratch2Reg = regT2; >+ ensureShadowChickenPacket(*vm(), shadowPacketReg, scratch1Reg, scratch2Reg); >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT2); >+ emitGetVirtualRegister(currentInstruction[2].u.operand, regT3); >+ logShadowChickenTailPacket(shadowPacketReg, JSValueRegs(regT2), regT3, m_codeBlock, CallSiteIndex(m_bytecodeOffset)); >+} >+ >+void JIT::emit_op_unsigned(Instruction* currentInstruction) >+{ >+ int result = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ >+ emitGetVirtualRegister(op1, regT0); >+ emitJumpSlowCaseIfNotInt(regT0); >+ addSlowCase(branch32(LessThan, regT0, TrustedImm32(0))); >+ boxInt32(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(result, regT0); >+} >+ >+void JIT::emit_compareAndJump(OpcodeID, int op1, int op2, unsigned target, RelationalCondition condition) >+{ >+ // We generate inline code for the following cases in the fast path: >+ // - int immediate to constant int immediate >+ // - constant int immediate to int immediate >+ // - int immediate to int immediate >+ >+ if (isOperandConstantChar(op1)) { >+ emitGetVirtualRegister(op2, regT0); >+ addSlowCase(branchIfNotCell(regT0)); >+ JumpList failures; >+ emitLoadCharacterString(regT0, regT0, failures); >+ addSlowCase(failures); >+ addJump(branch32(commute(condition), regT0, Imm32(asString(getConstantOperand(op1))->tryGetValue()[0])), target); >+ return; >+ } >+ if (isOperandConstantChar(op2)) { >+ emitGetVirtualRegister(op1, regT0); >+ addSlowCase(branchIfNotCell(regT0)); >+ JumpList failures; >+ emitLoadCharacterString(regT0, regT0, failures); >+ addSlowCase(failures); >+ addJump(branch32(condition, regT0, Imm32(asString(getConstantOperand(op2))->tryGetValue()[0])), target); >+ return; >+ } >+ if (isOperandConstantInt(op2)) { >+ emitGetVirtualRegister(op1, regT0); >+ emitJumpSlowCaseIfNotInt(regT0); >+ int32_t op2imm = getOperandConstantInt(op2); >+ addJump(branch32(condition, regT0, Imm32(op2imm)), target); >+ return; >+ } >+ if (isOperandConstantInt(op1)) { >+ emitGetVirtualRegister(op2, regT1); >+ emitJumpSlowCaseIfNotInt(regT1); >+ int32_t op1imm = getOperandConstantInt(op1); >+ addJump(branch32(commute(condition), regT1, Imm32(op1imm)), 
target); >+ return; >+ } >+ >+ emitGetVirtualRegisters(op1, regT0, op2, regT1); >+ emitJumpSlowCaseIfNotInt(regT0); >+ emitJumpSlowCaseIfNotInt(regT1); >+ >+ addJump(branch32(condition, regT0, regT1), target); >+} >+ >+void JIT::emit_compareUnsignedAndJump(int op1, int op2, unsigned target, RelationalCondition condition) >+{ >+ if (isOperandConstantInt(op2)) { >+ emitGetVirtualRegister(op1, regT0); >+ int32_t op2imm = getOperandConstantInt(op2); >+ addJump(branch32(condition, regT0, Imm32(op2imm)), target); >+ } else if (isOperandConstantInt(op1)) { >+ emitGetVirtualRegister(op2, regT1); >+ int32_t op1imm = getOperandConstantInt(op1); >+ addJump(branch32(commute(condition), regT1, Imm32(op1imm)), target); >+ } else { >+ emitGetVirtualRegisters(op1, regT0, op2, regT1); >+ addJump(branch32(condition, regT0, regT1), target); >+ } >+} >+ >+void JIT::emit_compareUnsigned(int dst, int op1, int op2, RelationalCondition condition) >+{ >+ if (isOperandConstantInt(op2)) { >+ emitGetVirtualRegister(op1, regT0); >+ int32_t op2imm = getOperandConstantInt(op2); >+ compare32(condition, regT0, Imm32(op2imm), regT0); >+ } else if (isOperandConstantInt(op1)) { >+ emitGetVirtualRegister(op2, regT0); >+ int32_t op1imm = getOperandConstantInt(op1); >+ compare32(commute(condition), regT0, Imm32(op1imm), regT0); >+ } else { >+ emitGetVirtualRegisters(op1, regT0, op2, regT1); >+ compare32(condition, regT0, regT1, regT0); >+ } >+ boxBoolean(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondition condition, size_t (JIT_OPERATION *operation)(ExecState*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jlesseq), OPCODE_LENGTH_op_jlesseq_equals_op_jless); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jnless), OPCODE_LENGTH_op_jnless_equals_op_jless); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jnlesseq), OPCODE_LENGTH_op_jnlesseq_equals_op_jless); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jgreater), OPCODE_LENGTH_op_jgreater_equals_op_jless); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jgreatereq), OPCODE_LENGTH_op_jgreatereq_equals_op_jless); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jngreater), OPCODE_LENGTH_op_jngreater_equals_op_jless); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jngreatereq), OPCODE_LENGTH_op_jngreatereq_equals_op_jless); >+ >+ // We generate inline code for the following cases in the slow path: >+ // - floating-point number to constant int immediate >+ // - constant int immediate to floating-point number >+ // - floating-point number to floating-point number. >+ if (isOperandConstantChar(op1) || isOperandConstantChar(op2)) { >+ linkAllSlowCases(iter); >+ >+ emitGetVirtualRegister(op1, argumentGPR0); >+ emitGetVirtualRegister(op2, argumentGPR1); >+ callOperation(operation, argumentGPR0, argumentGPR1); >+ emitJumpSlowToHot(branchTest32(invert ? 
Zero : NonZero, returnValueGPR), target); >+ return; >+ } >+ >+ if (isOperandConstantInt(op2)) { >+ linkAllSlowCases(iter); >+ >+ if (supportsFloatingPoint()) { >+ Jump fail1 = branchIfNotNumber(regT0); >+ add64(tagTypeNumberRegister, regT0); >+ move64ToDouble(regT0, fpRegT0); >+ >+ int32_t op2imm = getConstantOperand(op2).asInt32(); >+ >+ move(Imm32(op2imm), regT1); >+ convertInt32ToDouble(regT1, fpRegT1); >+ >+ emitJumpSlowToHot(branchDouble(condition, fpRegT0, fpRegT1), target); >+ >+ emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_jless)); >+ >+ fail1.link(this); >+ } >+ >+ emitGetVirtualRegister(op2, regT1); >+ callOperation(operation, regT0, regT1); >+ emitJumpSlowToHot(branchTest32(invert ? Zero : NonZero, returnValueGPR), target); >+ return; >+ } >+ >+ if (isOperandConstantInt(op1)) { >+ linkAllSlowCases(iter); >+ >+ if (supportsFloatingPoint()) { >+ Jump fail1 = branchIfNotNumber(regT1); >+ add64(tagTypeNumberRegister, regT1); >+ move64ToDouble(regT1, fpRegT1); >+ >+ int32_t op1imm = getConstantOperand(op1).asInt32(); >+ >+ move(Imm32(op1imm), regT0); >+ convertInt32ToDouble(regT0, fpRegT0); >+ >+ emitJumpSlowToHot(branchDouble(condition, fpRegT0, fpRegT1), target); >+ >+ emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_jless)); >+ >+ fail1.link(this); >+ } >+ >+ emitGetVirtualRegister(op1, regT2); >+ callOperation(operation, regT2, regT1); >+ emitJumpSlowToHot(branchTest32(invert ? Zero : NonZero, returnValueGPR), target); >+ return; >+ } >+ >+ linkSlowCase(iter); // LHS is not Int. >+ >+ if (supportsFloatingPoint()) { >+ Jump fail1 = branchIfNotNumber(regT0); >+ Jump fail2 = branchIfNotNumber(regT1); >+ Jump fail3 = branchIfInt32(regT1); >+ add64(tagTypeNumberRegister, regT0); >+ add64(tagTypeNumberRegister, regT1); >+ move64ToDouble(regT0, fpRegT0); >+ move64ToDouble(regT1, fpRegT1); >+ >+ emitJumpSlowToHot(branchDouble(condition, fpRegT0, fpRegT1), target); >+ >+ emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_jless)); >+ >+ fail1.link(this); >+ fail2.link(this); >+ fail3.link(this); >+ } >+ >+ linkSlowCase(iter); // RHS is not Int. >+ callOperation(operation, regT0, regT1); >+ emitJumpSlowToHot(branchTest32(invert ? Zero : NonZero, returnValueGPR), target); >+} >+ >+void JIT::emit_op_inc(Instruction* currentInstruction) >+{ >+ int srcDst = currentInstruction[1].u.operand; >+ >+ emitGetVirtualRegister(srcDst, regT0); >+ emitJumpSlowCaseIfNotInt(regT0); >+ addSlowCase(branchAdd32(Overflow, TrustedImm32(1), regT0)); >+ boxInt32(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(srcDst); >+} >+ >+void JIT::emit_op_dec(Instruction* currentInstruction) >+{ >+ int srcDst = currentInstruction[1].u.operand; >+ >+ emitGetVirtualRegister(srcDst, regT0); >+ emitJumpSlowCaseIfNotInt(regT0); >+ addSlowCase(branchSub32(Overflow, TrustedImm32(1), regT0)); >+ boxInt32(regT0, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(srcDst); >+} >+ >+/* ------------------------------ BEGIN: OP_MOD ------------------------------ */ >+ >+#if CPU(X86_64) >+ >+void JIT::emit_op_mod(Instruction* currentInstruction) >+{ >+ int result = currentInstruction[1].u.operand; >+ int op1 = currentInstruction[2].u.operand; >+ int op2 = currentInstruction[3].u.operand; >+ >+ // Make sure registers are correct for x86 IDIV instructions. 
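>+ // x86 idiv divides edx:eax by its operand and leaves the quotient in eax and the
>+ // remainder in edx, so the dividend has to be staged in eax and edx must be free for
>+ // the sign extension done by x86ConvertToDoubleWord32 (cdq). The slow cases below
>+ // cover division by zero, INT_MIN / -1 (which traps), and a zero remainder with a
>+ // negative dividend, which has to produce -0 and therefore cannot be boxed as an int32.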
>+ ASSERT(regT0 == X86Registers::eax); >+ auto edx = X86Registers::edx; >+ auto ecx = X86Registers::ecx; >+ ASSERT(regT4 != edx); >+ ASSERT(regT4 != ecx); >+ >+ emitGetVirtualRegisters(op1, regT4, op2, ecx); >+ emitJumpSlowCaseIfNotInt(regT4); >+ emitJumpSlowCaseIfNotInt(ecx); >+ >+ move(regT4, regT0); >+ addSlowCase(branchTest32(Zero, ecx)); >+ Jump denominatorNotNeg1 = branch32(NotEqual, ecx, TrustedImm32(-1)); >+ addSlowCase(branch32(Equal, regT0, TrustedImm32(-2147483647-1))); >+ denominatorNotNeg1.link(this); >+ x86ConvertToDoubleWord32(); >+ x86Div32(ecx); >+ Jump numeratorPositive = branch32(GreaterThanOrEqual, regT4, TrustedImm32(0)); >+ addSlowCase(branchTest32(Zero, edx)); >+ numeratorPositive.link(this); >+ boxInt32(edx, JSValueRegs { regT0 }); >+ emitPutVirtualRegister(result); >+} >+ >+void JIT::emitSlow_op_mod(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_mod); >+ slowPathCall.call(); >+} >+ >+#else // CPU(X86_64) >+ >+void JIT::emit_op_mod(Instruction* currentInstruction) >+{ >+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_mod); >+ slowPathCall.call(); >+} >+ >+void JIT::emitSlow_op_mod(Instruction*, Vector<SlowCaseEntry>::iterator&) >+{ >+ UNREACHABLE_FOR_PLATFORM(); >+} >+ >+#endif // CPU(X86_64) >+ >+/* ------------------------------ END: OP_MOD ------------------------------ */ >+ >+void JIT::emit_op_get_by_val(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >+ >+ emitGetVirtualRegister(base, regT0); >+ bool propertyNameIsIntegerConstant = isOperandConstantInt(property); >+ if (propertyNameIsIntegerConstant) >+ move(Imm32(getOperandConstantInt(property)), regT1); >+ else >+ emitGetVirtualRegister(property, regT1); >+ >+ emitJumpSlowCaseIfNotJSCell(regT0, base); >+ >+ PatchableJump notIndex; >+ if (!propertyNameIsIntegerConstant) { >+ notIndex = emitPatchableJumpIfNotInt(regT1); >+ addSlowCase(notIndex); >+ >+ // This is technically incorrect - we're zero-extending an int32. On the hot path this doesn't matter. >+ // We check the value as if it was a uint32 against the m_vectorLength - which will always fail if >+ // number was signed since m_vectorLength is always less than intmax (since the total allocation >+ // size is always less than 4Gb). As such zero extending will have been correct (and extending the value >+ // to 64-bits is necessary since it's used in the address calculation). We zero extend rather than sign >+ // extending since it makes it easier to re-tag the value in the slow case. 
>+ zeroExtend32ToPtr(regT1, regT1); >+ } >+ >+ emitArrayProfilingSiteWithCell(regT0, regT2, profile); >+ and32(TrustedImm32(IndexingShapeMask), regT2); >+ >+ PatchableJump badType; >+ JumpList slowCases; >+ >+ JITArrayMode mode = chooseArrayMode(profile); >+ switch (mode) { >+ case JITInt32: >+ slowCases = emitInt32GetByVal(currentInstruction, badType); >+ break; >+ case JITDouble: >+ slowCases = emitDoubleGetByVal(currentInstruction, badType); >+ break; >+ case JITContiguous: >+ slowCases = emitContiguousGetByVal(currentInstruction, badType); >+ break; >+ case JITArrayStorage: >+ slowCases = emitArrayStorageGetByVal(currentInstruction, badType); >+ break; >+ default: >+ CRASH(); >+ break; >+ } >+ >+ addSlowCase(badType); >+ addSlowCase(slowCases); >+ >+ Label done = label(); >+ >+ if (!ASSERT_DISABLED) { >+ Jump resultOK = branchIfNotEmpty(regT0); >+ abortWithReason(JITGetByValResultIsNotEmpty); >+ resultOK.link(this); >+ } >+ >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(dst); >+ >+ Label nextHotPath = label(); >+ >+ m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, notIndex, badType, mode, profile, done, nextHotPath)); >+} >+ >+JIT::JumpList JIT::emitDoubleLoad(Instruction*, PatchableJump& badType) >+{ >+ JumpList slowCases; >+ >+ badType = patchableBranch32(NotEqual, regT2, TrustedImm32(DoubleShape)); >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >+ slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength()))); >+ loadDouble(BaseIndex(regT2, regT1, TimesEight), fpRegT0); >+ slowCases.append(branchDouble(DoubleNotEqualOrUnordered, fpRegT0, fpRegT0)); >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitContiguousLoad(Instruction*, PatchableJump& badType, IndexingType expectedShape) >+{ >+ JumpList slowCases; >+ >+ badType = patchableBranch32(NotEqual, regT2, TrustedImm32(expectedShape)); >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >+ slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength()))); >+ load64(BaseIndex(regT2, regT1, TimesEight), regT0); >+ slowCases.append(branchTest64(Zero, regT0)); >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitArrayStorageLoad(Instruction*, PatchableJump& badType) >+{ >+ JumpList slowCases; >+ >+ add32(TrustedImm32(-ArrayStorageShape), regT2, regT3); >+ badType = patchableBranch32(Above, regT3, TrustedImm32(SlowPutArrayStorageShape - ArrayStorageShape)); >+ >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >+ slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, ArrayStorage::vectorLengthOffset()))); >+ >+ load64(BaseIndex(regT2, regT1, TimesEight, ArrayStorage::vectorOffset()), regT0); >+ slowCases.append(branchTest64(Zero, regT0)); >+ >+ return slowCases; >+} >+ >+JITGetByIdGenerator JIT::emitGetByValWithCachedId(ByValInfo* byValInfo, Instruction* currentInstruction, const Identifier& propertyName, Jump& fastDoneCase, Jump& slowDoneCase, JumpList& slowCases) >+{ >+ // base: regT0 >+ // property: regT1 >+ // scratch: regT3 >+ >+ int dst = currentInstruction[1].u.operand; >+ >+ slowCases.append(branchIfNotCell(regT1)); >+ emitByValIdentifierCheck(byValInfo, regT1, regT3, propertyName, slowCases); >+ >+ JITGetByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >+ propertyName.impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get); >+ gen.generateFastPath(*this); >+ >+ 
fastDoneCase = jump(); >+ >+ Label coldPathBegin = label(); >+ gen.slowPathJump().link(this); >+ >+ Call call = callOperationWithProfile(operationGetByIdOptimize, dst, gen.stubInfo(), regT0, propertyName.impl()); >+ gen.reportSlowPathCall(coldPathBegin, call); >+ slowDoneCase = jump(); >+ >+ return gen; >+} >+ >+void JIT::emitSlow_op_get_by_val(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >+ >+ linkSlowCaseIfNotJSCell(iter, base); // base cell check >+ >+ if (!isOperandConstantInt(property)) >+ linkSlowCase(iter); // property int32 check >+ Jump nonCell = jump(); >+ linkSlowCase(iter); // base array check >+ Jump notString = branchIfNotString(regT0); >+ emitNakedCall(CodeLocationLabel<NoPtrTag>(m_vm->getCTIStub(stringGetByValGenerator).retaggedCode<NoPtrTag>())); >+ Jump failed = branchTest64(Zero, regT0); >+ emitPutVirtualRegister(dst, regT0); >+ emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_get_by_val)); >+ failed.link(this); >+ notString.link(this); >+ nonCell.link(this); >+ >+ linkSlowCase(iter); // vector length check >+ linkSlowCase(iter); // empty value >+ >+ Label slowPath = label(); >+ >+ emitGetVirtualRegister(base, regT0); >+ emitGetVirtualRegister(property, regT1); >+ Call call = callOperation(operationGetByValOptimize, dst, regT0, regT1, byValInfo); >+ >+ m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >+ m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >+ m_byValInstructionIndex++; >+ >+ emitValueProfilingSite(); >+} >+ >+void JIT::emit_op_put_by_val(Instruction* currentInstruction) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >+ >+ emitGetVirtualRegister(base, regT0); >+ bool propertyNameIsIntegerConstant = isOperandConstantInt(property); >+ if (propertyNameIsIntegerConstant) >+ move(Imm32(getOperandConstantInt(property)), regT1); >+ else >+ emitGetVirtualRegister(property, regT1); >+ >+ emitJumpSlowCaseIfNotJSCell(regT0, base); >+ PatchableJump notIndex; >+ if (!propertyNameIsIntegerConstant) { >+ notIndex = emitPatchableJumpIfNotInt(regT1); >+ addSlowCase(notIndex); >+ // See comment in op_get_by_val. >+ zeroExtend32ToPtr(regT1, regT1); >+ } >+ emitArrayProfilingSiteWithCell(regT0, regT2, profile); >+ >+ PatchableJump badType; >+ JumpList slowCases; >+ >+ // TODO: Maybe we should do this inline? 
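>+ // Copy-on-write arrays share their butterfly, so any indexed store has to materialize
>+ // a writable copy first; the fast path just bails out whenever the profiled indexing
>+ // mode has the CopyOnWrite bit set.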
>+ addSlowCase(branchTest32(NonZero, regT2, TrustedImm32(CopyOnWrite))); >+ and32(TrustedImm32(IndexingShapeMask), regT2); >+ >+ JITArrayMode mode = chooseArrayMode(profile); >+ switch (mode) { >+ case JITInt32: >+ slowCases = emitInt32PutByVal(currentInstruction, badType); >+ break; >+ case JITDouble: >+ slowCases = emitDoublePutByVal(currentInstruction, badType); >+ break; >+ case JITContiguous: >+ slowCases = emitContiguousPutByVal(currentInstruction, badType); >+ break; >+ case JITArrayStorage: >+ slowCases = emitArrayStoragePutByVal(currentInstruction, badType); >+ break; >+ default: >+ CRASH(); >+ break; >+ } >+ >+ addSlowCase(badType); >+ addSlowCase(slowCases); >+ >+ Label done = label(); >+ >+ m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, notIndex, badType, mode, profile, done, done)); >+} >+ >+JIT::JumpList JIT::emitGenericContiguousPutByVal(Instruction* currentInstruction, PatchableJump& badType, IndexingType indexingShape) >+{ >+ int value = currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ >+ JumpList slowCases; >+ >+ badType = patchableBranch32(NotEqual, regT2, TrustedImm32(indexingShape)); >+ >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >+ Jump outOfBounds = branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength())); >+ >+ Label storeResult = label(); >+ emitGetVirtualRegister(value, regT3); >+ switch (indexingShape) { >+ case Int32Shape: >+ slowCases.append(branchIfNotInt32(regT3)); >+ store64(regT3, BaseIndex(regT2, regT1, TimesEight)); >+ break; >+ case DoubleShape: { >+ Jump notInt = branchIfNotInt32(regT3); >+ convertInt32ToDouble(regT3, fpRegT0); >+ Jump ready = jump(); >+ notInt.link(this); >+ add64(tagTypeNumberRegister, regT3); >+ move64ToDouble(regT3, fpRegT0); >+ slowCases.append(branchDouble(DoubleNotEqualOrUnordered, fpRegT0, fpRegT0)); >+ ready.link(this); >+ storeDouble(fpRegT0, BaseIndex(regT2, regT1, TimesEight)); >+ break; >+ } >+ case ContiguousShape: >+ store64(regT3, BaseIndex(regT2, regT1, TimesEight)); >+ emitWriteBarrier(currentInstruction[1].u.operand, value, ShouldFilterValue); >+ break; >+ default: >+ CRASH(); >+ break; >+ } >+ >+ Jump done = jump(); >+ outOfBounds.link(this); >+ >+ slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfVectorLength()))); >+ >+ emitArrayProfileStoreToHoleSpecialCase(profile); >+ >+ add32(TrustedImm32(1), regT1, regT3); >+ store32(regT3, Address(regT2, Butterfly::offsetOfPublicLength())); >+ jump().linkTo(storeResult, this); >+ >+ done.link(this); >+ >+ return slowCases; >+} >+ >+JIT::JumpList JIT::emitArrayStoragePutByVal(Instruction* currentInstruction, PatchableJump& badType) >+{ >+ int value = currentInstruction[3].u.operand; >+ ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >+ >+ JumpList slowCases; >+ >+ badType = patchableBranch32(NotEqual, regT2, TrustedImm32(ArrayStorageShape)); >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >+ slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, ArrayStorage::vectorLengthOffset()))); >+ >+ Jump empty = branchTest64(Zero, BaseIndex(regT2, regT1, TimesEight, ArrayStorage::vectorOffset())); >+ >+ Label storeResult(this); >+ emitGetVirtualRegister(value, regT3); >+ store64(regT3, BaseIndex(regT2, regT1, TimesEight, ArrayStorage::vectorOffset())); >+ emitWriteBarrier(currentInstruction[1].u.operand, value, ShouldFilterValue); >+ Jump end = jump(); >+ >+ empty.link(this); >+ 
emitArrayProfileStoreToHoleSpecialCase(profile); >+ add32(TrustedImm32(1), Address(regT2, ArrayStorage::numValuesInVectorOffset())); >+ branch32(Below, regT1, Address(regT2, ArrayStorage::lengthOffset())).linkTo(storeResult, this); >+ >+ add32(TrustedImm32(1), regT1); >+ store32(regT1, Address(regT2, ArrayStorage::lengthOffset())); >+ sub32(TrustedImm32(1), regT1); >+ jump().linkTo(storeResult, this); >+ >+ end.link(this); >+ >+ return slowCases; >+} >+ >+JITPutByIdGenerator JIT::emitPutByValWithCachedId(ByValInfo* byValInfo, Instruction* currentInstruction, PutKind putKind, const Identifier& propertyName, JumpList& doneCases, JumpList& slowCases) >+{ >+ // base: regT0 >+ // property: regT1 >+ // scratch: regT2 >+ >+ int base = currentInstruction[1].u.operand; >+ int value = currentInstruction[3].u.operand; >+ >+ slowCases.append(branchIfNotCell(regT1)); >+ emitByValIdentifierCheck(byValInfo, regT1, regT1, propertyName, slowCases); >+ >+ // Write barrier breaks the registers. So after issuing the write barrier, >+ // reload the registers. >+ emitGetVirtualRegisters(base, regT0, value, regT1); >+ >+ JITPutByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >+ JSValueRegs(regT0), JSValueRegs(regT1), regT2, m_codeBlock->ecmaMode(), putKind); >+ gen.generateFastPath(*this); >+ emitWriteBarrier(base, value, ShouldFilterBase); >+ doneCases.append(jump()); >+ >+ Label coldPathBegin = label(); >+ gen.slowPathJump().link(this); >+ >+ Call call = callOperation(gen.slowPathFunction(), gen.stubInfo(), regT1, regT0, propertyName.impl()); >+ gen.reportSlowPathCall(coldPathBegin, call); >+ doneCases.append(jump()); >+ >+ return gen; >+} >+ >+void JIT::emitSlow_op_put_by_val(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ int base = currentInstruction[1].u.operand; >+ int property = currentInstruction[2].u.operand; >+ int value = currentInstruction[3].u.operand; >+ ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >+ >+ linkAllSlowCases(iter); >+ Label slowPath = label(); >+ >+ emitGetVirtualRegister(base, regT0); >+ emitGetVirtualRegister(property, regT1); >+ emitGetVirtualRegister(value, regT2); >+ bool isDirect = Interpreter::getOpcodeID(currentInstruction->u.opcode) == op_put_by_val_direct; >+ Call call = callOperation(isDirect ? 
operationDirectPutByValOptimize : operationPutByValOptimize, regT0, regT1, regT2, byValInfo); >+ >+ m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >+ m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >+ m_byValInstructionIndex++; >+} >+ >+void JIT::emit_op_put_getter_by_id(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >+ int32_t options = currentInstruction[3].u.operand; >+ emitGetVirtualRegister(currentInstruction[4].u.operand, regT1); >+ callOperation(operationPutGetterById, regT0, m_codeBlock->identifier(currentInstruction[2].u.operand).impl(), options, regT1); >+} >+ >+void JIT::emit_op_put_setter_by_id(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >+ int32_t options = currentInstruction[3].u.operand; >+ emitGetVirtualRegister(currentInstruction[4].u.operand, regT1); >+ callOperation(operationPutSetterById, regT0, m_codeBlock->identifier(currentInstruction[2].u.operand).impl(), options, regT1); >+} >+ >+void JIT::emit_op_put_getter_setter_by_id(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >+ int32_t attribute = currentInstruction[3].u.operand; >+ emitGetVirtualRegister(currentInstruction[4].u.operand, regT1); >+ emitGetVirtualRegister(currentInstruction[5].u.operand, regT2); >+ callOperation(operationPutGetterSetter, regT0, m_codeBlock->identifier(currentInstruction[2].u.operand).impl(), attribute, regT1, regT2); >+} >+ >+void JIT::emit_op_put_getter_by_val(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >+ emitGetVirtualRegister(currentInstruction[2].u.operand, regT1); >+ int32_t attributes = currentInstruction[3].u.operand; >+ emitGetVirtualRegister(currentInstruction[4].u.operand, regT2); >+ callOperation(operationPutGetterByVal, regT0, regT1, attributes, regT2); >+} >+ >+void JIT::emit_op_put_setter_by_val(Instruction* currentInstruction) >+{ >+ emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >+ emitGetVirtualRegister(currentInstruction[2].u.operand, regT1); >+ int32_t attributes = currentInstruction[3].u.operand; >+ emitGetVirtualRegister(currentInstruction[4].u.operand, regT2); >+ callOperation(operationPutSetterByVal, regT0, regT1, attributes, regT2); >+} >+ >+void JIT::emit_op_del_by_id(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ emitGetVirtualRegister(base, regT0); >+ callOperation(operationDeleteByIdJSResult, dst, regT0, m_codeBlock->identifier(property).impl()); >+} >+ >+void JIT::emit_op_del_by_val(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int base = currentInstruction[2].u.operand; >+ int property = currentInstruction[3].u.operand; >+ emitGetVirtualRegister(base, regT0); >+ emitGetVirtualRegister(property, regT1); >+ callOperation(operationDeleteByValJSResult, dst, regT0, regT1); >+} >+ >+void JIT::emit_op_try_get_by_id(Instruction* currentInstruction) >+{ >+ int resultVReg = currentInstruction[1].u.operand; >+ int baseVReg = currentInstruction[2].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ emitGetVirtualRegister(baseVReg, regT0); >+ >+ emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >+ >+ JITGetByIdGenerator gen( >+ m_codeBlock, 
CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::TryGet); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_getByIds.append(gen); >+ >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(resultVReg); >+} >+ >+void JIT::emitSlow_op_try_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperation(operationTryGetByIdOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emit_op_get_by_id_direct(Instruction* currentInstruction) >+{ >+ int resultVReg = currentInstruction[1].u.operand; >+ int baseVReg = currentInstruction[2].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ emitGetVirtualRegister(baseVReg, regT0); >+ >+ emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >+ >+ JITGetByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetDirect); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_getByIds.append(gen); >+ >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(resultVReg); >+} >+ >+void JIT::emitSlow_op_get_by_id_direct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperationWithProfile(operationGetByIdDirectOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emit_op_get_by_id(Instruction* currentInstruction) >+{ >+ int resultVReg = currentInstruction[1].u.operand; >+ int baseVReg = currentInstruction[2].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ emitGetVirtualRegister(baseVReg, regT0); >+ >+ emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >+ >+ if (*ident == m_vm->propertyNames->length && shouldEmitProfiling()) >+ emitArrayProfilingSiteForBytecodeIndexWithCell(regT0, regT1, m_bytecodeOffset); >+ >+ JITGetByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_getByIds.append(gen); >+ >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(resultVReg); >+} >+ >+void JIT::emit_op_get_by_id_with_this(Instruction* currentInstruction) >+{ >+ int resultVReg = currentInstruction[1].u.operand; >+ int baseVReg = currentInstruction[2].u.operand; >+ int thisVReg = currentInstruction[3].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[4].u.operand)); >+ >+ emitGetVirtualRegister(baseVReg, regT0); >+ 
emitGetVirtualRegister(thisVReg, regT1); >+ emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >+ emitJumpSlowCaseIfNotJSCell(regT1, thisVReg); >+ >+ JITGetByIdWithThisGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), JSValueRegs(regT1), AccessType::GetWithThis); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_getByIdsWithThis.append(gen); >+ >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(resultVReg); >+} >+ >+void JIT::emitSlow_op_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperationWithProfile(operationGetByIdOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emitSlow_op_get_by_id_with_this(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[4].u.operand)); >+ >+ JITGetByIdWithThisGenerator& gen = m_getByIdsWithThis[m_getByIdWithThisIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperationWithProfile(operationGetByIdWithThisOptimize, resultVReg, gen.stubInfo(), regT0, regT1, ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emit_op_put_by_id(Instruction* currentInstruction) >+{ >+ int baseVReg = currentInstruction[1].u.operand; >+ int valueVReg = currentInstruction[3].u.operand; >+ unsigned direct = currentInstruction[8].u.putByIdFlags & PutByIdIsDirect; >+ >+ // In order to be able to patch both the Structure, and the object offset, we store one pointer, >+ // to just after the arguments have been loaded into registers 'hotPathBegin', and we generate code >+ // such that the Structure & offset are always at the same distance from this. >+ >+ emitGetVirtualRegisters(baseVReg, regT0, valueVReg, regT1); >+ >+ emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >+ >+ JITPutByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >+ JSValueRegs(regT0), JSValueRegs(regT1), regT2, m_codeBlock->ecmaMode(), >+ direct ? 
Direct : NotDirect); >+ >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ >+ emitWriteBarrier(baseVReg, valueVReg, ShouldFilterBase); >+ >+ m_putByIds.append(gen); >+} >+ >+void JIT::emitSlow_op_put_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[2].u.operand)); >+ >+ Label coldPathBegin(this); >+ >+ JITPutByIdGenerator& gen = m_putByIds[m_putByIdIndex++]; >+ >+ Call call = callOperation(gen.slowPathFunction(), gen.stubInfo(), regT1, regT0, ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emit_op_in_by_id(Instruction* currentInstruction) >+{ >+ int resultVReg = currentInstruction[1].u.operand; >+ int baseVReg = currentInstruction[2].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ emitGetVirtualRegister(baseVReg, regT0); >+ >+ emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >+ >+ JITInByIdGenerator gen( >+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >+ ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0)); >+ gen.generateFastPath(*this); >+ addSlowCase(gen.slowPathJump()); >+ m_inByIds.append(gen); >+ >+ emitPutVirtualRegister(resultVReg); >+} >+ >+void JIT::emitSlow_op_in_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int resultVReg = currentInstruction[1].u.operand; >+ const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >+ >+ JITInByIdGenerator& gen = m_inByIds[m_inByIdIndex++]; >+ >+ Label coldPathBegin = label(); >+ >+ Call call = callOperation(operationInByIdOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl()); >+ >+ gen.reportSlowPathCall(coldPathBegin, call); >+} >+ >+void JIT::emitVarInjectionCheck(bool needsVarInjectionChecks) >+{ >+ if (!needsVarInjectionChecks) >+ return; >+ addSlowCase(branch8(Equal, AbsoluteAddress(m_codeBlock->globalObject()->varInjectionWatchpoint()->addressOfState()), TrustedImm32(IsInvalidated))); >+} >+ >+void JIT::emitResolveClosure(int dst, int scope, bool needsVarInjectionChecks, unsigned depth) >+{ >+ emitVarInjectionCheck(needsVarInjectionChecks); >+ emitGetVirtualRegister(scope, regT0); >+ for (unsigned i = 0; i < depth; ++i) >+ loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_resolve_scope(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int scope = currentInstruction[2].u.operand; >+ ResolveType resolveType = static_cast<ResolveType>(copiedInstruction(currentInstruction)[4].u.operand); >+ unsigned depth = currentInstruction[5].u.operand; >+ >+ auto emitCode = [&] (ResolveType resolveType) { >+ switch (resolveType) { >+ case GlobalProperty: >+ case GlobalVar: >+ case GlobalPropertyWithVarInjectionChecks: >+ case GlobalVarWithVarInjectionChecks: >+ case GlobalLexicalVar: >+ case GlobalLexicalVarWithVarInjectionChecks: { >+ JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_codeBlock); >+ RELEASE_ASSERT(constantScope); >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ move(TrustedImmPtr(constantScope), regT0); >+ emitPutVirtualRegister(dst); >+ break; >+ } >+ case ClosureVar: >+ case ClosureVarWithVarInjectionChecks: >+ emitResolveClosure(dst, scope, 
needsVarInjectionChecks(resolveType), depth);
>+ break;
>+ case ModuleVar:
>+ move(TrustedImmPtr(currentInstruction[6].u.jsCell.get()), regT0);
>+ emitPutVirtualRegister(dst);
>+ break;
>+ case Dynamic:
>+ addSlowCase(jump());
>+ break;
>+ case LocalClosureVar:
>+ case UnresolvedProperty:
>+ case UnresolvedPropertyWithVarInjectionChecks:
>+ RELEASE_ASSERT_NOT_REACHED();
>+ }
>+ };
>+
>+ switch (resolveType) {
>+ case UnresolvedProperty:
>+ case UnresolvedPropertyWithVarInjectionChecks: {
>+ JumpList skipToEnd;
>+ load32(&currentInstruction[4], regT0);
>+
>+ Jump notGlobalProperty = branch32(NotEqual, regT0, TrustedImm32(GlobalProperty));
>+ emitCode(GlobalProperty);
>+ skipToEnd.append(jump());
>+ notGlobalProperty.link(this);
>+
>+ Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks));
>+ emitCode(GlobalPropertyWithVarInjectionChecks);
>+ skipToEnd.append(jump());
>+ notGlobalPropertyWithVarInjections.link(this);
>+
>+ Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar));
>+ emitCode(GlobalLexicalVar);
>+ skipToEnd.append(jump());
>+ notGlobalLexicalVar.link(this);
>+
>+ Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks));
>+ emitCode(GlobalLexicalVarWithVarInjectionChecks);
>+ skipToEnd.append(jump());
>+ notGlobalLexicalVarWithVarInjections.link(this);
>+
>+ addSlowCase(jump());
>+ skipToEnd.link(this);
>+ break;
>+ }
>+
>+ default:
>+ emitCode(resolveType);
>+ break;
>+ }
>+}
>+
>+void JIT::emitLoadWithStructureCheck(int scope, Structure** structureSlot)
>+{
>+ loadPtr(structureSlot, regT1);
>+ emitGetVirtualRegister(scope, regT0);
>+ addSlowCase(branchTestPtr(Zero, regT1));
>+ load32(Address(regT1, Structure::structureIDOffset()), regT1);
>+ addSlowCase(branch32(NotEqual, Address(regT0, JSCell::structureIDOffset()), regT1));
>+}
>+
>+void JIT::emitGetVarFromPointer(JSValue* operand, GPRReg reg)
>+{
>+ loadPtr(operand, reg);
>+}
>+
>+void JIT::emitGetVarFromIndirectPointer(JSValue** operand, GPRReg reg)
>+{
>+ loadPtr(operand, reg);
>+ loadPtr(reg, reg);
>+}
>+
>+void JIT::emitGetClosureVar(int scope, uintptr_t operand)
>+{
>+ emitGetVirtualRegister(scope, regT0);
>+ loadPtr(Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register)), regT0);
>+}
>+
>+void JIT::emit_op_get_from_scope(Instruction* currentInstruction)
>+{
>+ int dst = currentInstruction[1].u.operand;
>+ int scope = currentInstruction[2].u.operand;
>+ ResolveType resolveType = GetPutInfo(copiedInstruction(currentInstruction)[4].u.operand).resolveType();
>+ Structure** structureSlot = currentInstruction[5].u.structure.slot();
>+ uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&currentInstruction[6].u.pointer);
>+
>+ auto emitCode = [&] (ResolveType resolveType, bool indirectLoadForOperand) {
>+ switch (resolveType) {
>+ case GlobalProperty:
>+ case GlobalPropertyWithVarInjectionChecks: {
>+ emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection since we don't cache structures for anything but the GlobalObject. Additionally, resolve_scope handles checking for the var injection.
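>+ // For GlobalProperty the operand slot caches the property's offset. Only out-of-line
>+ // offsets are expected here (hence the assertion below); out-of-line properties are
>+ // stored at negative indices off the butterfly, which is why the offset is negated
>+ // before the biased (firstOutOfLineOffset - 2) load.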
>+ GPRReg base = regT0; >+ GPRReg result = regT0; >+ GPRReg offset = regT1; >+ GPRReg scratch = regT2; >+ >+ jitAssert(scopedLambda<Jump(void)>([&] () -> Jump { >+ return branchPtr(Equal, base, TrustedImmPtr(m_codeBlock->globalObject())); >+ })); >+ >+ load32(operandSlot, offset); >+ if (!ASSERT_DISABLED) { >+ Jump isOutOfLine = branch32(GreaterThanOrEqual, offset, TrustedImm32(firstOutOfLineOffset)); >+ abortWithReason(JITOffsetIsNotOutOfLine); >+ isOutOfLine.link(this); >+ } >+ loadPtr(Address(base, JSObject::butterflyOffset()), scratch); >+ neg32(offset); >+ signExtend32ToPtr(offset, offset); >+ load64(BaseIndex(scratch, offset, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue)), result); >+ break; >+ } >+ case GlobalVar: >+ case GlobalVarWithVarInjectionChecks: >+ case GlobalLexicalVar: >+ case GlobalLexicalVarWithVarInjectionChecks: >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ if (indirectLoadForOperand) >+ emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT0); >+ else >+ emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT0); >+ if (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks) // TDZ check. >+ addSlowCase(branchIfEmpty(regT0)); >+ break; >+ case ClosureVar: >+ case ClosureVarWithVarInjectionChecks: >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ emitGetClosureVar(scope, *operandSlot); >+ break; >+ case Dynamic: >+ addSlowCase(jump()); >+ break; >+ case LocalClosureVar: >+ case ModuleVar: >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ }; >+ >+ switch (resolveType) { >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: { >+ JumpList skipToEnd; >+ load32(&currentInstruction[4], regT0); >+ and32(TrustedImm32(GetPutInfo::typeBits), regT0); // Load ResolveType into T0 >+ >+ Jump isGlobalProperty = branch32(Equal, regT0, TrustedImm32(GlobalProperty)); >+ Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks)); >+ isGlobalProperty.link(this); >+ emitCode(GlobalProperty, false); >+ skipToEnd.append(jump()); >+ notGlobalPropertyWithVarInjections.link(this); >+ >+ Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar)); >+ emitCode(GlobalLexicalVar, true); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVar.link(this); >+ >+ Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks)); >+ emitCode(GlobalLexicalVarWithVarInjectionChecks, true); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVarWithVarInjections.link(this); >+ >+ addSlowCase(jump()); >+ >+ skipToEnd.link(this); >+ break; >+ } >+ >+ default: >+ emitCode(resolveType, false); >+ break; >+ } >+ emitPutVirtualRegister(dst); >+ emitValueProfilingSite(); >+} >+ >+void JIT::emitSlow_op_get_from_scope(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ int dst = currentInstruction[1].u.operand; >+ callOperationWithProfile(operationGetFromScope, dst, currentInstruction); >+} >+ >+void JIT::emitPutGlobalVariable(JSValue* operand, int value, WatchpointSet* set) >+{ >+ emitGetVirtualRegister(value, regT0); >+ emitNotifyWrite(set); >+ storePtr(regT0, operand); >+} >+void JIT::emitPutGlobalVariableIndirect(JSValue** addressOfOperand, int value, WatchpointSet** indirectWatchpointSet) >+{ >+ 
emitGetVirtualRegister(value, regT0); >+ loadPtr(indirectWatchpointSet, regT1); >+ emitNotifyWrite(regT1); >+ loadPtr(addressOfOperand, regT1); >+ storePtr(regT0, regT1); >+} >+ >+void JIT::emitPutClosureVar(int scope, uintptr_t operand, int value, WatchpointSet* set) >+{ >+ emitGetVirtualRegister(value, regT1); >+ emitGetVirtualRegister(scope, regT0); >+ emitNotifyWrite(set); >+ storePtr(regT1, Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register))); >+} >+ >+void JIT::emit_op_put_to_scope(Instruction* currentInstruction) >+{ >+ int scope = currentInstruction[1].u.operand; >+ int value = currentInstruction[3].u.operand; >+ GetPutInfo getPutInfo = GetPutInfo(copiedInstruction(currentInstruction)[4].u.operand); >+ ResolveType resolveType = getPutInfo.resolveType(); >+ Structure** structureSlot = currentInstruction[5].u.structure.slot(); >+ uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&currentInstruction[6].u.pointer); >+ >+ auto emitCode = [&] (ResolveType resolveType, bool indirectLoadForOperand) { >+ switch (resolveType) { >+ case GlobalProperty: >+ case GlobalPropertyWithVarInjectionChecks: { >+ emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection since we don't cache structures for anything but the GlobalObject. Additionally, resolve_scope handles checking for the var injection. >+ emitGetVirtualRegister(value, regT2); >+ >+ jitAssert(scopedLambda<Jump(void)>([&] () -> Jump { >+ return branchPtr(Equal, regT0, TrustedImmPtr(m_codeBlock->globalObject())); >+ })); >+ >+ loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0); >+ loadPtr(operandSlot, regT1); >+ negPtr(regT1); >+ storePtr(regT2, BaseIndex(regT0, regT1, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue))); >+ emitWriteBarrier(m_codeBlock->globalObject(), value, ShouldFilterValue); >+ break; >+ } >+ case GlobalVar: >+ case GlobalVarWithVarInjectionChecks: >+ case GlobalLexicalVar: >+ case GlobalLexicalVarWithVarInjectionChecks: { >+ JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_codeBlock); >+ RELEASE_ASSERT(constantScope); >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ if (!isInitialization(getPutInfo.initializationMode()) && (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks)) { >+ // We need to do a TDZ check here because we can't always prove we need to emit TDZ checks statically. 
>+ if (indirectLoadForOperand) >+ emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT0); >+ else >+ emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT0); >+ addSlowCase(branchIfEmpty(regT0)); >+ } >+ if (indirectLoadForOperand) >+ emitPutGlobalVariableIndirect(bitwise_cast<JSValue**>(operandSlot), value, bitwise_cast<WatchpointSet**>(&currentInstruction[5])); >+ else >+ emitPutGlobalVariable(bitwise_cast<JSValue*>(*operandSlot), value, currentInstruction[5].u.watchpointSet); >+ emitWriteBarrier(constantScope, value, ShouldFilterValue); >+ break; >+ } >+ case LocalClosureVar: >+ case ClosureVar: >+ case ClosureVarWithVarInjectionChecks: >+ emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >+ emitPutClosureVar(scope, *operandSlot, value, currentInstruction[5].u.watchpointSet); >+ emitWriteBarrier(scope, value, ShouldFilterValue); >+ break; >+ case ModuleVar: >+ case Dynamic: >+ addSlowCase(jump()); >+ break; >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: >+ RELEASE_ASSERT_NOT_REACHED(); >+ break; >+ } >+ }; >+ >+ switch (resolveType) { >+ case UnresolvedProperty: >+ case UnresolvedPropertyWithVarInjectionChecks: { >+ JumpList skipToEnd; >+ load32(&currentInstruction[4], regT0); >+ and32(TrustedImm32(GetPutInfo::typeBits), regT0); // Load ResolveType into T0 >+ >+ Jump isGlobalProperty = branch32(Equal, regT0, TrustedImm32(GlobalProperty)); >+ Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks)); >+ isGlobalProperty.link(this); >+ emitCode(GlobalProperty, false); >+ skipToEnd.append(jump()); >+ notGlobalPropertyWithVarInjections.link(this); >+ >+ Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar)); >+ emitCode(GlobalLexicalVar, true); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVar.link(this); >+ >+ Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks)); >+ emitCode(GlobalLexicalVarWithVarInjectionChecks, true); >+ skipToEnd.append(jump()); >+ notGlobalLexicalVarWithVarInjections.link(this); >+ >+ addSlowCase(jump()); >+ >+ skipToEnd.link(this); >+ break; >+ } >+ >+ default: >+ emitCode(resolveType, false); >+ break; >+ } >+} >+ >+void JIT::emitSlow_op_put_to_scope(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ GetPutInfo getPutInfo = GetPutInfo(copiedInstruction(currentInstruction)[4].u.operand); >+ ResolveType resolveType = getPutInfo.resolveType(); >+ if (resolveType == ModuleVar) { >+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_throw_strict_mode_readonly_property_write_error); >+ slowPathCall.call(); >+ } else >+ callOperation(operationPutToScope, currentInstruction); >+} >+ >+void JIT::emit_op_get_from_arguments(Instruction* currentInstruction) >+{ >+ int dst = currentInstruction[1].u.operand; >+ int arguments = currentInstruction[2].u.operand; >+ int index = currentInstruction[3].u.operand; >+ >+ emitGetVirtualRegister(arguments, regT0); >+ load64(Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>)), regT0); >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::emit_op_put_to_arguments(Instruction* currentInstruction) >+{ >+ int arguments = currentInstruction[1].u.operand; >+ int index = currentInstruction[2].u.operand; >+ int value = currentInstruction[3].u.operand; >+ >+ emitGetVirtualRegister(arguments, 
regT0); >+ emitGetVirtualRegister(value, regT1); >+ store64(regT1, Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>))); >+ >+ emitWriteBarrier(arguments, value, ShouldFilterValue); >+} >+ >+void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode) >+{ >+ Jump valueNotCell; >+ if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) { >+ emitGetVirtualRegister(value, regT0); >+ valueNotCell = branchIfNotCell(regT0); >+ } >+ >+ emitGetVirtualRegister(owner, regT0); >+ Jump ownerNotCell; >+ if (mode == ShouldFilterBaseAndValue || mode == ShouldFilterBase) >+ ownerNotCell = branchIfNotCell(regT0); >+ >+ Jump ownerIsRememberedOrInEden = barrierBranch(*vm(), regT0, regT1); >+ callOperation(operationWriteBarrierSlowPath, regT0); >+ ownerIsRememberedOrInEden.link(this); >+ >+ if (mode == ShouldFilterBaseAndValue || mode == ShouldFilterBase) >+ ownerNotCell.link(this); >+ if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) >+ valueNotCell.link(this); >+} >+ >+void JIT::emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode mode) >+{ >+ emitGetVirtualRegister(value, regT0); >+ Jump valueNotCell; >+ if (mode == ShouldFilterValue) >+ valueNotCell = branchIfNotCell(regT0); >+ >+ emitWriteBarrier(owner); >+ >+ if (mode == ShouldFilterValue) >+ valueNotCell.link(this); >+} >+ >+void JIT::emitPutCallResult(Instruction* instruction) >+{ >+ int dst = instruction[1].u.operand; >+ emitValueProfilingSite(); >+ emitPutVirtualRegister(dst); >+} >+ >+void JIT::compileSetupVarargsFrame(OpcodeID opcode, Instruction* instruction, CallLinkInfo* info) >+{ >+ int thisValue = instruction[3].u.operand; >+ int arguments = instruction[4].u.operand; >+ int firstFreeRegister = instruction[5].u.operand; >+ int firstVarArgOffset = instruction[6].u.operand; >+ >+ emitGetVirtualRegister(arguments, regT1); >+ Z_JITOperation_EJZZ sizeOperation; >+ if (opcode == op_tail_call_forward_arguments) >+ sizeOperation = operationSizeFrameForForwardArguments; >+ else >+ sizeOperation = operationSizeFrameForVarargs; >+ callOperation(sizeOperation, regT1, -firstFreeRegister, firstVarArgOffset); >+ move(TrustedImm32(-firstFreeRegister), regT1); >+ emitSetVarargsFrame(*this, returnValueGPR, false, regT1, regT1); >+ addPtr(TrustedImm32(-(sizeof(CallerFrameAndPC) + WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(void*)))), regT1, stackPointerRegister); >+ emitGetVirtualRegister(arguments, regT2); >+ F_JITOperation_EFJZZ setupOperation; >+ if (opcode == op_tail_call_forward_arguments) >+ setupOperation = operationSetupForwardArgumentsFrame; >+ else >+ setupOperation = operationSetupVarargsFrame; >+ callOperation(setupOperation, regT1, regT2, firstVarArgOffset, regT0); >+ move(returnValueGPR, regT1); >+ >+ // Profile the argument count. >+ load32(Address(regT1, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), regT2); >+ load32(info->addressOfMaxNumArguments(), regT0); >+ Jump notBiggest = branch32(Above, regT0, regT2); >+ store32(regT2, info->addressOfMaxNumArguments()); >+ notBiggest.link(this); >+ >+ // Initialize 'this'. 
>+ emitGetVirtualRegister(thisValue, regT0); >+ store64(regT0, Address(regT1, CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register)))); >+ >+ addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), regT1, stackPointerRegister); >+} >+ >+void JIT::compileCallEval(Instruction* instruction) >+{ >+ addPtr(TrustedImm32(-static_cast<ptrdiff_t>(sizeof(CallerFrameAndPC))), stackPointerRegister, regT1); >+ storePtr(callFrameRegister, Address(regT1, CallFrame::callerFrameOffset())); >+ >+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ checkStackPointerAlignment(); >+ >+ callOperation(operationCallEval, regT1); >+ >+ addSlowCase(branchIfEmpty(regT0)); >+ >+ sampleCodeBlock(m_codeBlock); >+ >+ emitPutCallResult(instruction); >+} >+ >+void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ linkAllSlowCases(iter); >+ >+ CallLinkInfo* info = m_codeBlock->addCallLinkInfo(); >+ info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0); >+ >+ int registerOffset = -instruction[4].u.operand; >+ >+ addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); >+ >+ load64(Address(stackPointerRegister, sizeof(Register) * CallFrameSlot::callee - sizeof(CallerFrameAndPC)), regT0); >+ emitDumbVirtualCall(*vm(), info); >+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ checkStackPointerAlignment(); >+ >+ sampleCodeBlock(m_codeBlock); >+ >+ emitPutCallResult(instruction); >+} >+ >+void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex) >+{ >+ int callee = instruction[2].u.operand; >+ >+ /* Caller always: >+ - Updates callFrameRegister to callee callFrame. >+ - Initializes ArgumentCount; CallerFrame; Callee. >+ >+ For a JS call: >+ - Callee initializes ReturnPC; CodeBlock. >+ - Callee restores callFrameRegister before return. >+ >+ For a non-JS call: >+ - Caller initializes ReturnPC; CodeBlock. >+ - Caller restores callFrameRegister after return. 
>+ */ >+ COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct), call_and_construct_opcodes_must_be_same_length); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_call_varargs), call_and_call_varargs_opcodes_must_be_same_length); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct_varargs), call_and_construct_varargs_opcodes_must_be_same_length); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call), call_and_tail_call_opcodes_must_be_same_length); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call_varargs), call_and_tail_call_varargs_opcodes_must_be_same_length); >+ COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call_forward_arguments), call_and_tail_call_forward_arguments_opcodes_must_be_same_length); >+ >+ CallLinkInfo* info = nullptr; >+ if (opcodeID != op_call_eval) >+ info = m_codeBlock->addCallLinkInfo(); >+ if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) >+ compileSetupVarargsFrame(opcodeID, instruction, info); >+ else { >+ int argCount = instruction[3].u.operand; >+ int registerOffset = -instruction[4].u.operand; >+ >+ if (opcodeID == op_call && shouldEmitProfiling()) { >+ emitGetVirtualRegister(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0); >+ Jump done = branchIfNotCell(regT0); >+ load32(Address(regT0, JSCell::structureIDOffset()), regT0); >+ store32(regT0, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructureID()); >+ done.link(this); >+ } >+ >+ addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); >+ store32(TrustedImm32(argCount), Address(stackPointerRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC))); >+ } // SP holds newCallFrame + sizeof(CallerFrameAndPC), with ArgumentCount initialized. >+ >+ uint32_t bytecodeOffset = m_codeBlock->bytecodeOffset(instruction); >+ uint32_t locationBits = CallSiteIndex(bytecodeOffset).bits(); >+ store32(TrustedImm32(locationBits), Address(callFrameRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + TagOffset)); >+ >+ emitGetVirtualRegister(callee, regT0); // regT0 holds callee. 
>+ store64(regT0, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC))); >+ >+ if (opcodeID == op_call_eval) { >+ compileCallEval(instruction); >+ return; >+ } >+ >+ DataLabelPtr addressOfLinkedFunctionCheck; >+ Jump slowCase = branchPtrWithPatch(NotEqual, regT0, addressOfLinkedFunctionCheck, TrustedImmPtr(nullptr)); >+ addSlowCase(slowCase); >+ >+ ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex); >+ info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0); >+ m_callCompilationInfo.append(CallCompilationInfo()); >+ m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck; >+ m_callCompilationInfo[callLinkInfoIndex].callLinkInfo = info; >+ >+ if (opcodeID == op_tail_call) { >+ CallFrameShuffleData shuffleData; >+ shuffleData.numPassedArgs = instruction[3].u.operand; >+ shuffleData.tagTypeNumber = GPRInfo::tagTypeNumberRegister; >+ shuffleData.numLocals = >+ instruction[4].u.operand - sizeof(CallerFrameAndPC) / sizeof(Register); >+ shuffleData.args.resize(instruction[3].u.operand); >+ for (int i = 0; i < instruction[3].u.operand; ++i) { >+ shuffleData.args[i] = >+ ValueRecovery::displacedInJSStack( >+ virtualRegisterForArgument(i) - instruction[4].u.operand, >+ DataFormatJS); >+ } >+ shuffleData.callee = >+ ValueRecovery::inGPR(regT0, DataFormatJS); >+ shuffleData.setupCalleeSaveRegisters(m_codeBlock); >+ info->setFrameShuffleData(shuffleData); >+ CallFrameShuffler(*this, shuffleData).prepareForTailCall(); >+ m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall(); >+ return; >+ } >+ >+ if (opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) { >+ emitRestoreCalleeSaves(); >+ prepareForTailCallSlow(); >+ m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall(); >+ return; >+ } >+ >+ m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(); >+ >+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ checkStackPointerAlignment(); >+ >+ sampleCodeBlock(m_codeBlock); >+ >+ emitPutCallResult(instruction); >+} >+ >+void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter, unsigned callLinkInfoIndex) >+{ >+ if (opcodeID == op_call_eval) { >+ compileCallEvalSlowCase(instruction, iter); >+ return; >+ } >+ >+ linkAllSlowCases(iter); >+ >+ if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) >+ emitRestoreCalleeSaves(); >+ >+ move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); >+ >+ m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = >+ emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).retaggedCode<NoPtrTag>()); >+ >+ if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) { >+ abortWithReason(JITDidReturnFromTailCall); >+ return; >+ } >+ >+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >+ checkStackPointerAlignment(); >+ >+ sampleCodeBlock(m_codeBlock); >+ >+ emitPutCallResult(instruction); >+} >+ >+void JIT::emit_op_call(Instruction* currentInstruction) >+{ >+ compileOpCall(op_call, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_tail_call(Instruction* currentInstruction) >+{ >+ compileOpCall(op_tail_call, currentInstruction, m_callLinkInfoIndex++); >+} >+ 
>+void JIT::emit_op_call_eval(Instruction* currentInstruction) >+{ >+ compileOpCall(op_call_eval, currentInstruction, m_callLinkInfoIndex); >+} >+ >+void JIT::emit_op_call_varargs(Instruction* currentInstruction) >+{ >+ compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_tail_call_varargs(Instruction* currentInstruction) >+{ >+ compileOpCall(op_tail_call_varargs, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_tail_call_forward_arguments(Instruction* currentInstruction) >+{ >+ compileOpCall(op_tail_call_forward_arguments, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_construct_varargs(Instruction* currentInstruction) >+{ >+ compileOpCall(op_construct_varargs, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emit_op_construct(Instruction* currentInstruction) >+{ >+ compileOpCall(op_construct, currentInstruction, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_call, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_tail_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_tail_call, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_call_eval(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_call_eval, currentInstruction, iter, m_callLinkInfoIndex); >+} >+ >+void JIT::emitSlow_op_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_tail_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_tail_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_tail_call_forward_arguments(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_tail_call_forward_arguments, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_construct_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_construct_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+void JIT::emitSlow_op_construct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >+{ >+ compileOpCallSlowCase(op_construct, currentInstruction, iter, m_callLinkInfoIndex++); >+} >+ >+#endif // USE(JSVALUE64) >+ >+} // namespace JSC >+ >+#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JITArithmetic.cpp b/Source/JavaScriptCore/jit/JITArithmetic.cpp >deleted file mode 100644 >index 3981d0388189713f6b5b55de6a550be0024e024a..0000000000000000000000000000000000000000 >--- a/Source/JavaScriptCore/jit/JITArithmetic.cpp >+++ /dev/null >@@ -1,1013 +0,0 @@ >-/* >- * Copyright (C) 2008-2018 Apple Inc. All rights reserved. >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. 
Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >- */ >- >-#include "config.h" >- >-#if ENABLE(JIT) >-#include "JIT.h" >- >-#include "ArithProfile.h" >-#include "CodeBlock.h" >-#include "JITAddGenerator.h" >-#include "JITBitAndGenerator.h" >-#include "JITBitOrGenerator.h" >-#include "JITBitXorGenerator.h" >-#include "JITDivGenerator.h" >-#include "JITInlines.h" >-#include "JITLeftShiftGenerator.h" >-#include "JITMathIC.h" >-#include "JITMulGenerator.h" >-#include "JITNegGenerator.h" >-#include "JITOperations.h" >-#include "JITRightShiftGenerator.h" >-#include "JITSubGenerator.h" >-#include "JSArray.h" >-#include "JSFunction.h" >-#include "Interpreter.h" >-#include "JSCInlines.h" >-#include "LinkBuffer.h" >-#include "ResultType.h" >-#include "SlowPathCall.h" >- >-namespace JSC { >- >-void JIT::emit_op_jless(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJump(op_jless, op1, op2, target, LessThan); >-} >- >-void JIT::emit_op_jlesseq(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJump(op_jlesseq, op1, op2, target, LessThanOrEqual); >-} >- >-void JIT::emit_op_jgreater(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJump(op_jgreater, op1, op2, target, GreaterThan); >-} >- >-void JIT::emit_op_jgreatereq(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJump(op_jgreatereq, op1, op2, target, GreaterThanOrEqual); >-} >- >-void JIT::emit_op_jnless(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJump(op_jnless, op1, op2, target, GreaterThanOrEqual); >-} >- >-void JIT::emit_op_jnlesseq(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJump(op_jnlesseq, op1, op2, target, GreaterThan); >-} >- >-void JIT::emit_op_jngreater(Instruction* currentInstruction) >-{ >- 
int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJump(op_jngreater, op1, op2, target, LessThanOrEqual); >-} >- >-void JIT::emit_op_jngreatereq(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJump(op_jngreatereq, op1, op2, target, LessThan); >-} >- >-void JIT::emitSlow_op_jless(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJumpSlow(op1, op2, target, DoubleLessThan, operationCompareLess, false, iter); >-} >- >-void JIT::emitSlow_op_jlesseq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJumpSlow(op1, op2, target, DoubleLessThanOrEqual, operationCompareLessEq, false, iter); >-} >- >-void JIT::emitSlow_op_jgreater(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJumpSlow(op1, op2, target, DoubleGreaterThan, operationCompareGreater, false, iter); >-} >- >-void JIT::emitSlow_op_jgreatereq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJumpSlow(op1, op2, target, DoubleGreaterThanOrEqual, operationCompareGreaterEq, false, iter); >-} >- >-void JIT::emitSlow_op_jnless(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJumpSlow(op1, op2, target, DoubleGreaterThanOrEqualOrUnordered, operationCompareLess, true, iter); >-} >- >-void JIT::emitSlow_op_jnlesseq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJumpSlow(op1, op2, target, DoubleGreaterThanOrUnordered, operationCompareLessEq, true, iter); >-} >- >-void JIT::emitSlow_op_jngreater(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJumpSlow(op1, op2, target, DoubleLessThanOrEqualOrUnordered, operationCompareGreater, true, iter); >-} >- >-void JIT::emitSlow_op_jngreatereq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareAndJumpSlow(op1, op2, target, DoubleLessThanOrUnordered, operationCompareGreaterEq, true, iter); >-} >- >-void JIT::emit_op_below(Instruction* 
currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- emit_compareUnsigned(dst, op1, op2, Below); >-} >- >-void JIT::emit_op_beloweq(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- emit_compareUnsigned(dst, op1, op2, BelowOrEqual); >-} >- >-void JIT::emit_op_jbelow(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareUnsignedAndJump(op1, op2, target, Below); >-} >- >-void JIT::emit_op_jbeloweq(Instruction* currentInstruction) >-{ >- int op1 = currentInstruction[1].u.operand; >- int op2 = currentInstruction[2].u.operand; >- unsigned target = currentInstruction[3].u.operand; >- >- emit_compareUnsignedAndJump(op1, op2, target, BelowOrEqual); >-} >- >-#if USE(JSVALUE64) >- >-void JIT::emit_op_unsigned(Instruction* currentInstruction) >-{ >- int result = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(op1, regT0); >- emitJumpSlowCaseIfNotInt(regT0); >- addSlowCase(branch32(LessThan, regT0, TrustedImm32(0))); >- boxInt32(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(result, regT0); >-} >- >-void JIT::emit_compareAndJump(OpcodeID, int op1, int op2, unsigned target, RelationalCondition condition) >-{ >- // We generate inline code for the following cases in the fast path: >- // - int immediate to constant int immediate >- // - constant int immediate to int immediate >- // - int immediate to int immediate >- >- if (isOperandConstantChar(op1)) { >- emitGetVirtualRegister(op2, regT0); >- addSlowCase(branchIfNotCell(regT0)); >- JumpList failures; >- emitLoadCharacterString(regT0, regT0, failures); >- addSlowCase(failures); >- addJump(branch32(commute(condition), regT0, Imm32(asString(getConstantOperand(op1))->tryGetValue()[0])), target); >- return; >- } >- if (isOperandConstantChar(op2)) { >- emitGetVirtualRegister(op1, regT0); >- addSlowCase(branchIfNotCell(regT0)); >- JumpList failures; >- emitLoadCharacterString(regT0, regT0, failures); >- addSlowCase(failures); >- addJump(branch32(condition, regT0, Imm32(asString(getConstantOperand(op2))->tryGetValue()[0])), target); >- return; >- } >- if (isOperandConstantInt(op2)) { >- emitGetVirtualRegister(op1, regT0); >- emitJumpSlowCaseIfNotInt(regT0); >- int32_t op2imm = getOperandConstantInt(op2); >- addJump(branch32(condition, regT0, Imm32(op2imm)), target); >- return; >- } >- if (isOperandConstantInt(op1)) { >- emitGetVirtualRegister(op2, regT1); >- emitJumpSlowCaseIfNotInt(regT1); >- int32_t op1imm = getOperandConstantInt(op1); >- addJump(branch32(commute(condition), regT1, Imm32(op1imm)), target); >- return; >- } >- >- emitGetVirtualRegisters(op1, regT0, op2, regT1); >- emitJumpSlowCaseIfNotInt(regT0); >- emitJumpSlowCaseIfNotInt(regT1); >- >- addJump(branch32(condition, regT0, regT1), target); >-} >- >-void JIT::emit_compareUnsignedAndJump(int op1, int op2, unsigned target, RelationalCondition condition) >-{ >- if (isOperandConstantInt(op2)) { >- emitGetVirtualRegister(op1, regT0); >- int32_t op2imm = getOperandConstantInt(op2); >- addJump(branch32(condition, regT0, Imm32(op2imm)), target); >- } else if (isOperandConstantInt(op1)) { >- emitGetVirtualRegister(op2, regT1); >- int32_t op1imm = 
getOperandConstantInt(op1); >- addJump(branch32(commute(condition), regT1, Imm32(op1imm)), target); >- } else { >- emitGetVirtualRegisters(op1, regT0, op2, regT1); >- addJump(branch32(condition, regT0, regT1), target); >- } >-} >- >-void JIT::emit_compareUnsigned(int dst, int op1, int op2, RelationalCondition condition) >-{ >- if (isOperandConstantInt(op2)) { >- emitGetVirtualRegister(op1, regT0); >- int32_t op2imm = getOperandConstantInt(op2); >- compare32(condition, regT0, Imm32(op2imm), regT0); >- } else if (isOperandConstantInt(op1)) { >- emitGetVirtualRegister(op2, regT0); >- int32_t op1imm = getOperandConstantInt(op1); >- compare32(commute(condition), regT0, Imm32(op1imm), regT0); >- } else { >- emitGetVirtualRegisters(op1, regT0, op2, regT1); >- compare32(condition, regT0, regT1, regT0); >- } >- boxBoolean(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondition condition, size_t (JIT_OPERATION *operation)(ExecState*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator& iter) >-{ >- COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jlesseq), OPCODE_LENGTH_op_jlesseq_equals_op_jless); >- COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jnless), OPCODE_LENGTH_op_jnless_equals_op_jless); >- COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jnlesseq), OPCODE_LENGTH_op_jnlesseq_equals_op_jless); >- COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jgreater), OPCODE_LENGTH_op_jgreater_equals_op_jless); >- COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jgreatereq), OPCODE_LENGTH_op_jgreatereq_equals_op_jless); >- COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jngreater), OPCODE_LENGTH_op_jngreater_equals_op_jless); >- COMPILE_ASSERT(OPCODE_LENGTH(op_jless) == OPCODE_LENGTH(op_jngreatereq), OPCODE_LENGTH_op_jngreatereq_equals_op_jless); >- >- // We generate inline code for the following cases in the slow path: >- // - floating-point number to constant int immediate >- // - constant int immediate to floating-point number >- // - floating-point number to floating-point number. >- if (isOperandConstantChar(op1) || isOperandConstantChar(op2)) { >- linkAllSlowCases(iter); >- >- emitGetVirtualRegister(op1, argumentGPR0); >- emitGetVirtualRegister(op2, argumentGPR1); >- callOperation(operation, argumentGPR0, argumentGPR1); >- emitJumpSlowToHot(branchTest32(invert ? Zero : NonZero, returnValueGPR), target); >- return; >- } >- >- if (isOperandConstantInt(op2)) { >- linkAllSlowCases(iter); >- >- if (supportsFloatingPoint()) { >- Jump fail1 = branchIfNotNumber(regT0); >- add64(tagTypeNumberRegister, regT0); >- move64ToDouble(regT0, fpRegT0); >- >- int32_t op2imm = getConstantOperand(op2).asInt32(); >- >- move(Imm32(op2imm), regT1); >- convertInt32ToDouble(regT1, fpRegT1); >- >- emitJumpSlowToHot(branchDouble(condition, fpRegT0, fpRegT1), target); >- >- emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_jless)); >- >- fail1.link(this); >- } >- >- emitGetVirtualRegister(op2, regT1); >- callOperation(operation, regT0, regT1); >- emitJumpSlowToHot(branchTest32(invert ? 
Zero : NonZero, returnValueGPR), target); >- return; >- } >- >- if (isOperandConstantInt(op1)) { >- linkAllSlowCases(iter); >- >- if (supportsFloatingPoint()) { >- Jump fail1 = branchIfNotNumber(regT1); >- add64(tagTypeNumberRegister, regT1); >- move64ToDouble(regT1, fpRegT1); >- >- int32_t op1imm = getConstantOperand(op1).asInt32(); >- >- move(Imm32(op1imm), regT0); >- convertInt32ToDouble(regT0, fpRegT0); >- >- emitJumpSlowToHot(branchDouble(condition, fpRegT0, fpRegT1), target); >- >- emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_jless)); >- >- fail1.link(this); >- } >- >- emitGetVirtualRegister(op1, regT2); >- callOperation(operation, regT2, regT1); >- emitJumpSlowToHot(branchTest32(invert ? Zero : NonZero, returnValueGPR), target); >- return; >- } >- >- linkSlowCase(iter); // LHS is not Int. >- >- if (supportsFloatingPoint()) { >- Jump fail1 = branchIfNotNumber(regT0); >- Jump fail2 = branchIfNotNumber(regT1); >- Jump fail3 = branchIfInt32(regT1); >- add64(tagTypeNumberRegister, regT0); >- add64(tagTypeNumberRegister, regT1); >- move64ToDouble(regT0, fpRegT0); >- move64ToDouble(regT1, fpRegT1); >- >- emitJumpSlowToHot(branchDouble(condition, fpRegT0, fpRegT1), target); >- >- emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_jless)); >- >- fail1.link(this); >- fail2.link(this); >- fail3.link(this); >- } >- >- linkSlowCase(iter); // RHS is not Int. >- callOperation(operation, regT0, regT1); >- emitJumpSlowToHot(branchTest32(invert ? Zero : NonZero, returnValueGPR), target); >-} >- >-void JIT::emit_op_inc(Instruction* currentInstruction) >-{ >- int srcDst = currentInstruction[1].u.operand; >- >- emitGetVirtualRegister(srcDst, regT0); >- emitJumpSlowCaseIfNotInt(regT0); >- addSlowCase(branchAdd32(Overflow, TrustedImm32(1), regT0)); >- boxInt32(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(srcDst); >-} >- >-void JIT::emit_op_dec(Instruction* currentInstruction) >-{ >- int srcDst = currentInstruction[1].u.operand; >- >- emitGetVirtualRegister(srcDst, regT0); >- emitJumpSlowCaseIfNotInt(regT0); >- addSlowCase(branchSub32(Overflow, TrustedImm32(1), regT0)); >- boxInt32(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(srcDst); >-} >- >-/* ------------------------------ BEGIN: OP_MOD ------------------------------ */ >- >-#if CPU(X86_64) >- >-void JIT::emit_op_mod(Instruction* currentInstruction) >-{ >- int result = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- >- // Make sure registers are correct for x86 IDIV instructions. 
>- ASSERT(regT0 == X86Registers::eax); >- auto edx = X86Registers::edx; >- auto ecx = X86Registers::ecx; >- ASSERT(regT4 != edx); >- ASSERT(regT4 != ecx); >- >- emitGetVirtualRegisters(op1, regT4, op2, ecx); >- emitJumpSlowCaseIfNotInt(regT4); >- emitJumpSlowCaseIfNotInt(ecx); >- >- move(regT4, regT0); >- addSlowCase(branchTest32(Zero, ecx)); >- Jump denominatorNotNeg1 = branch32(NotEqual, ecx, TrustedImm32(-1)); >- addSlowCase(branch32(Equal, regT0, TrustedImm32(-2147483647-1))); >- denominatorNotNeg1.link(this); >- x86ConvertToDoubleWord32(); >- x86Div32(ecx); >- Jump numeratorPositive = branch32(GreaterThanOrEqual, regT4, TrustedImm32(0)); >- addSlowCase(branchTest32(Zero, edx)); >- numeratorPositive.link(this); >- boxInt32(edx, JSValueRegs { regT0 }); >- emitPutVirtualRegister(result); >-} >- >-void JIT::emitSlow_op_mod(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_mod); >- slowPathCall.call(); >-} >- >-#else // CPU(X86_64) >- >-void JIT::emit_op_mod(Instruction* currentInstruction) >-{ >- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_mod); >- slowPathCall.call(); >-} >- >-void JIT::emitSlow_op_mod(Instruction*, Vector<SlowCaseEntry>::iterator&) >-{ >- UNREACHABLE_FOR_PLATFORM(); >-} >- >-#endif // CPU(X86_64) >- >-/* ------------------------------ END: OP_MOD ------------------------------ */ >- >-#endif // USE(JSVALUE64) >- >-void JIT::emit_op_negate(Instruction* currentInstruction) >-{ >- ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >- JITNegIC* negateIC = m_codeBlock->addJITNegIC(arithProfile, currentInstruction); >- m_instructionToMathIC.add(currentInstruction, negateIC); >- emitMathICFast(negateIC, currentInstruction, operationArithNegateProfiled, operationArithNegate); >-} >- >-void JIT::emitSlow_op_negate(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- JITNegIC* negIC = bitwise_cast<JITNegIC*>(m_instructionToMathIC.get(currentInstruction)); >- emitMathICSlow(negIC, currentInstruction, operationArithNegateProfiledOptimize, operationArithNegateProfiled, operationArithNegateOptimize); >-} >- >-template<typename SnippetGenerator> >-void JIT::emitBitBinaryOpFastPath(Instruction* currentInstruction) >-{ >- int result = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- >-#if USE(JSVALUE64) >- JSValueRegs leftRegs = JSValueRegs(regT0); >- JSValueRegs rightRegs = JSValueRegs(regT1); >- JSValueRegs resultRegs = leftRegs; >- GPRReg scratchGPR = regT2; >-#else >- JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >- JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >- JSValueRegs resultRegs = leftRegs; >- GPRReg scratchGPR = regT4; >-#endif >- >- SnippetOperand leftOperand; >- SnippetOperand rightOperand; >- >- if (isOperandConstantInt(op1)) >- leftOperand.setConstInt32(getOperandConstantInt(op1)); >- else if (isOperandConstantInt(op2)) >- rightOperand.setConstInt32(getOperandConstantInt(op2)); >- >- RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst()); >- >- if (!leftOperand.isConst()) >- emitGetVirtualRegister(op1, leftRegs); >- if (!rightOperand.isConst()) >- emitGetVirtualRegister(op2, rightRegs); >- >- SnippetGenerator gen(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, scratchGPR); >- >- gen.generateFastPath(*this); >- >- 
ASSERT(gen.didEmitFastPath()); >- gen.endJumpList().link(this); >- emitPutVirtualRegister(result, resultRegs); >- >- addSlowCase(gen.slowPathJumpList()); >-} >- >-void JIT::emit_op_bitand(Instruction* currentInstruction) >-{ >- emitBitBinaryOpFastPath<JITBitAndGenerator>(currentInstruction); >-} >- >-void JIT::emit_op_bitor(Instruction* currentInstruction) >-{ >- emitBitBinaryOpFastPath<JITBitOrGenerator>(currentInstruction); >-} >- >-void JIT::emit_op_bitxor(Instruction* currentInstruction) >-{ >- emitBitBinaryOpFastPath<JITBitXorGenerator>(currentInstruction); >-} >- >-void JIT::emit_op_lshift(Instruction* currentInstruction) >-{ >- emitBitBinaryOpFastPath<JITLeftShiftGenerator>(currentInstruction); >-} >- >-void JIT::emitRightShiftFastPath(Instruction* currentInstruction, OpcodeID opcodeID) >-{ >- ASSERT(opcodeID == op_rshift || opcodeID == op_urshift); >- >- JITRightShiftGenerator::ShiftType snippetShiftType = opcodeID == op_rshift ? >- JITRightShiftGenerator::SignedShift : JITRightShiftGenerator::UnsignedShift; >- >- int result = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- >-#if USE(JSVALUE64) >- JSValueRegs leftRegs = JSValueRegs(regT0); >- JSValueRegs rightRegs = JSValueRegs(regT1); >- JSValueRegs resultRegs = leftRegs; >- GPRReg scratchGPR = regT2; >- FPRReg scratchFPR = InvalidFPRReg; >-#else >- JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >- JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >- JSValueRegs resultRegs = leftRegs; >- GPRReg scratchGPR = regT4; >- FPRReg scratchFPR = fpRegT2; >-#endif >- >- SnippetOperand leftOperand; >- SnippetOperand rightOperand; >- >- if (isOperandConstantInt(op1)) >- leftOperand.setConstInt32(getOperandConstantInt(op1)); >- else if (isOperandConstantInt(op2)) >- rightOperand.setConstInt32(getOperandConstantInt(op2)); >- >- RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst()); >- >- if (!leftOperand.isConst()) >- emitGetVirtualRegister(op1, leftRegs); >- if (!rightOperand.isConst()) >- emitGetVirtualRegister(op2, rightRegs); >- >- JITRightShiftGenerator gen(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, >- fpRegT0, scratchGPR, scratchFPR, snippetShiftType); >- >- gen.generateFastPath(*this); >- >- ASSERT(gen.didEmitFastPath()); >- gen.endJumpList().link(this); >- emitPutVirtualRegister(result, resultRegs); >- >- addSlowCase(gen.slowPathJumpList()); >-} >- >-void JIT::emit_op_rshift(Instruction* currentInstruction) >-{ >- emitRightShiftFastPath(currentInstruction, op_rshift); >-} >- >-void JIT::emit_op_urshift(Instruction* currentInstruction) >-{ >- emitRightShiftFastPath(currentInstruction, op_urshift); >-} >- >-ALWAYS_INLINE static OperandTypes getOperandTypes(Instruction* instruction) >-{ >- return OperandTypes(ArithProfile::fromInt(instruction[4].u.operand).lhsResultType(), ArithProfile::fromInt(instruction[4].u.operand).rhsResultType()); >-} >- >-void JIT::emit_op_add(Instruction* currentInstruction) >-{ >- ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >- JITAddIC* addIC = m_codeBlock->addJITAddIC(arithProfile, currentInstruction); >- m_instructionToMathIC.add(currentInstruction, addIC); >- emitMathICFast(addIC, currentInstruction, operationValueAddProfiled, operationValueAdd); >-} >- >-void JIT::emitSlow_op_add(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- JITAddIC* addIC = 
bitwise_cast<JITAddIC*>(m_instructionToMathIC.get(currentInstruction)); >- emitMathICSlow(addIC, currentInstruction, operationValueAddProfiledOptimize, operationValueAddProfiled, operationValueAddOptimize); >-} >- >-template <typename Generator, typename ProfiledFunction, typename NonProfiledFunction> >-void JIT::emitMathICFast(JITUnaryMathIC<Generator>* mathIC, Instruction* currentInstruction, ProfiledFunction profiledFunction, NonProfiledFunction nonProfiledFunction) >-{ >- int result = currentInstruction[1].u.operand; >- int operand = currentInstruction[2].u.operand; >- >-#if USE(JSVALUE64) >- // ArithNegate benefits from using the same register as src and dst. >- // Since regT1==argumentGPR1, using regT1 avoid shuffling register to call the slow path. >- JSValueRegs srcRegs = JSValueRegs(regT1); >- JSValueRegs resultRegs = JSValueRegs(regT1); >- GPRReg scratchGPR = regT2; >-#else >- JSValueRegs srcRegs = JSValueRegs(regT1, regT0); >- JSValueRegs resultRegs = JSValueRegs(regT3, regT2); >- GPRReg scratchGPR = regT4; >-#endif >- >-#if ENABLE(MATH_IC_STATS) >- auto inlineStart = label(); >-#endif >- >- mathIC->m_generator = Generator(resultRegs, srcRegs, scratchGPR); >- >- emitGetVirtualRegister(operand, srcRegs); >- >- MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.add(currentInstruction, MathICGenerationState()).iterator->value; >- >- bool generatedInlineCode = mathIC->generateInline(*this, mathICGenerationState); >- if (!generatedInlineCode) { >- ArithProfile* arithProfile = mathIC->arithProfile(); >- if (arithProfile && shouldEmitProfiling()) >- callOperationWithResult(profiledFunction, resultRegs, srcRegs, arithProfile); >- else >- callOperationWithResult(nonProfiledFunction, resultRegs, srcRegs); >- } else >- addSlowCase(mathICGenerationState.slowPathJumps); >- >-#if ENABLE(MATH_IC_STATS) >- auto inlineEnd = label(); >- addLinkTask([=] (LinkBuffer& linkBuffer) { >- size_t size = linkBuffer.locationOf(inlineEnd).executableAddress<char*>() - linkBuffer.locationOf(inlineStart).executableAddress<char*>(); >- mathIC->m_generatedCodeSize += size; >- }); >-#endif >- >- emitPutVirtualRegister(result, resultRegs); >-} >- >-template <typename Generator, typename ProfiledFunction, typename NonProfiledFunction> >-void JIT::emitMathICFast(JITBinaryMathIC<Generator>* mathIC, Instruction* currentInstruction, ProfiledFunction profiledFunction, NonProfiledFunction nonProfiledFunction) >-{ >- int result = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- >-#if USE(JSVALUE64) >- OperandTypes types = getOperandTypes(copiedInstruction(currentInstruction)); >- JSValueRegs leftRegs = JSValueRegs(regT1); >- JSValueRegs rightRegs = JSValueRegs(regT2); >- JSValueRegs resultRegs = JSValueRegs(regT0); >- GPRReg scratchGPR = regT3; >- FPRReg scratchFPR = fpRegT2; >-#else >- OperandTypes types = getOperandTypes(currentInstruction); >- JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >- JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >- JSValueRegs resultRegs = leftRegs; >- GPRReg scratchGPR = regT4; >- FPRReg scratchFPR = fpRegT2; >-#endif >- >- SnippetOperand leftOperand(types.first()); >- SnippetOperand rightOperand(types.second()); >- >- if (isOperandConstantInt(op1)) >- leftOperand.setConstInt32(getOperandConstantInt(op1)); >- else if (isOperandConstantInt(op2)) >- rightOperand.setConstInt32(getOperandConstantInt(op2)); >- >- RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst()); 
>- >- mathIC->m_generator = Generator(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, fpRegT0, fpRegT1, scratchGPR, scratchFPR); >- >- ASSERT(!(Generator::isLeftOperandValidConstant(leftOperand) && Generator::isRightOperandValidConstant(rightOperand))); >- >- if (!Generator::isLeftOperandValidConstant(leftOperand)) >- emitGetVirtualRegister(op1, leftRegs); >- if (!Generator::isRightOperandValidConstant(rightOperand)) >- emitGetVirtualRegister(op2, rightRegs); >- >-#if ENABLE(MATH_IC_STATS) >- auto inlineStart = label(); >-#endif >- >- MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.add(currentInstruction, MathICGenerationState()).iterator->value; >- >- bool generatedInlineCode = mathIC->generateInline(*this, mathICGenerationState); >- if (!generatedInlineCode) { >- if (leftOperand.isConst()) >- emitGetVirtualRegister(op1, leftRegs); >- else if (rightOperand.isConst()) >- emitGetVirtualRegister(op2, rightRegs); >- ArithProfile* arithProfile = mathIC->arithProfile(); >- if (arithProfile && shouldEmitProfiling()) >- callOperationWithResult(profiledFunction, resultRegs, leftRegs, rightRegs, arithProfile); >- else >- callOperationWithResult(nonProfiledFunction, resultRegs, leftRegs, rightRegs); >- } else >- addSlowCase(mathICGenerationState.slowPathJumps); >- >-#if ENABLE(MATH_IC_STATS) >- auto inlineEnd = label(); >- addLinkTask([=] (LinkBuffer& linkBuffer) { >- size_t size = linkBuffer.locationOf(inlineEnd).executableAddress<char*>() - linkBuffer.locationOf(inlineStart).executableAddress<char*>(); >- mathIC->m_generatedCodeSize += size; >- }); >-#endif >- >- emitPutVirtualRegister(result, resultRegs); >-} >- >-template <typename Generator, typename ProfiledRepatchFunction, typename ProfiledFunction, typename RepatchFunction> >-void JIT::emitMathICSlow(JITUnaryMathIC<Generator>* mathIC, Instruction* currentInstruction, ProfiledRepatchFunction profiledRepatchFunction, ProfiledFunction profiledFunction, RepatchFunction repatchFunction) >-{ >- MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.find(currentInstruction)->value; >- mathICGenerationState.slowPathStart = label(); >- >- int result = currentInstruction[1].u.operand; >- >-#if USE(JSVALUE64) >- JSValueRegs srcRegs = JSValueRegs(regT1); >- JSValueRegs resultRegs = JSValueRegs(regT0); >-#else >- JSValueRegs srcRegs = JSValueRegs(regT1, regT0); >- JSValueRegs resultRegs = JSValueRegs(regT3, regT2); >-#endif >- >-#if ENABLE(MATH_IC_STATS) >- auto slowPathStart = label(); >-#endif >- >- ArithProfile* arithProfile = mathIC->arithProfile(); >- if (arithProfile && shouldEmitProfiling()) { >- if (mathICGenerationState.shouldSlowPathRepatch) >- mathICGenerationState.slowPathCall = callOperationWithResult(reinterpret_cast<J_JITOperation_EJMic>(profiledRepatchFunction), resultRegs, srcRegs, TrustedImmPtr(mathIC)); >- else >- mathICGenerationState.slowPathCall = callOperationWithResult(profiledFunction, resultRegs, srcRegs, arithProfile); >- } else >- mathICGenerationState.slowPathCall = callOperationWithResult(reinterpret_cast<J_JITOperation_EJMic>(repatchFunction), resultRegs, srcRegs, TrustedImmPtr(mathIC)); >- >-#if ENABLE(MATH_IC_STATS) >- auto slowPathEnd = label(); >- addLinkTask([=] (LinkBuffer& linkBuffer) { >- size_t size = linkBuffer.locationOf(slowPathEnd).executableAddress<char*>() - linkBuffer.locationOf(slowPathStart).executableAddress<char*>(); >- mathIC->m_generatedCodeSize += size; >- }); >-#endif >- >- emitPutVirtualRegister(result, resultRegs); >- >- 
addLinkTask([=] (LinkBuffer& linkBuffer) { >- MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.find(currentInstruction)->value; >- mathIC->finalizeInlineCode(mathICGenerationState, linkBuffer); >- }); >-} >- >-template <typename Generator, typename ProfiledRepatchFunction, typename ProfiledFunction, typename RepatchFunction> >-void JIT::emitMathICSlow(JITBinaryMathIC<Generator>* mathIC, Instruction* currentInstruction, ProfiledRepatchFunction profiledRepatchFunction, ProfiledFunction profiledFunction, RepatchFunction repatchFunction) >-{ >- MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.find(currentInstruction)->value; >- mathICGenerationState.slowPathStart = label(); >- >- int result = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- >-#if USE(JSVALUE64) >- OperandTypes types = getOperandTypes(copiedInstruction(currentInstruction)); >- JSValueRegs leftRegs = JSValueRegs(regT1); >- JSValueRegs rightRegs = JSValueRegs(regT2); >- JSValueRegs resultRegs = JSValueRegs(regT0); >-#else >- OperandTypes types = getOperandTypes(currentInstruction); >- JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >- JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >- JSValueRegs resultRegs = leftRegs; >-#endif >- >- SnippetOperand leftOperand(types.first()); >- SnippetOperand rightOperand(types.second()); >- >- if (isOperandConstantInt(op1)) >- leftOperand.setConstInt32(getOperandConstantInt(op1)); >- else if (isOperandConstantInt(op2)) >- rightOperand.setConstInt32(getOperandConstantInt(op2)); >- >- ASSERT(!(Generator::isLeftOperandValidConstant(leftOperand) && Generator::isRightOperandValidConstant(rightOperand))); >- >- if (Generator::isLeftOperandValidConstant(leftOperand)) >- emitGetVirtualRegister(op1, leftRegs); >- else if (Generator::isRightOperandValidConstant(rightOperand)) >- emitGetVirtualRegister(op2, rightRegs); >- >-#if ENABLE(MATH_IC_STATS) >- auto slowPathStart = label(); >-#endif >- >- ArithProfile* arithProfile = mathIC->arithProfile(); >- if (arithProfile && shouldEmitProfiling()) { >- if (mathICGenerationState.shouldSlowPathRepatch) >- mathICGenerationState.slowPathCall = callOperationWithResult(bitwise_cast<J_JITOperation_EJJMic>(profiledRepatchFunction), resultRegs, leftRegs, rightRegs, TrustedImmPtr(mathIC)); >- else >- mathICGenerationState.slowPathCall = callOperationWithResult(profiledFunction, resultRegs, leftRegs, rightRegs, arithProfile); >- } else >- mathICGenerationState.slowPathCall = callOperationWithResult(bitwise_cast<J_JITOperation_EJJMic>(repatchFunction), resultRegs, leftRegs, rightRegs, TrustedImmPtr(mathIC)); >- >-#if ENABLE(MATH_IC_STATS) >- auto slowPathEnd = label(); >- addLinkTask([=] (LinkBuffer& linkBuffer) { >- size_t size = linkBuffer.locationOf(slowPathEnd).executableAddress<char*>() - linkBuffer.locationOf(slowPathStart).executableAddress<char*>(); >- mathIC->m_generatedCodeSize += size; >- }); >-#endif >- >- emitPutVirtualRegister(result, resultRegs); >- >- addLinkTask([=] (LinkBuffer& linkBuffer) { >- MathICGenerationState& mathICGenerationState = m_instructionToMathICGenerationState.find(currentInstruction)->value; >- mathIC->finalizeInlineCode(mathICGenerationState, linkBuffer); >- }); >-} >- >-void JIT::emit_op_div(Instruction* currentInstruction) >-{ >- int result = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- >-#if 
USE(JSVALUE64) >- OperandTypes types = getOperandTypes(copiedInstruction(currentInstruction)); >- JSValueRegs leftRegs = JSValueRegs(regT0); >- JSValueRegs rightRegs = JSValueRegs(regT1); >- JSValueRegs resultRegs = leftRegs; >- GPRReg scratchGPR = regT2; >-#else >- OperandTypes types = getOperandTypes(currentInstruction); >- JSValueRegs leftRegs = JSValueRegs(regT1, regT0); >- JSValueRegs rightRegs = JSValueRegs(regT3, regT2); >- JSValueRegs resultRegs = leftRegs; >- GPRReg scratchGPR = regT4; >-#endif >- FPRReg scratchFPR = fpRegT2; >- >- ArithProfile* arithProfile = nullptr; >- if (shouldEmitProfiling()) >- arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >- >- SnippetOperand leftOperand(types.first()); >- SnippetOperand rightOperand(types.second()); >- >- if (isOperandConstantInt(op1)) >- leftOperand.setConstInt32(getOperandConstantInt(op1)); >-#if USE(JSVALUE64) >- else if (isOperandConstantDouble(op1)) >- leftOperand.setConstDouble(getOperandConstantDouble(op1)); >-#endif >- else if (isOperandConstantInt(op2)) >- rightOperand.setConstInt32(getOperandConstantInt(op2)); >-#if USE(JSVALUE64) >- else if (isOperandConstantDouble(op2)) >- rightOperand.setConstDouble(getOperandConstantDouble(op2)); >-#endif >- >- RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst()); >- >- if (!leftOperand.isConst()) >- emitGetVirtualRegister(op1, leftRegs); >- if (!rightOperand.isConst()) >- emitGetVirtualRegister(op2, rightRegs); >- >- JITDivGenerator gen(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, >- fpRegT0, fpRegT1, scratchGPR, scratchFPR, arithProfile); >- >- gen.generateFastPath(*this); >- >- if (gen.didEmitFastPath()) { >- gen.endJumpList().link(this); >- emitPutVirtualRegister(result, resultRegs); >- >- addSlowCase(gen.slowPathJumpList()); >- } else { >- ASSERT(gen.endJumpList().empty()); >- ASSERT(gen.slowPathJumpList().empty()); >- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_div); >- slowPathCall.call(); >- } >-} >- >-void JIT::emit_op_mul(Instruction* currentInstruction) >-{ >- ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >- JITMulIC* mulIC = m_codeBlock->addJITMulIC(arithProfile, currentInstruction); >- m_instructionToMathIC.add(currentInstruction, mulIC); >- emitMathICFast(mulIC, currentInstruction, operationValueMulProfiled, operationValueMul); >-} >- >-void JIT::emitSlow_op_mul(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- JITMulIC* mulIC = bitwise_cast<JITMulIC*>(m_instructionToMathIC.get(currentInstruction)); >- emitMathICSlow(mulIC, currentInstruction, operationValueMulProfiledOptimize, operationValueMulProfiled, operationValueMulOptimize); >-} >- >-void JIT::emit_op_sub(Instruction* currentInstruction) >-{ >- ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction); >- JITSubIC* subIC = m_codeBlock->addJITSubIC(arithProfile, currentInstruction); >- m_instructionToMathIC.add(currentInstruction, subIC); >- emitMathICFast(subIC, currentInstruction, operationValueSubProfiled, operationValueSub); >-} >- >-void JIT::emitSlow_op_sub(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- JITSubIC* subIC = bitwise_cast<JITSubIC*>(m_instructionToMathIC.get(currentInstruction)); >- emitMathICSlow(subIC, currentInstruction, operationValueSubProfiledOptimize, operationValueSubProfiled, operationValueSubOptimize); >-} >- >-/* ------------------------------ END: 
OP_ADD, OP_SUB, OP_MUL, OP_POW ------------------------------ */ >- >-} // namespace JSC >- >-#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp b/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp >deleted file mode 100644 >index d3ebdb67c521cb699f37e0989207a4e21e79cc53..0000000000000000000000000000000000000000 >--- a/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp >+++ /dev/null >@@ -1,357 +0,0 @@ >-/* >-* Copyright (C) 2008, 2015 Apple Inc. All rights reserved. >-* >-* Redistribution and use in source and binary forms, with or without >-* modification, are permitted provided that the following conditions >-* are met: >-* 1. Redistributions of source code must retain the above copyright >-* notice, this list of conditions and the following disclaimer. >-* 2. Redistributions in binary form must reproduce the above copyright >-* notice, this list of conditions and the following disclaimer in the >-* documentation and/or other materials provided with the distribution. >-* >-* THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >-* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >-* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >-* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >-* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >-* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >-* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >-* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >-* OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >-* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >-* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >-*/ >- >-#include "config.h" >- >-#if ENABLE(JIT) >-#if USE(JSVALUE32_64) >-#include "JIT.h" >- >-#include "CodeBlock.h" >-#include "JITInlines.h" >-#include "JSArray.h" >-#include "JSFunction.h" >-#include "Interpreter.h" >-#include "JSCInlines.h" >-#include "ResultType.h" >-#include "SlowPathCall.h" >- >- >-namespace JSC { >- >-void JIT::emit_compareAndJump(OpcodeID opcode, int op1, int op2, unsigned target, RelationalCondition condition) >-{ >- JumpList notInt32Op1; >- JumpList notInt32Op2; >- >- // Character less. 
>- if (isOperandConstantChar(op1)) { >- emitLoad(op2, regT1, regT0); >- addSlowCase(branchIfNotCell(regT1)); >- JumpList failures; >- emitLoadCharacterString(regT0, regT0, failures); >- addSlowCase(failures); >- addJump(branch32(commute(condition), regT0, Imm32(asString(getConstantOperand(op1))->tryGetValue()[0])), target); >- return; >- } >- if (isOperandConstantChar(op2)) { >- emitLoad(op1, regT1, regT0); >- addSlowCase(branchIfNotCell(regT1)); >- JumpList failures; >- emitLoadCharacterString(regT0, regT0, failures); >- addSlowCase(failures); >- addJump(branch32(condition, regT0, Imm32(asString(getConstantOperand(op2))->tryGetValue()[0])), target); >- return; >- } >- if (isOperandConstantInt(op1)) { >- emitLoad(op2, regT3, regT2); >- notInt32Op2.append(branchIfNotInt32(regT3)); >- addJump(branch32(commute(condition), regT2, Imm32(getConstantOperand(op1).asInt32())), target); >- } else if (isOperandConstantInt(op2)) { >- emitLoad(op1, regT1, regT0); >- notInt32Op1.append(branchIfNotInt32(regT1)); >- addJump(branch32(condition, regT0, Imm32(getConstantOperand(op2).asInt32())), target); >- } else { >- emitLoad2(op1, regT1, regT0, op2, regT3, regT2); >- notInt32Op1.append(branchIfNotInt32(regT1)); >- notInt32Op2.append(branchIfNotInt32(regT3)); >- addJump(branch32(condition, regT0, regT2), target); >- } >- >- if (!supportsFloatingPoint()) { >- addSlowCase(notInt32Op1); >- addSlowCase(notInt32Op2); >- return; >- } >- Jump end = jump(); >- >- // Double less. >- emitBinaryDoubleOp(opcode, target, op1, op2, OperandTypes(), notInt32Op1, notInt32Op2, !isOperandConstantInt(op1), isOperandConstantInt(op1) || !isOperandConstantInt(op2)); >- end.link(this); >-} >- >-void JIT::emit_compareUnsignedAndJump(int op1, int op2, unsigned target, RelationalCondition condition) >-{ >- if (isOperandConstantInt(op1)) { >- emitLoad(op2, regT3, regT2); >- addJump(branch32(commute(condition), regT2, Imm32(getConstantOperand(op1).asInt32())), target); >- } else if (isOperandConstantInt(op2)) { >- emitLoad(op1, regT1, regT0); >- addJump(branch32(condition, regT0, Imm32(getConstantOperand(op2).asInt32())), target); >- } else { >- emitLoad2(op1, regT1, regT0, op2, regT3, regT2); >- addJump(branch32(condition, regT0, regT2), target); >- } >-} >- >- >-void JIT::emit_compareUnsigned(int dst, int op1, int op2, RelationalCondition condition) >-{ >- if (isOperandConstantInt(op1)) { >- emitLoad(op2, regT3, regT2); >- compare32(commute(condition), regT2, Imm32(getConstantOperand(op1).asInt32()), regT0); >- } else if (isOperandConstantInt(op2)) { >- emitLoad(op1, regT1, regT0); >- compare32(condition, regT0, Imm32(getConstantOperand(op2).asInt32()), regT0); >- } else { >- emitLoad2(op1, regT1, regT0, op2, regT3, regT2); >- compare32(condition, regT0, regT2, regT0); >- } >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondition, size_t (JIT_OPERATION *operation)(ExecState*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- emitLoad(op1, regT1, regT0); >- emitLoad(op2, regT3, regT2); >- callOperation(operation, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >- emitJumpSlowToHot(branchTest32(invert ? 
Zero : NonZero, returnValueGPR), target); >-} >- >-void JIT::emit_op_unsigned(Instruction* currentInstruction) >-{ >- int result = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- >- emitLoad(op1, regT1, regT0); >- >- addSlowCase(branchIfNotInt32(regT1)); >- addSlowCase(branch32(LessThan, regT0, TrustedImm32(0))); >- emitStoreInt32(result, regT0, result == op1); >-} >- >-void JIT::emit_op_inc(Instruction* currentInstruction) >-{ >- int srcDst = currentInstruction[1].u.operand; >- >- emitLoad(srcDst, regT1, regT0); >- >- addSlowCase(branchIfNotInt32(regT1)); >- addSlowCase(branchAdd32(Overflow, TrustedImm32(1), regT0)); >- emitStoreInt32(srcDst, regT0, true); >-} >- >-void JIT::emit_op_dec(Instruction* currentInstruction) >-{ >- int srcDst = currentInstruction[1].u.operand; >- >- emitLoad(srcDst, regT1, regT0); >- >- addSlowCase(branchIfNotInt32(regT1)); >- addSlowCase(branchSub32(Overflow, TrustedImm32(1), regT0)); >- emitStoreInt32(srcDst, regT0, true); >-} >- >-void JIT::emitBinaryDoubleOp(OpcodeID opcodeID, int dst, int op1, int op2, OperandTypes types, JumpList& notInt32Op1, JumpList& notInt32Op2, bool op1IsInRegisters, bool op2IsInRegisters) >-{ >- JumpList end; >- >- if (!notInt32Op1.empty()) { >- // Double case 1: Op1 is not int32; Op2 is unknown. >- notInt32Op1.link(this); >- >- ASSERT(op1IsInRegisters); >- >- // Verify Op1 is double. >- if (!types.first().definitelyIsNumber()) >- addSlowCase(branch32(Above, regT1, TrustedImm32(JSValue::LowestTag))); >- >- if (!op2IsInRegisters) >- emitLoad(op2, regT3, regT2); >- >- Jump doubleOp2 = branch32(Below, regT3, TrustedImm32(JSValue::LowestTag)); >- >- if (!types.second().definitelyIsNumber()) >- addSlowCase(branchIfNotInt32(regT3)); >- >- convertInt32ToDouble(regT2, fpRegT0); >- Jump doTheMath = jump(); >- >- // Load Op2 as double into double register. >- doubleOp2.link(this); >- emitLoadDouble(op2, fpRegT0); >- >- // Do the math. >- doTheMath.link(this); >- switch (opcodeID) { >- case op_jless: >- emitLoadDouble(op1, fpRegT2); >- addJump(branchDouble(DoubleLessThan, fpRegT2, fpRegT0), dst); >- break; >- case op_jlesseq: >- emitLoadDouble(op1, fpRegT2); >- addJump(branchDouble(DoubleLessThanOrEqual, fpRegT2, fpRegT0), dst); >- break; >- case op_jgreater: >- emitLoadDouble(op1, fpRegT2); >- addJump(branchDouble(DoubleGreaterThan, fpRegT2, fpRegT0), dst); >- break; >- case op_jgreatereq: >- emitLoadDouble(op1, fpRegT2); >- addJump(branchDouble(DoubleGreaterThanOrEqual, fpRegT2, fpRegT0), dst); >- break; >- case op_jnless: >- emitLoadDouble(op1, fpRegT2); >- addJump(branchDouble(DoubleLessThanOrEqualOrUnordered, fpRegT0, fpRegT2), dst); >- break; >- case op_jnlesseq: >- emitLoadDouble(op1, fpRegT2); >- addJump(branchDouble(DoubleLessThanOrUnordered, fpRegT0, fpRegT2), dst); >- break; >- case op_jngreater: >- emitLoadDouble(op1, fpRegT2); >- addJump(branchDouble(DoubleGreaterThanOrEqualOrUnordered, fpRegT0, fpRegT2), dst); >- break; >- case op_jngreatereq: >- emitLoadDouble(op1, fpRegT2); >- addJump(branchDouble(DoubleGreaterThanOrUnordered, fpRegT0, fpRegT2), dst); >- break; >- default: >- RELEASE_ASSERT_NOT_REACHED(); >- } >- >- if (!notInt32Op2.empty()) >- end.append(jump()); >- } >- >- if (!notInt32Op2.empty()) { >- // Double case 2: Op1 is int32; Op2 is not int32. >- notInt32Op2.link(this); >- >- ASSERT(op2IsInRegisters); >- >- if (!op1IsInRegisters) >- emitLoadPayload(op1, regT0); >- >- convertInt32ToDouble(regT0, fpRegT0); >- >- // Verify op2 is double. 
>- if (!types.second().definitelyIsNumber()) >- addSlowCase(branch32(Above, regT3, TrustedImm32(JSValue::LowestTag))); >- >- // Do the math. >- switch (opcodeID) { >- case op_jless: >- emitLoadDouble(op2, fpRegT1); >- addJump(branchDouble(DoubleLessThan, fpRegT0, fpRegT1), dst); >- break; >- case op_jlesseq: >- emitLoadDouble(op2, fpRegT1); >- addJump(branchDouble(DoubleLessThanOrEqual, fpRegT0, fpRegT1), dst); >- break; >- case op_jgreater: >- emitLoadDouble(op2, fpRegT1); >- addJump(branchDouble(DoubleGreaterThan, fpRegT0, fpRegT1), dst); >- break; >- case op_jgreatereq: >- emitLoadDouble(op2, fpRegT1); >- addJump(branchDouble(DoubleGreaterThanOrEqual, fpRegT0, fpRegT1), dst); >- break; >- case op_jnless: >- emitLoadDouble(op2, fpRegT1); >- addJump(branchDouble(DoubleLessThanOrEqualOrUnordered, fpRegT1, fpRegT0), dst); >- break; >- case op_jnlesseq: >- emitLoadDouble(op2, fpRegT1); >- addJump(branchDouble(DoubleLessThanOrUnordered, fpRegT1, fpRegT0), dst); >- break; >- case op_jngreater: >- emitLoadDouble(op2, fpRegT1); >- addJump(branchDouble(DoubleGreaterThanOrEqualOrUnordered, fpRegT1, fpRegT0), dst); >- break; >- case op_jngreatereq: >- emitLoadDouble(op2, fpRegT1); >- addJump(branchDouble(DoubleGreaterThanOrUnordered, fpRegT1, fpRegT0), dst); >- break; >- default: >- RELEASE_ASSERT_NOT_REACHED(); >- } >- } >- >- end.link(this); >-} >- >-// Mod (%) >- >-/* ------------------------------ BEGIN: OP_MOD ------------------------------ */ >- >-void JIT::emit_op_mod(Instruction* currentInstruction) >-{ >-#if CPU(X86) >- int dst = currentInstruction[1].u.operand; >- int op1 = currentInstruction[2].u.operand; >- int op2 = currentInstruction[3].u.operand; >- >- // Make sure registers are correct for x86 IDIV instructions. >- ASSERT(regT0 == X86Registers::eax); >- ASSERT(regT1 == X86Registers::edx); >- ASSERT(regT2 == X86Registers::ecx); >- ASSERT(regT3 == X86Registers::ebx); >- >- emitLoad2(op1, regT0, regT3, op2, regT1, regT2); >- addSlowCase(branchIfNotInt32(regT1)); >- addSlowCase(branchIfNotInt32(regT0)); >- >- move(regT3, regT0); >- addSlowCase(branchTest32(Zero, regT2)); >- Jump denominatorNotNeg1 = branch32(NotEqual, regT2, TrustedImm32(-1)); >- addSlowCase(branch32(Equal, regT0, TrustedImm32(-2147483647-1))); >- denominatorNotNeg1.link(this); >- x86ConvertToDoubleWord32(); >- x86Div32(regT2); >- Jump numeratorPositive = branch32(GreaterThanOrEqual, regT3, TrustedImm32(0)); >- addSlowCase(branchTest32(Zero, regT1)); >- numeratorPositive.link(this); >- emitStoreInt32(dst, regT1, (op1 == dst || op2 == dst)); >-#else >- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_mod); >- slowPathCall.call(); >-#endif >-} >- >-void JIT::emitSlow_op_mod(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >-#if CPU(X86) >- linkAllSlowCases(iter); >- >- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_mod); >- slowPathCall.call(); >-#else >- UNUSED_PARAM(currentInstruction); >- UNUSED_PARAM(iter); >- // We would have really useful assertions here if it wasn't for the compiler's >- // insistence on attribute noreturn. 
>- // RELEASE_ASSERT_NOT_REACHED(); >-#endif >-} >- >-/* ------------------------------ END: OP_MOD ------------------------------ */ >- >-} // namespace JSC >- >-#endif // USE(JSVALUE32_64) >-#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JITCall.cpp b/Source/JavaScriptCore/jit/JITCall.cpp >deleted file mode 100644 >index 50ab48b15af6d56cd1ed0c3df2596d2528c21626..0000000000000000000000000000000000000000 >--- a/Source/JavaScriptCore/jit/JITCall.cpp >+++ /dev/null >@@ -1,352 +0,0 @@ >-/* >- * Copyright (C) 2008-2018 Apple Inc. All rights reserved. >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
>- */ >- >-#include "config.h" >- >-#if ENABLE(JIT) >-#if USE(JSVALUE64) >-#include "JIT.h" >- >-#include "CallFrameShuffler.h" >-#include "CodeBlock.h" >-#include "JITInlines.h" >-#include "JSArray.h" >-#include "JSFunction.h" >-#include "Interpreter.h" >-#include "JSCInlines.h" >-#include "LinkBuffer.h" >-#include "ResultType.h" >-#include "SetupVarargsFrame.h" >-#include "StackAlignment.h" >-#include "ThunkGenerators.h" >-#include <wtf/StringPrintStream.h> >- >- >-namespace JSC { >- >-void JIT::emitPutCallResult(Instruction* instruction) >-{ >- int dst = instruction[1].u.operand; >- emitValueProfilingSite(); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::compileSetupVarargsFrame(OpcodeID opcode, Instruction* instruction, CallLinkInfo* info) >-{ >- int thisValue = instruction[3].u.operand; >- int arguments = instruction[4].u.operand; >- int firstFreeRegister = instruction[5].u.operand; >- int firstVarArgOffset = instruction[6].u.operand; >- >- emitGetVirtualRegister(arguments, regT1); >- Z_JITOperation_EJZZ sizeOperation; >- if (opcode == op_tail_call_forward_arguments) >- sizeOperation = operationSizeFrameForForwardArguments; >- else >- sizeOperation = operationSizeFrameForVarargs; >- callOperation(sizeOperation, regT1, -firstFreeRegister, firstVarArgOffset); >- move(TrustedImm32(-firstFreeRegister), regT1); >- emitSetVarargsFrame(*this, returnValueGPR, false, regT1, regT1); >- addPtr(TrustedImm32(-(sizeof(CallerFrameAndPC) + WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(void*)))), regT1, stackPointerRegister); >- emitGetVirtualRegister(arguments, regT2); >- F_JITOperation_EFJZZ setupOperation; >- if (opcode == op_tail_call_forward_arguments) >- setupOperation = operationSetupForwardArgumentsFrame; >- else >- setupOperation = operationSetupVarargsFrame; >- callOperation(setupOperation, regT1, regT2, firstVarArgOffset, regT0); >- move(returnValueGPR, regT1); >- >- // Profile the argument count. >- load32(Address(regT1, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), regT2); >- load32(info->addressOfMaxNumArguments(), regT0); >- Jump notBiggest = branch32(Above, regT0, regT2); >- store32(regT2, info->addressOfMaxNumArguments()); >- notBiggest.link(this); >- >- // Initialize 'this'. 
>- emitGetVirtualRegister(thisValue, regT0); >- store64(regT0, Address(regT1, CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register)))); >- >- addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), regT1, stackPointerRegister); >-} >- >-void JIT::compileCallEval(Instruction* instruction) >-{ >- addPtr(TrustedImm32(-static_cast<ptrdiff_t>(sizeof(CallerFrameAndPC))), stackPointerRegister, regT1); >- storePtr(callFrameRegister, Address(regT1, CallFrame::callerFrameOffset())); >- >- addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- checkStackPointerAlignment(); >- >- callOperation(operationCallEval, regT1); >- >- addSlowCase(branchIfEmpty(regT0)); >- >- sampleCodeBlock(m_codeBlock); >- >- emitPutCallResult(instruction); >-} >- >-void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- CallLinkInfo* info = m_codeBlock->addCallLinkInfo(); >- info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0); >- >- int registerOffset = -instruction[4].u.operand; >- >- addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); >- >- load64(Address(stackPointerRegister, sizeof(Register) * CallFrameSlot::callee - sizeof(CallerFrameAndPC)), regT0); >- emitDumbVirtualCall(*vm(), info); >- addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- checkStackPointerAlignment(); >- >- sampleCodeBlock(m_codeBlock); >- >- emitPutCallResult(instruction); >-} >- >-void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex) >-{ >- int callee = instruction[2].u.operand; >- >- /* Caller always: >- - Updates callFrameRegister to callee callFrame. >- - Initializes ArgumentCount; CallerFrame; Callee. >- >- For a JS call: >- - Callee initializes ReturnPC; CodeBlock. >- - Callee restores callFrameRegister before return. >- >- For a non-JS call: >- - Caller initializes ReturnPC; CodeBlock. >- - Caller restores callFrameRegister after return. 
>- */ >- COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct), call_and_construct_opcodes_must_be_same_length); >- COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_call_varargs), call_and_call_varargs_opcodes_must_be_same_length); >- COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct_varargs), call_and_construct_varargs_opcodes_must_be_same_length); >- COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call), call_and_tail_call_opcodes_must_be_same_length); >- COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call_varargs), call_and_tail_call_varargs_opcodes_must_be_same_length); >- COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call_forward_arguments), call_and_tail_call_forward_arguments_opcodes_must_be_same_length); >- >- CallLinkInfo* info = nullptr; >- if (opcodeID != op_call_eval) >- info = m_codeBlock->addCallLinkInfo(); >- if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) >- compileSetupVarargsFrame(opcodeID, instruction, info); >- else { >- int argCount = instruction[3].u.operand; >- int registerOffset = -instruction[4].u.operand; >- >- if (opcodeID == op_call && shouldEmitProfiling()) { >- emitGetVirtualRegister(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0); >- Jump done = branchIfNotCell(regT0); >- load32(Address(regT0, JSCell::structureIDOffset()), regT0); >- store32(regT0, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructureID()); >- done.link(this); >- } >- >- addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); >- store32(TrustedImm32(argCount), Address(stackPointerRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC))); >- } // SP holds newCallFrame + sizeof(CallerFrameAndPC), with ArgumentCount initialized. >- >- uint32_t bytecodeOffset = m_codeBlock->bytecodeOffset(instruction); >- uint32_t locationBits = CallSiteIndex(bytecodeOffset).bits(); >- store32(TrustedImm32(locationBits), Address(callFrameRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + TagOffset)); >- >- emitGetVirtualRegister(callee, regT0); // regT0 holds callee. 
>- store64(regT0, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC))); >- >- if (opcodeID == op_call_eval) { >- compileCallEval(instruction); >- return; >- } >- >- DataLabelPtr addressOfLinkedFunctionCheck; >- Jump slowCase = branchPtrWithPatch(NotEqual, regT0, addressOfLinkedFunctionCheck, TrustedImmPtr(nullptr)); >- addSlowCase(slowCase); >- >- ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex); >- info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0); >- m_callCompilationInfo.append(CallCompilationInfo()); >- m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck; >- m_callCompilationInfo[callLinkInfoIndex].callLinkInfo = info; >- >- if (opcodeID == op_tail_call) { >- CallFrameShuffleData shuffleData; >- shuffleData.numPassedArgs = instruction[3].u.operand; >- shuffleData.tagTypeNumber = GPRInfo::tagTypeNumberRegister; >- shuffleData.numLocals = >- instruction[4].u.operand - sizeof(CallerFrameAndPC) / sizeof(Register); >- shuffleData.args.resize(instruction[3].u.operand); >- for (int i = 0; i < instruction[3].u.operand; ++i) { >- shuffleData.args[i] = >- ValueRecovery::displacedInJSStack( >- virtualRegisterForArgument(i) - instruction[4].u.operand, >- DataFormatJS); >- } >- shuffleData.callee = >- ValueRecovery::inGPR(regT0, DataFormatJS); >- shuffleData.setupCalleeSaveRegisters(m_codeBlock); >- info->setFrameShuffleData(shuffleData); >- CallFrameShuffler(*this, shuffleData).prepareForTailCall(); >- m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall(); >- return; >- } >- >- if (opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) { >- emitRestoreCalleeSaves(); >- prepareForTailCallSlow(); >- m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall(); >- return; >- } >- >- m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(); >- >- addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- checkStackPointerAlignment(); >- >- sampleCodeBlock(m_codeBlock); >- >- emitPutCallResult(instruction); >-} >- >-void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter, unsigned callLinkInfoIndex) >-{ >- if (opcodeID == op_call_eval) { >- compileCallEvalSlowCase(instruction, iter); >- return; >- } >- >- linkAllSlowCases(iter); >- >- if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) >- emitRestoreCalleeSaves(); >- >- move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); >- >- m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = >- emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).retaggedCode<NoPtrTag>()); >- >- if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) { >- abortWithReason(JITDidReturnFromTailCall); >- return; >- } >- >- addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- checkStackPointerAlignment(); >- >- sampleCodeBlock(m_codeBlock); >- >- emitPutCallResult(instruction); >-} >- >-void JIT::emit_op_call(Instruction* currentInstruction) >-{ >- compileOpCall(op_call, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_tail_call(Instruction* currentInstruction) >-{ >- compileOpCall(op_tail_call, currentInstruction, m_callLinkInfoIndex++); >-} >- 
>-void JIT::emit_op_call_eval(Instruction* currentInstruction) >-{ >- compileOpCall(op_call_eval, currentInstruction, m_callLinkInfoIndex); >-} >- >-void JIT::emit_op_call_varargs(Instruction* currentInstruction) >-{ >- compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_tail_call_varargs(Instruction* currentInstruction) >-{ >- compileOpCall(op_tail_call_varargs, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_tail_call_forward_arguments(Instruction* currentInstruction) >-{ >- compileOpCall(op_tail_call_forward_arguments, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_construct_varargs(Instruction* currentInstruction) >-{ >- compileOpCall(op_construct_varargs, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_construct(Instruction* currentInstruction) >-{ >- compileOpCall(op_construct, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_call, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_tail_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_tail_call, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_call_eval(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_call_eval, currentInstruction, iter, m_callLinkInfoIndex); >-} >- >-void JIT::emitSlow_op_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_tail_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_tail_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_tail_call_forward_arguments(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_tail_call_forward_arguments, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_construct_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_construct_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_construct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_construct, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-} // namespace JSC >- >-#endif // USE(JSVALUE64) >-#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JITCall32_64.cpp b/Source/JavaScriptCore/jit/JITCall32_64.cpp >deleted file mode 100644 >index 88bef12ceb9568bbac868849d8245ecf22b7e01b..0000000000000000000000000000000000000000 >--- a/Source/JavaScriptCore/jit/JITCall32_64.cpp >+++ /dev/null >@@ -1,338 +0,0 @@ >-/* >- * Copyright (C) 2008-2018 Apple Inc. All rights reserved. >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. 
Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >- */ >- >-#include "config.h" >- >-#if ENABLE(JIT) >-#if USE(JSVALUE32_64) >-#include "JIT.h" >- >-#include "CodeBlock.h" >-#include "Interpreter.h" >-#include "JITInlines.h" >-#include "JSArray.h" >-#include "JSFunction.h" >-#include "JSCInlines.h" >-#include "LinkBuffer.h" >-#include "ResultType.h" >-#include "SetupVarargsFrame.h" >-#include "StackAlignment.h" >-#include "ThunkGenerators.h" >-#include <wtf/StringPrintStream.h> >- >-namespace JSC { >- >-void JIT::emitPutCallResult(Instruction* instruction) >-{ >- int dst = instruction[1].u.operand; >- emitValueProfilingSite(); >- emitStore(dst, regT1, regT0); >-} >- >-void JIT::emit_op_ret(Instruction* currentInstruction) >-{ >- unsigned dst = currentInstruction[1].u.operand; >- >- emitLoad(dst, regT1, regT0); >- >- checkStackPointerAlignment(); >- emitRestoreCalleeSaves(); >- emitFunctionEpilogue(); >- ret(); >-} >- >-void JIT::emitSlow_op_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_call, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_tail_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_tail_call, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_call_eval(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_call_eval, currentInstruction, iter, m_callLinkInfoIndex); >-} >- >-void JIT::emitSlow_op_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_tail_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_tail_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_tail_call_forward_arguments(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_tail_call_forward_arguments, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_construct_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_construct_varargs, currentInstruction, iter, m_callLinkInfoIndex++); >-} >- >-void JIT::emitSlow_op_construct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpCallSlowCase(op_construct, currentInstruction, iter, 
m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_call(Instruction* currentInstruction) >-{ >- compileOpCall(op_call, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_tail_call(Instruction* currentInstruction) >-{ >- compileOpCall(op_tail_call, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_call_eval(Instruction* currentInstruction) >-{ >- compileOpCall(op_call_eval, currentInstruction, m_callLinkInfoIndex); >-} >- >-void JIT::emit_op_call_varargs(Instruction* currentInstruction) >-{ >- compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_tail_call_varargs(Instruction* currentInstruction) >-{ >- compileOpCall(op_tail_call_varargs, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_tail_call_forward_arguments(Instruction* currentInstruction) >-{ >- compileOpCall(op_tail_call_forward_arguments, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_construct_varargs(Instruction* currentInstruction) >-{ >- compileOpCall(op_construct_varargs, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::emit_op_construct(Instruction* currentInstruction) >-{ >- compileOpCall(op_construct, currentInstruction, m_callLinkInfoIndex++); >-} >- >-void JIT::compileSetupVarargsFrame(OpcodeID opcode, Instruction* instruction, CallLinkInfo* info) >-{ >- int thisValue = instruction[3].u.operand; >- int arguments = instruction[4].u.operand; >- int firstFreeRegister = instruction[5].u.operand; >- int firstVarArgOffset = instruction[6].u.operand; >- >- emitLoad(arguments, regT1, regT0); >- Z_JITOperation_EJZZ sizeOperation; >- if (opcode == op_tail_call_forward_arguments) >- sizeOperation = operationSizeFrameForForwardArguments; >- else >- sizeOperation = operationSizeFrameForVarargs; >- callOperation(sizeOperation, JSValueRegs(regT1, regT0), -firstFreeRegister, firstVarArgOffset); >- move(TrustedImm32(-firstFreeRegister), regT1); >- emitSetVarargsFrame(*this, returnValueGPR, false, regT1, regT1); >- addPtr(TrustedImm32(-(sizeof(CallerFrameAndPC) + WTF::roundUpToMultipleOf(stackAlignmentBytes(), 6 * sizeof(void*)))), regT1, stackPointerRegister); >- emitLoad(arguments, regT2, regT4); >- F_JITOperation_EFJZZ setupOperation; >- if (opcode == op_tail_call_forward_arguments) >- setupOperation = operationSetupForwardArgumentsFrame; >- else >- setupOperation = operationSetupVarargsFrame; >- callOperation(setupOperation, regT1, JSValueRegs(regT2, regT4), firstVarArgOffset, regT0); >- move(returnValueGPR, regT1); >- >- // Profile the argument count. >- load32(Address(regT1, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), regT2); >- load32(info->addressOfMaxNumArguments(), regT0); >- Jump notBiggest = branch32(Above, regT0, regT2); >- store32(regT2, info->addressOfMaxNumArguments()); >- notBiggest.link(this); >- >- // Initialize 'this'. 
>- emitLoad(thisValue, regT2, regT0); >- store32(regT0, Address(regT1, PayloadOffset + (CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))))); >- store32(regT2, Address(regT1, TagOffset + (CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))))); >- >- addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), regT1, stackPointerRegister); >-} >- >-void JIT::compileCallEval(Instruction* instruction) >-{ >- addPtr(TrustedImm32(-static_cast<ptrdiff_t>(sizeof(CallerFrameAndPC))), stackPointerRegister, regT1); >- storePtr(callFrameRegister, Address(regT1, CallFrame::callerFrameOffset())); >- >- addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- >- callOperation(operationCallEval, regT1); >- >- addSlowCase(branchIfEmpty(regT1)); >- >- sampleCodeBlock(m_codeBlock); >- >- emitPutCallResult(instruction); >-} >- >-void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- CallLinkInfo* info = m_codeBlock->addCallLinkInfo(); >- info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0); >- >- int registerOffset = -instruction[4].u.operand; >- int callee = instruction[2].u.operand; >- >- addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); >- >- emitLoad(callee, regT1, regT0); >- emitDumbVirtualCall(*vm(), info); >- addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- checkStackPointerAlignment(); >- >- sampleCodeBlock(m_codeBlock); >- >- emitPutCallResult(instruction); >-} >- >-void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex) >-{ >- int callee = instruction[2].u.operand; >- >- /* Caller always: >- - Updates callFrameRegister to callee callFrame. >- - Initializes ArgumentCount; CallerFrame; Callee. >- >- For a JS call: >- - Callee initializes ReturnPC; CodeBlock. >- - Callee restores callFrameRegister before return. >- >- For a non-JS call: >- - Caller initializes ReturnPC; CodeBlock. >- - Caller restores callFrameRegister after return. >- */ >- CallLinkInfo* info = nullptr; >- if (opcodeID != op_call_eval) >- info = m_codeBlock->addCallLinkInfo(); >- if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) >- compileSetupVarargsFrame(opcodeID, instruction, info); >- else { >- int argCount = instruction[3].u.operand; >- int registerOffset = -instruction[4].u.operand; >- >- if (opcodeID == op_call && shouldEmitProfiling()) { >- emitLoad(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0, regT1); >- Jump done = branchIfNotCell(regT0); >- loadPtr(Address(regT1, JSCell::structureIDOffset()), regT1); >- storePtr(regT1, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructureID()); >- done.link(this); >- } >- >- addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); >- >- store32(TrustedImm32(argCount), Address(stackPointerRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC))); >- } // SP holds newCallFrame + sizeof(CallerFrameAndPC), with ArgumentCount initialized. 
>- >- uint32_t locationBits = CallSiteIndex(instruction).bits(); >- store32(TrustedImm32(locationBits), tagFor(CallFrameSlot::argumentCount)); >- emitLoad(callee, regT1, regT0); // regT1, regT0 holds callee. >- >- store32(regT0, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC))); >- store32(regT1, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) + TagOffset - sizeof(CallerFrameAndPC))); >- >- if (opcodeID == op_call_eval) { >- compileCallEval(instruction); >- return; >- } >- >- if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) >- emitRestoreCalleeSaves(); >- >- addSlowCase(branchIfNotCell(regT1)); >- >- DataLabelPtr addressOfLinkedFunctionCheck; >- Jump slowCase = branchPtrWithPatch(NotEqual, regT0, addressOfLinkedFunctionCheck, TrustedImmPtr(nullptr)); >- >- addSlowCase(slowCase); >- >- ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex); >- info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0); >- m_callCompilationInfo.append(CallCompilationInfo()); >- m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck; >- m_callCompilationInfo[callLinkInfoIndex].callLinkInfo = info; >- >- checkStackPointerAlignment(); >- if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments) { >- prepareForTailCallSlow(); >- m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall(); >- return; >- } >- >- m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(); >- >- addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- checkStackPointerAlignment(); >- >- sampleCodeBlock(m_codeBlock); >- emitPutCallResult(instruction); >-} >- >-void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter, unsigned callLinkInfoIndex) >-{ >- if (opcodeID == op_call_eval) { >- compileCallEvalSlowCase(instruction, iter); >- return; >- } >- >- linkAllSlowCases(iter); >- >- move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); >- >- if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) >- emitRestoreCalleeSaves(); >- >- m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).retaggedCode<NoPtrTag>()); >- >- if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) { >- abortWithReason(JITDidReturnFromTailCall); >- return; >- } >- >- addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- checkStackPointerAlignment(); >- >- sampleCodeBlock(m_codeBlock); >- emitPutCallResult(instruction); >-} >- >-} // namespace JSC >- >-#endif // USE(JSVALUE32_64) >-#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JITOpcodes.cpp b/Source/JavaScriptCore/jit/JITOpcodes.cpp >deleted file mode 100644 >index 38621d886e70c4158ecd7a8eaa9f6dc1105cb829..0000000000000000000000000000000000000000 >--- a/Source/JavaScriptCore/jit/JITOpcodes.cpp >+++ /dev/null >@@ -1,1462 +0,0 @@ >-/* >- * Copyright (C) 2009-2018 Apple Inc. All rights reserved. 
>- * Copyright (C) 2010 Patrick Gansterer <paroga@paroga.com> >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >- */ >- >-#include "config.h" >-#if ENABLE(JIT) >-#include "JIT.h" >- >-#include "BasicBlockLocation.h" >-#include "BytecodeStructs.h" >-#include "Exception.h" >-#include "Heap.h" >-#include "InterpreterInlines.h" >-#include "JITInlines.h" >-#include "JSArray.h" >-#include "JSCast.h" >-#include "JSFunction.h" >-#include "JSPropertyNameEnumerator.h" >-#include "LinkBuffer.h" >-#include "MaxFrameExtentForSlowPathCall.h" >-#include "SlowPathCall.h" >-#include "SuperSampler.h" >-#include "ThunkGenerators.h" >-#include "TypeLocation.h" >-#include "TypeProfilerLog.h" >-#include "VirtualRegister.h" >-#include "Watchdog.h" >- >-namespace JSC { >- >-#if USE(JSVALUE64) >- >-void JIT::emit_op_mov(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(src, regT0); >- emitPutVirtualRegister(dst); >-} >- >- >-void JIT::emit_op_end(Instruction* currentInstruction) >-{ >- RELEASE_ASSERT(returnValueGPR != callFrameRegister); >- emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); >- emitRestoreCalleeSaves(); >- emitFunctionEpilogue(); >- ret(); >-} >- >-void JIT::emit_op_jmp(Instruction* currentInstruction) >-{ >- unsigned target = currentInstruction[1].u.operand; >- addJump(jump(), target); >-} >- >-void JIT::emit_op_new_object(Instruction* currentInstruction) >-{ >- Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); >- size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity()); >- Allocator allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); >- >- RegisterID resultReg = regT0; >- RegisterID allocatorReg = regT1; >- RegisterID scratchReg = regT2; >- >- if (!allocator) >- addSlowCase(jump()); >- else { >- JumpList slowCases; >- auto butterfly = TrustedImmPtr(nullptr); >- emitAllocateJSObject(resultReg, JITAllocator::constant(allocator), allocatorReg, TrustedImmPtr(structure), butterfly, scratchReg, slowCases); >- emitInitializeInlineStorage(resultReg, structure->inlineCapacity()); >- addSlowCase(slowCases); >- 
emitPutVirtualRegister(currentInstruction[1].u.operand); >- } >-} >- >-void JIT::emitSlow_op_new_object(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int dst = currentInstruction[1].u.operand; >- Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); >- callOperation(operationNewObject, structure); >- emitStoreCell(dst, returnValueGPR); >-} >- >-void JIT::emit_op_overrides_has_instance(Instruction* currentInstruction) >-{ >- auto& bytecode = *reinterpret_cast<OpOverridesHasInstance*>(currentInstruction); >- int dst = bytecode.dst(); >- int constructor = bytecode.constructor(); >- int hasInstanceValue = bytecode.hasInstanceValue(); >- >- emitGetVirtualRegister(hasInstanceValue, regT0); >- >- // We don't jump if we know what Symbol.hasInstance would do. >- Jump customhasInstanceValue = branchPtr(NotEqual, regT0, TrustedImmPtr(m_codeBlock->globalObject()->functionProtoHasInstanceSymbolFunction())); >- >- emitGetVirtualRegister(constructor, regT0); >- >- // Check that constructor 'ImplementsDefaultHasInstance' i.e. the object is not a C-API user nor a bound function. >- test8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(ImplementsDefaultHasInstance), regT0); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- Jump done = jump(); >- >- customhasInstanceValue.link(this); >- move(TrustedImm32(ValueTrue), regT0); >- >- done.link(this); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_instanceof(Instruction* currentInstruction) >-{ >- auto& bytecode = *reinterpret_cast<OpInstanceof*>(currentInstruction); >- int dst = bytecode.dst(); >- int value = bytecode.value(); >- int proto = bytecode.prototype(); >- >- // Load the operands (baseVal, proto, and value respectively) into registers. >- // We use regT0 for baseVal since we will be done with this first, and we can then use it for the result. >- emitGetVirtualRegister(value, regT2); >- emitGetVirtualRegister(proto, regT1); >- >- // Check that proto are cells. baseVal must be a cell - this is checked by the get_by_id for Symbol.hasInstance. >- emitJumpSlowCaseIfNotJSCell(regT2, value); >- emitJumpSlowCaseIfNotJSCell(regT1, proto); >- >- JITInstanceOfGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), >- RegisterSet::stubUnavailableRegisters(), >- regT0, // result >- regT2, // value >- regT1, // proto >- regT3, regT4); // scratch >- gen.generateFastPath(*this); >- m_instanceOfs.append(gen); >- >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emitSlow_op_instanceof(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int resultVReg = currentInstruction[1].u.operand; >- >- JITInstanceOfGenerator& gen = m_instanceOfs[m_instanceOfIndex++]; >- >- Label coldPathBegin = label(); >- Call call = callOperation(operationInstanceOfOptimize, resultVReg, gen.stubInfo(), regT2, regT1); >- gen.reportSlowPathCall(coldPathBegin, call); >-} >- >-void JIT::emit_op_instanceof_custom(Instruction*) >-{ >- // This always goes to slow path since we expect it to be rare. 
>- addSlowCase(jump()); >-} >- >-void JIT::emit_op_is_empty(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(value, regT0); >- compare64(Equal, regT0, TrustedImm32(JSValue::encode(JSValue())), regT0); >- >- boxBoolean(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_is_undefined(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(value, regT0); >- Jump isCell = branchIfCell(regT0); >- >- compare64(Equal, regT0, TrustedImm32(ValueUndefined), regT0); >- Jump done = jump(); >- >- isCell.link(this); >- Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >- move(TrustedImm32(0), regT0); >- Jump notMasqueradesAsUndefined = jump(); >- >- isMasqueradesAsUndefined.link(this); >- emitLoadStructure(*vm(), regT0, regT1, regT2); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- loadPtr(Address(regT1, Structure::globalObjectOffset()), regT1); >- comparePtr(Equal, regT0, regT1, regT0); >- >- notMasqueradesAsUndefined.link(this); >- done.link(this); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_is_boolean(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(value, regT0); >- xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), regT0); >- test64(Zero, regT0, TrustedImm32(static_cast<int32_t>(~1)), regT0); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_is_number(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(value, regT0); >- test64(NonZero, regT0, tagTypeNumberRegister, regT0); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_is_cell_with_type(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- int type = currentInstruction[3].u.operand; >- >- emitGetVirtualRegister(value, regT0); >- Jump isNotCell = branchIfNotCell(regT0); >- >- compare8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(type), regT0); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- Jump done = jump(); >- >- isNotCell.link(this); >- move(TrustedImm32(ValueFalse), regT0); >- >- done.link(this); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_is_object(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(value, regT0); >- Jump isNotCell = branchIfNotCell(regT0); >- >- compare8(AboveOrEqual, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType), regT0); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- Jump done = jump(); >- >- isNotCell.link(this); >- move(TrustedImm32(ValueFalse), regT0); >- >- done.link(this); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_ret(Instruction* currentInstruction) >-{ >- ASSERT(callFrameRegister != regT1); >- ASSERT(regT1 != returnValueGPR); >- ASSERT(returnValueGPR != callFrameRegister); >- >- // Return the result in %eax. 
>- emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); >- >- checkStackPointerAlignment(); >- emitRestoreCalleeSaves(); >- emitFunctionEpilogue(); >- ret(); >-} >- >-void JIT::emit_op_to_primitive(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(src, regT0); >- >- Jump isImm = branchIfNotCell(regT0); >- addSlowCase(branchIfObject(regT0)); >- isImm.link(this); >- >- if (dst != src) >- emitPutVirtualRegister(dst); >- >-} >- >-void JIT::emit_op_set_function_name(Instruction* currentInstruction) >-{ >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- emitGetVirtualRegister(currentInstruction[2].u.operand, regT1); >- callOperation(operationSetFunctionName, regT0, regT1); >-} >- >-void JIT::emit_op_not(Instruction* currentInstruction) >-{ >- emitGetVirtualRegister(currentInstruction[2].u.operand, regT0); >- >- // Invert against JSValue(false); if the value was tagged as a boolean, then all bits will be >- // clear other than the low bit (which will be 0 or 1 for false or true inputs respectively). >- // Then invert against JSValue(true), which will add the tag back in, and flip the low bit. >- xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), regT0); >- addSlowCase(branchTestPtr(NonZero, regT0, TrustedImm32(static_cast<int32_t>(~1)))); >- xor64(TrustedImm32(static_cast<int32_t>(ValueTrue)), regT0); >- >- emitPutVirtualRegister(currentInstruction[1].u.operand); >-} >- >-void JIT::emit_op_jfalse(Instruction* currentInstruction) >-{ >- unsigned target = currentInstruction[2].u.operand; >- >- GPRReg value = regT0; >- GPRReg result = regT1; >- GPRReg scratch = regT2; >- bool shouldCheckMasqueradesAsUndefined = true; >- >- emitGetVirtualRegister(currentInstruction[1].u.operand, value); >- emitConvertValueToBoolean(*vm(), JSValueRegs(value), result, scratch, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()); >- >- addJump(branchTest32(Zero, result), target); >-} >- >-void JIT::emit_op_jeq_null(Instruction* currentInstruction) >-{ >- int src = currentInstruction[1].u.operand; >- unsigned target = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(src, regT0); >- Jump isImmediate = branchIfNotCell(regT0); >- >- // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure. >- Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >- emitLoadStructure(*vm(), regT0, regT2, regT1); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- addJump(branchPtr(Equal, Address(regT2, Structure::globalObjectOffset()), regT0), target); >- Jump masqueradesGlobalObjectIsForeign = jump(); >- >- // Now handle the immediate cases - undefined & null >- isImmediate.link(this); >- and64(TrustedImm32(~TagBitUndefined), regT0); >- addJump(branch64(Equal, regT0, TrustedImm64(JSValue::encode(jsNull()))), target); >- >- isNotMasqueradesAsUndefined.link(this); >- masqueradesGlobalObjectIsForeign.link(this); >-}; >-void JIT::emit_op_jneq_null(Instruction* currentInstruction) >-{ >- int src = currentInstruction[1].u.operand; >- unsigned target = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(src, regT0); >- Jump isImmediate = branchIfNotCell(regT0); >- >- // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure. 
>- addJump(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target); >- emitLoadStructure(*vm(), regT0, regT2, regT1); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- addJump(branchPtr(NotEqual, Address(regT2, Structure::globalObjectOffset()), regT0), target); >- Jump wasNotImmediate = jump(); >- >- // Now handle the immediate cases - undefined & null >- isImmediate.link(this); >- and64(TrustedImm32(~TagBitUndefined), regT0); >- addJump(branch64(NotEqual, regT0, TrustedImm64(JSValue::encode(jsNull()))), target); >- >- wasNotImmediate.link(this); >-} >- >-void JIT::emit_op_jneq_ptr(Instruction* currentInstruction) >-{ >- int src = currentInstruction[1].u.operand; >- Special::Pointer ptr = currentInstruction[2].u.specialPointer; >- unsigned target = currentInstruction[3].u.operand; >- >- emitGetVirtualRegister(src, regT0); >- CCallHelpers::Jump equal = branchPtr(Equal, regT0, TrustedImmPtr(actualPointerFor(m_codeBlock, ptr))); >- store32(TrustedImm32(1), &currentInstruction[4].u.operand); >- addJump(jump(), target); >- equal.link(this); >-} >- >-void JIT::emit_op_eq(Instruction* currentInstruction) >-{ >- emitGetVirtualRegisters(currentInstruction[2].u.operand, regT0, currentInstruction[3].u.operand, regT1); >- emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); >- compare32(Equal, regT1, regT0, regT0); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(currentInstruction[1].u.operand); >-} >- >-void JIT::emit_op_jeq(Instruction* currentInstruction) >-{ >- unsigned target = currentInstruction[3].u.operand; >- emitGetVirtualRegisters(currentInstruction[1].u.operand, regT0, currentInstruction[2].u.operand, regT1); >- emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); >- addJump(branch32(Equal, regT0, regT1), target); >-} >- >-void JIT::emit_op_jtrue(Instruction* currentInstruction) >-{ >- unsigned target = currentInstruction[2].u.operand; >- >- GPRReg value = regT0; >- GPRReg result = regT1; >- GPRReg scratch = regT2; >- bool shouldCheckMasqueradesAsUndefined = true; >- emitGetVirtualRegister(currentInstruction[1].u.operand, value); >- emitConvertValueToBoolean(*vm(), JSValueRegs(value), result, scratch, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()); >- addJump(branchTest32(NonZero, result), target); >-} >- >-void JIT::emit_op_neq(Instruction* currentInstruction) >-{ >- emitGetVirtualRegisters(currentInstruction[2].u.operand, regT0, currentInstruction[3].u.operand, regT1); >- emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); >- compare32(NotEqual, regT1, regT0, regT0); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- >- emitPutVirtualRegister(currentInstruction[1].u.operand); >-} >- >-void JIT::emit_op_jneq(Instruction* currentInstruction) >-{ >- unsigned target = currentInstruction[3].u.operand; >- emitGetVirtualRegisters(currentInstruction[1].u.operand, regT0, currentInstruction[2].u.operand, regT1); >- emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); >- addJump(branch32(NotEqual, regT0, regT1), target); >-} >- >-void JIT::emit_op_throw(Instruction* currentInstruction) >-{ >- ASSERT(regT0 == returnValueGPR); >- copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- callOperationNoExceptionCheck(operationThrow, regT0); >- jumpToExceptionHandler(*vm()); >-} >- >-void JIT::compileOpStrictEq(Instruction* currentInstruction, CompileOpStrictEqType type) >-{ >- int dst = currentInstruction[1].u.operand; >- int src1
= currentInstruction[2].u.operand; >- int src2 = currentInstruction[3].u.operand; >- >- emitGetVirtualRegisters(src1, regT0, src2, regT1); >- >- // Jump slow if both are cells (to cover strings). >- move(regT0, regT2); >- or64(regT1, regT2); >- addSlowCase(branchIfCell(regT2)); >- >- // Jump slow if either is a double. First test if it's an integer, which is fine, and then test >- // if it's a double. >- Jump leftOK = branchIfInt32(regT0); >- addSlowCase(branchIfNumber(regT0)); >- leftOK.link(this); >- Jump rightOK = branchIfInt32(regT1); >- addSlowCase(branchIfNumber(regT1)); >- rightOK.link(this); >- >- if (type == CompileOpStrictEqType::StrictEq) >- compare64(Equal, regT1, regT0, regT0); >- else >- compare64(NotEqual, regT1, regT0, regT0); >- boxBoolean(regT0, JSValueRegs { regT0 }); >- >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_stricteq(Instruction* currentInstruction) >-{ >- compileOpStrictEq(currentInstruction, CompileOpStrictEqType::StrictEq); >-} >- >-void JIT::emit_op_nstricteq(Instruction* currentInstruction) >-{ >- compileOpStrictEq(currentInstruction, CompileOpStrictEqType::NStrictEq); >-} >- >-void JIT::compileOpStrictEqJump(Instruction* currentInstruction, CompileOpStrictEqType type) >-{ >- int target = currentInstruction[3].u.operand; >- int src1 = currentInstruction[1].u.operand; >- int src2 = currentInstruction[2].u.operand; >- >- emitGetVirtualRegisters(src1, regT0, src2, regT1); >- >- // Jump slow if both are cells (to cover strings). >- move(regT0, regT2); >- or64(regT1, regT2); >- addSlowCase(branchIfCell(regT2)); >- >- // Jump slow if either is a double. First test if it's an integer, which is fine, and then test >- // if it's a double. >- Jump leftOK = branchIfInt32(regT0); >- addSlowCase(branchIfNumber(regT0)); >- leftOK.link(this); >- Jump rightOK = branchIfInt32(regT1); >- addSlowCase(branchIfNumber(regT1)); >- rightOK.link(this); >- >- if (type == CompileOpStrictEqType::StrictEq) >- addJump(branch64(Equal, regT1, regT0), target); >- else >- addJump(branch64(NotEqual, regT1, regT0), target); >-} >- >-void JIT::emit_op_jstricteq(Instruction* currentInstruction) >-{ >- compileOpStrictEqJump(currentInstruction, CompileOpStrictEqType::StrictEq); >-} >- >-void JIT::emit_op_jnstricteq(Instruction* currentInstruction) >-{ >- compileOpStrictEqJump(currentInstruction, CompileOpStrictEqType::NStrictEq); >-} >- >-void JIT::emitSlow_op_jstricteq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- unsigned target = currentInstruction[3].u.operand; >- callOperation(operationCompareStrictEq, regT0, regT1); >- emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); >-} >- >-void JIT::emitSlow_op_jnstricteq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- unsigned target = currentInstruction[3].u.operand; >- callOperation(operationCompareStrictEq, regT0, regT1); >- emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); >-} >- >-void JIT::emit_op_to_number(Instruction* currentInstruction) >-{ >- int dstVReg = currentInstruction[1].u.operand; >- int srcVReg = currentInstruction[2].u.operand; >- emitGetVirtualRegister(srcVReg, regT0); >- >- addSlowCase(branchIfNotNumber(regT0)); >- >- emitValueProfilingSite(); >- if (srcVReg != dstVReg) >- emitPutVirtualRegister(dstVReg); >-} >- >-void JIT::emit_op_to_string(Instruction* currentInstruction) >-{ >- int srcVReg = currentInstruction[2].u.operand; >- emitGetVirtualRegister(srcVReg, regT0); 
>- >- addSlowCase(branchIfNotCell(regT0)); >- addSlowCase(branchIfNotString(regT0)); >- >- emitPutVirtualRegister(currentInstruction[1].u.operand); >-} >- >-void JIT::emit_op_to_object(Instruction* currentInstruction) >-{ >- int dstVReg = currentInstruction[1].u.operand; >- int srcVReg = currentInstruction[2].u.operand; >- emitGetVirtualRegister(srcVReg, regT0); >- >- addSlowCase(branchIfNotCell(regT0)); >- addSlowCase(branchIfNotObject(regT0)); >- >- emitValueProfilingSite(); >- if (srcVReg != dstVReg) >- emitPutVirtualRegister(dstVReg); >-} >- >-void JIT::emit_op_catch(Instruction* currentInstruction) >-{ >- restoreCalleeSavesFromEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >- >- move(TrustedImmPtr(m_vm), regT3); >- load64(Address(regT3, VM::callFrameForCatchOffset()), callFrameRegister); >- storePtr(TrustedImmPtr(nullptr), Address(regT3, VM::callFrameForCatchOffset())); >- >- addPtr(TrustedImm32(stackPointerOffsetFor(codeBlock()) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- >- callOperationNoExceptionCheck(operationCheckIfExceptionIsUncatchableAndNotifyProfiler); >- Jump isCatchableException = branchTest32(Zero, returnValueGPR); >- jumpToExceptionHandler(*vm()); >- isCatchableException.link(this); >- >- move(TrustedImmPtr(m_vm), regT3); >- load64(Address(regT3, VM::exceptionOffset()), regT0); >- store64(TrustedImm64(JSValue::encode(JSValue())), Address(regT3, VM::exceptionOffset())); >- emitPutVirtualRegister(currentInstruction[1].u.operand); >- >- load64(Address(regT0, Exception::valueOffset()), regT0); >- emitPutVirtualRegister(currentInstruction[2].u.operand); >- >-#if ENABLE(DFG_JIT) >- // FIXME: consider inline caching the process of doing OSR entry, including >- // argument type proofs, storing locals to the buffer, etc >- // https://bugs.webkit.org/show_bug.cgi?id=175598 >- >- ValueProfileAndOperandBuffer* buffer = static_cast<ValueProfileAndOperandBuffer*>(currentInstruction[3].u.pointer); >- if (buffer || !shouldEmitProfiling()) >- callOperation(operationTryOSREnterAtCatch, m_bytecodeOffset); >- else >- callOperation(operationTryOSREnterAtCatchAndValueProfile, m_bytecodeOffset); >- auto skipOSREntry = branchTestPtr(Zero, returnValueGPR); >- emitRestoreCalleeSaves(); >- jump(returnValueGPR, ExceptionHandlerPtrTag); >- skipOSREntry.link(this); >- if (buffer && shouldEmitProfiling()) { >- buffer->forEach([&] (ValueProfileAndOperand& profile) { >- JSValueRegs regs(regT0); >- emitGetVirtualRegister(profile.m_operand, regs); >- emitValueProfilingSite(profile.m_profile); >- }); >- } >-#endif // ENABLE(DFG_JIT) >-} >- >-void JIT::emit_op_identity_with_profile(Instruction*) >-{ >- // We don't need to do anything here... >-} >- >-void JIT::emit_op_get_parent_scope(Instruction* currentInstruction) >-{ >- int currentScope = currentInstruction[2].u.operand; >- emitGetVirtualRegister(currentScope, regT0); >- loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0); >- emitStoreCell(currentInstruction[1].u.operand, regT0); >-} >- >-void JIT::emit_op_switch_imm(Instruction* currentInstruction) >-{ >- size_t tableIndex = currentInstruction[1].u.operand; >- unsigned defaultOffset = currentInstruction[2].u.operand; >- unsigned scrutinee = currentInstruction[3].u.operand; >- >- // create jump table for switch destinations, track this switch statement. 
>- SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex); >- m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset, SwitchRecord::Immediate)); >- jumpTable->ensureCTITable(); >- >- emitGetVirtualRegister(scrutinee, regT0); >- callOperation(operationSwitchImmWithUnknownKeyType, regT0, tableIndex); >- jump(returnValueGPR, JSSwitchPtrTag); >-} >- >-void JIT::emit_op_switch_char(Instruction* currentInstruction) >-{ >- size_t tableIndex = currentInstruction[1].u.operand; >- unsigned defaultOffset = currentInstruction[2].u.operand; >- unsigned scrutinee = currentInstruction[3].u.operand; >- >- // create jump table for switch destinations, track this switch statement. >- SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex); >- m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset, SwitchRecord::Character)); >- jumpTable->ensureCTITable(); >- >- emitGetVirtualRegister(scrutinee, regT0); >- callOperation(operationSwitchCharWithUnknownKeyType, regT0, tableIndex); >- jump(returnValueGPR, JSSwitchPtrTag); >-} >- >-void JIT::emit_op_switch_string(Instruction* currentInstruction) >-{ >- size_t tableIndex = currentInstruction[1].u.operand; >- unsigned defaultOffset = currentInstruction[2].u.operand; >- unsigned scrutinee = currentInstruction[3].u.operand; >- >- // create jump table for switch destinations, track this switch statement. >- StringJumpTable* jumpTable = &m_codeBlock->stringSwitchJumpTable(tableIndex); >- m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset)); >- >- emitGetVirtualRegister(scrutinee, regT0); >- callOperation(operationSwitchStringWithUnknownKeyType, regT0, tableIndex); >- jump(returnValueGPR, JSSwitchPtrTag); >-} >- >-void JIT::emit_op_debug(Instruction* currentInstruction) >-{ >- load32(codeBlock()->debuggerRequestsAddress(), regT0); >- Jump noDebuggerRequests = branchTest32(Zero, regT0); >- callOperation(operationDebug, currentInstruction[1].u.operand); >- noDebuggerRequests.link(this); >-} >- >-void JIT::emit_op_eq_null(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src1 = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(src1, regT0); >- Jump isImmediate = branchIfNotCell(regT0); >- >- Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >- move(TrustedImm32(0), regT0); >- Jump wasNotMasqueradesAsUndefined = jump(); >- >- isMasqueradesAsUndefined.link(this); >- emitLoadStructure(*vm(), regT0, regT2, regT1); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2); >- comparePtr(Equal, regT0, regT2, regT0); >- Jump wasNotImmediate = jump(); >- >- isImmediate.link(this); >- >- and64(TrustedImm32(~TagBitUndefined), regT0); >- compare64(Equal, regT0, TrustedImm32(ValueNull), regT0); >- >- wasNotImmediate.link(this); >- wasNotMasqueradesAsUndefined.link(this); >- >- boxBoolean(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(dst); >- >-} >- >-void JIT::emit_op_neq_null(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src1 = currentInstruction[2].u.operand; >- >- emitGetVirtualRegister(src1, regT0); >- Jump isImmediate = branchIfNotCell(regT0); >- >- Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >- move(TrustedImm32(1), regT0); >- Jump 
wasNotMasqueradesAsUndefined = jump(); >- >- isMasqueradesAsUndefined.link(this); >- emitLoadStructure(*vm(), regT0, regT2, regT1); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2); >- comparePtr(NotEqual, regT0, regT2, regT0); >- Jump wasNotImmediate = jump(); >- >- isImmediate.link(this); >- >- and64(TrustedImm32(~TagBitUndefined), regT0); >- compare64(NotEqual, regT0, TrustedImm32(ValueNull), regT0); >- >- wasNotImmediate.link(this); >- wasNotMasqueradesAsUndefined.link(this); >- >- boxBoolean(regT0, JSValueRegs { regT0 }); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_enter(Instruction*) >-{ >- // Even though CTI doesn't use them, we initialize our constant >- // registers to zap stale pointers, to avoid unnecessarily prolonging >- // object lifetime and increasing GC pressure. >- size_t count = m_codeBlock->m_numVars; >- for (size_t j = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); j < count; ++j) >- emitInitRegister(virtualRegisterForLocal(j).offset()); >- >- emitWriteBarrier(m_codeBlock); >- >- emitEnterOptimizationCheck(); >-} >- >-void JIT::emit_op_get_scope(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- emitGetFromCallFrameHeaderPtr(CallFrameSlot::callee, regT0); >- loadPtr(Address(regT0, JSFunction::offsetOfScopeChain()), regT0); >- emitStoreCell(dst, regT0); >-} >- >-void JIT::emit_op_to_this(Instruction* currentInstruction) >-{ >- WriteBarrierBase<Structure>* cachedStructure = &currentInstruction[2].u.structure; >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT1); >- >- emitJumpSlowCaseIfNotJSCell(regT1); >- >- addSlowCase(branchIfNotType(regT1, FinalObjectType)); >- loadPtr(cachedStructure, regT2); >- addSlowCase(branchTestPtr(Zero, regT2)); >- load32(Address(regT2, Structure::structureIDOffset()), regT2); >- addSlowCase(branch32(NotEqual, Address(regT1, JSCell::structureIDOffset()), regT2)); >-} >- >-void JIT::emit_op_create_this(Instruction* currentInstruction) >-{ >- int callee = currentInstruction[2].u.operand; >- WriteBarrierBase<JSCell>* cachedFunction = &currentInstruction[4].u.jsCell; >- RegisterID calleeReg = regT0; >- RegisterID rareDataReg = regT4; >- RegisterID resultReg = regT0; >- RegisterID allocatorReg = regT1; >- RegisterID structureReg = regT2; >- RegisterID cachedFunctionReg = regT4; >- RegisterID scratchReg = regT3; >- >- emitGetVirtualRegister(callee, calleeReg); >- addSlowCase(branchIfNotFunction(calleeReg)); >- loadPtr(Address(calleeReg, JSFunction::offsetOfRareData()), rareDataReg); >- addSlowCase(branchTestPtr(Zero, rareDataReg)); >- xorPtr(TrustedImmPtr(JSFunctionPoison::key()), rareDataReg); >- loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg); >- loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureReg); >- >- loadPtr(cachedFunction, cachedFunctionReg); >- Jump hasSeenMultipleCallees = branchPtr(Equal, cachedFunctionReg, TrustedImmPtr(JSCell::seenMultipleCalleeObjects())); >- addSlowCase(branchPtr(NotEqual, calleeReg, cachedFunctionReg)); >- hasSeenMultipleCallees.link(this); >- >- JumpList slowCases; >- auto butterfly = TrustedImmPtr(nullptr); >- emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, scratchReg, slowCases); >- emitGetVirtualRegister(callee, scratchReg); >- 
loadPtr(Address(scratchReg, JSFunction::offsetOfRareData()), scratchReg); >- xorPtr(TrustedImmPtr(JSFunctionPoison::key()), scratchReg); >- load32(Address(scratchReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfInlineCapacity()), scratchReg); >- emitInitializeInlineStorage(resultReg, scratchReg); >- addSlowCase(slowCases); >- emitPutVirtualRegister(currentInstruction[1].u.operand); >-} >- >-void JIT::emit_op_check_tdz(Instruction* currentInstruction) >-{ >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- addSlowCase(branchIfEmpty(regT0)); >-} >- >- >-// Slow cases >- >-void JIT::emitSlow_op_eq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- callOperation(operationCompareEq, regT0, regT1); >- boxBoolean(returnValueGPR, JSValueRegs { returnValueGPR }); >- emitPutVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); >-} >- >-void JIT::emitSlow_op_neq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- callOperation(operationCompareEq, regT0, regT1); >- xor32(TrustedImm32(0x1), regT0); >- boxBoolean(returnValueGPR, JSValueRegs { returnValueGPR }); >- emitPutVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); >-} >- >-void JIT::emitSlow_op_jeq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- unsigned target = currentInstruction[3].u.operand; >- callOperation(operationCompareEq, regT0, regT1); >- emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); >-} >- >-void JIT::emitSlow_op_jneq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- unsigned target = currentInstruction[3].u.operand; >- callOperation(operationCompareEq, regT0, regT1); >- emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); >-} >- >-void JIT::emitSlow_op_instanceof_custom(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- auto& bytecode = *reinterpret_cast<OpInstanceofCustom*>(currentInstruction); >- int dst = bytecode.dst(); >- int value = bytecode.value(); >- int constructor = bytecode.constructor(); >- int hasInstanceValue = bytecode.hasInstanceValue(); >- >- emitGetVirtualRegister(value, regT0); >- emitGetVirtualRegister(constructor, regT1); >- emitGetVirtualRegister(hasInstanceValue, regT2); >- callOperation(operationInstanceOfCustom, regT0, regT1, regT2); >- boxBoolean(returnValueGPR, JSValueRegs { returnValueGPR }); >- emitPutVirtualRegister(dst, returnValueGPR); >-} >- >-#endif // USE(JSVALUE64) >- >-void JIT::emit_op_loop_hint(Instruction*) >-{ >- // Emit the JIT optimization check: >- if (canBeOptimized()) { >- addSlowCase(branchAdd32(PositiveOrZero, TrustedImm32(Options::executionCounterIncrementForLoop()), >- AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter()))); >- } >-} >- >-void JIT::emitSlow_op_loop_hint(Instruction*, Vector<SlowCaseEntry>::iterator& iter) >-{ >-#if ENABLE(DFG_JIT) >- // Emit the slow path for the JIT optimization check: >- if (canBeOptimized()) { >- linkAllSlowCases(iter); >- >- copyCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >- >- callOperation(operationOptimize, m_bytecodeOffset); >- Jump noOptimizedEntry = branchTestPtr(Zero, returnValueGPR); >- if (!ASSERT_DISABLED) { >- Jump ok = branchPtr(MacroAssembler::Above, returnValueGPR, 
TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000)))); >- abortWithReason(JITUnreasonableLoopHintJumpTarget); >- ok.link(this); >- } >- jump(returnValueGPR, GPRInfo::callFrameRegister); >- noOptimizedEntry.link(this); >- >- emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_loop_hint)); >- } >-#else >- UNUSED_PARAM(iter); >-#endif >-} >- >-void JIT::emit_op_check_traps(Instruction*) >-{ >- addSlowCase(branchTest8(NonZero, AbsoluteAddress(m_vm->needTrapHandlingAddress()))); >-} >- >-void JIT::emit_op_nop(Instruction*) >-{ >-} >- >-void JIT::emit_op_super_sampler_begin(Instruction*) >-{ >- add32(TrustedImm32(1), AbsoluteAddress(bitwise_cast<void*>(&g_superSamplerCount))); >-} >- >-void JIT::emit_op_super_sampler_end(Instruction*) >-{ >- sub32(TrustedImm32(1), AbsoluteAddress(bitwise_cast<void*>(&g_superSamplerCount))); >-} >- >-void JIT::emitSlow_op_check_traps(Instruction*, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- callOperation(operationHandleTraps); >-} >- >-void JIT::emit_op_new_regexp(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- callOperation(operationNewRegexp, m_codeBlock->regexp(currentInstruction[2].u.operand)); >- emitStoreCell(dst, returnValueGPR); >-} >- >-void JIT::emitNewFuncCommon(Instruction* currentInstruction) >-{ >- Jump lazyJump; >- int dst = currentInstruction[1].u.operand; >- >-#if USE(JSVALUE64) >- emitGetVirtualRegister(currentInstruction[2].u.operand, regT0); >-#else >- emitLoadPayload(currentInstruction[2].u.operand, regT0); >-#endif >- FunctionExecutable* funcExec = m_codeBlock->functionDecl(currentInstruction[3].u.operand); >- >- OpcodeID opcodeID = Interpreter::getOpcodeID(currentInstruction->u.opcode); >- if (opcodeID == op_new_func) >- callOperation(operationNewFunction, dst, regT0, funcExec); >- else if (opcodeID == op_new_generator_func) >- callOperation(operationNewGeneratorFunction, dst, regT0, funcExec); >- else if (opcodeID == op_new_async_func) >- callOperation(operationNewAsyncFunction, dst, regT0, funcExec); >- else { >- ASSERT(opcodeID == op_new_async_generator_func); >- callOperation(operationNewAsyncGeneratorFunction, dst, regT0, funcExec); >- } >-} >- >-void JIT::emit_op_new_func(Instruction* currentInstruction) >-{ >- emitNewFuncCommon(currentInstruction); >-} >- >-void JIT::emit_op_new_generator_func(Instruction* currentInstruction) >-{ >- emitNewFuncCommon(currentInstruction); >-} >- >-void JIT::emit_op_new_async_generator_func(Instruction* currentInstruction) >-{ >- emitNewFuncCommon(currentInstruction); >-} >- >-void JIT::emit_op_new_async_func(Instruction* currentInstruction) >-{ >- emitNewFuncCommon(currentInstruction); >-} >- >-void JIT::emitNewFuncExprCommon(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >-#if USE(JSVALUE64) >- emitGetVirtualRegister(currentInstruction[2].u.operand, regT0); >-#else >- emitLoadPayload(currentInstruction[2].u.operand, regT0); >-#endif >- >- FunctionExecutable* function = m_codeBlock->functionExpr(currentInstruction[3].u.operand); >- OpcodeID opcodeID = Interpreter::getOpcodeID(currentInstruction->u.opcode); >- >- if (opcodeID == op_new_func_exp) >- callOperation(operationNewFunction, dst, regT0, function); >- else if (opcodeID == op_new_generator_func_exp) >- callOperation(operationNewGeneratorFunction, dst, regT0, function); >- else if (opcodeID == op_new_async_func_exp) >- callOperation(operationNewAsyncFunction, dst, regT0, function); >- else { >- ASSERT(opcodeID == 
op_new_async_generator_func_exp); >- callOperation(operationNewAsyncGeneratorFunction, dst, regT0, function); >- } >-} >- >-void JIT::emit_op_new_func_exp(Instruction* currentInstruction) >-{ >- emitNewFuncExprCommon(currentInstruction); >-} >- >-void JIT::emit_op_new_generator_func_exp(Instruction* currentInstruction) >-{ >- emitNewFuncExprCommon(currentInstruction); >-} >- >-void JIT::emit_op_new_async_func_exp(Instruction* currentInstruction) >-{ >- emitNewFuncExprCommon(currentInstruction); >-} >- >-void JIT::emit_op_new_async_generator_func_exp(Instruction* currentInstruction) >-{ >- emitNewFuncExprCommon(currentInstruction); >-} >- >-void JIT::emit_op_new_array(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int valuesIndex = currentInstruction[2].u.operand; >- int size = currentInstruction[3].u.operand; >- addPtr(TrustedImm32(valuesIndex * sizeof(Register)), callFrameRegister, regT0); >- callOperation(operationNewArrayWithProfile, dst, >- currentInstruction[4].u.arrayAllocationProfile, regT0, size); >-} >- >-void JIT::emit_op_new_array_with_size(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int sizeIndex = currentInstruction[2].u.operand; >-#if USE(JSVALUE64) >- emitGetVirtualRegister(sizeIndex, regT0); >- callOperation(operationNewArrayWithSizeAndProfile, dst, >- currentInstruction[3].u.arrayAllocationProfile, regT0); >-#else >- emitLoad(sizeIndex, regT1, regT0); >- callOperation(operationNewArrayWithSizeAndProfile, dst, >- currentInstruction[3].u.arrayAllocationProfile, JSValueRegs(regT1, regT0)); >-#endif >-} >- >-#if USE(JSVALUE64) >-void JIT::emit_op_has_structure_property(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int enumerator = currentInstruction[4].u.operand; >- >- emitGetVirtualRegister(base, regT0); >- emitGetVirtualRegister(enumerator, regT1); >- emitJumpSlowCaseIfNotJSCell(regT0, base); >- >- load32(Address(regT0, JSCell::structureIDOffset()), regT0); >- addSlowCase(branch32(NotEqual, regT0, Address(regT1, JSPropertyNameEnumerator::cachedStructureIDOffset()))); >- >- move(TrustedImm64(JSValue::encode(jsBoolean(true))), regT0); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::privateCompileHasIndexedProperty(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode) >-{ >- Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >- >- PatchableJump badType; >- >- // FIXME: Add support for other types like TypedArrays and Arguments. >- // See https://bugs.webkit.org/show_bug.cgi?id=135033 and https://bugs.webkit.org/show_bug.cgi?id=135034. 
>- JumpList slowCases = emitLoadForArrayMode(currentInstruction, arrayMode, badType); >- move(TrustedImm64(JSValue::encode(jsBoolean(true))), regT0); >- Jump done = jump(); >- >- LinkBuffer patchBuffer(*this, m_codeBlock); >- >- patchBuffer.link(badType, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- >- patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >- >- byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >- m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >- "Baseline has_indexed_property stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >- >- MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >- MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationHasIndexedPropertyGeneric)); >-} >- >-void JIT::emit_op_has_indexed_property(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >- >- emitGetVirtualRegisters(base, regT0, property, regT1); >- >- // This is technically incorrect - we're zero-extending an int32. On the hot path this doesn't matter. >- // We check the value as if it was a uint32 against the m_vectorLength - which will always fail if >- // number was signed since m_vectorLength is always less than intmax (since the total allocation >- // size is always less than 4Gb). As such zero extending will have been correct (and extending the value >- // to 64-bits is necessary since it's used in the address calculation. We zero extend rather than sign >- // extending since it makes it easier to re-tag the value in the slow case. >- zeroExtend32ToPtr(regT1, regT1); >- >- emitJumpSlowCaseIfNotJSCell(regT0, base); >- emitArrayProfilingSiteWithCell(regT0, regT2, profile); >- and32(TrustedImm32(IndexingShapeMask), regT2); >- >- JITArrayMode mode = chooseArrayMode(profile); >- PatchableJump badType; >- >- // FIXME: Add support for other types like TypedArrays and Arguments. >- // See https://bugs.webkit.org/show_bug.cgi?id=135033 and https://bugs.webkit.org/show_bug.cgi?id=135034. 
>- JumpList slowCases = emitLoadForArrayMode(currentInstruction, mode, badType); >- >- move(TrustedImm64(JSValue::encode(jsBoolean(true))), regT0); >- >- addSlowCase(badType); >- addSlowCase(slowCases); >- >- Label done = label(); >- >- emitPutVirtualRegister(dst); >- >- Label nextHotPath = label(); >- >- m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, PatchableJump(), badType, mode, profile, done, nextHotPath)); >-} >- >-void JIT::emitSlow_op_has_indexed_property(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >- >- Label slowPath = label(); >- >- emitGetVirtualRegister(base, regT0); >- emitGetVirtualRegister(property, regT1); >- Call call = callOperation(operationHasIndexedPropertyDefault, dst, regT0, regT1, byValInfo); >- >- m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >- m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >- m_byValInstructionIndex++; >-} >- >-void JIT::emit_op_get_direct_pname(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int index = currentInstruction[4].u.operand; >- int enumerator = currentInstruction[5].u.operand; >- >- // Check that base is a cell >- emitGetVirtualRegister(base, regT0); >- emitJumpSlowCaseIfNotJSCell(regT0, base); >- >- // Check the structure >- emitGetVirtualRegister(enumerator, regT2); >- load32(Address(regT0, JSCell::structureIDOffset()), regT1); >- addSlowCase(branch32(NotEqual, regT1, Address(regT2, JSPropertyNameEnumerator::cachedStructureIDOffset()))); >- >- // Compute the offset >- emitGetVirtualRegister(index, regT1); >- // If index is less than the enumerator's cached inline storage, then it's an inline access >- Jump outOfLineAccess = branch32(AboveOrEqual, regT1, Address(regT2, JSPropertyNameEnumerator::cachedInlineCapacityOffset())); >- addPtr(TrustedImm32(JSObject::offsetOfInlineStorage()), regT0); >- signExtend32ToPtr(regT1, regT1); >- load64(BaseIndex(regT0, regT1, TimesEight), regT0); >- >- Jump done = jump(); >- >- // Otherwise it's out of line >- outOfLineAccess.link(this); >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0); >- sub32(Address(regT2, JSPropertyNameEnumerator::cachedInlineCapacityOffset()), regT1); >- neg32(regT1); >- signExtend32ToPtr(regT1, regT1); >- int32_t offsetOfFirstProperty = static_cast<int32_t>(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue); >- load64(BaseIndex(regT0, regT1, TimesEight, offsetOfFirstProperty), regT0); >- >- done.link(this); >- emitValueProfilingSite(); >- emitPutVirtualRegister(dst, regT0); >-} >- >-void JIT::emit_op_enumerator_structure_pname(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int enumerator = currentInstruction[2].u.operand; >- int index = currentInstruction[3].u.operand; >- >- emitGetVirtualRegister(index, regT0); >- emitGetVirtualRegister(enumerator, regT1); >- Jump inBounds = branch32(Below, regT0, Address(regT1, JSPropertyNameEnumerator::endStructurePropertyIndexOffset())); >- >- move(TrustedImm64(JSValue::encode(jsNull())), regT0); >- >- Jump done = jump(); >- inBounds.link(this); >- >- loadPtr(Address(regT1, 
JSPropertyNameEnumerator::cachedPropertyNamesVectorOffset()), regT1); >- signExtend32ToPtr(regT0, regT0); >- load64(BaseIndex(regT1, regT0, TimesEight), regT0); >- >- done.link(this); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_enumerator_generic_pname(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int enumerator = currentInstruction[2].u.operand; >- int index = currentInstruction[3].u.operand; >- >- emitGetVirtualRegister(index, regT0); >- emitGetVirtualRegister(enumerator, regT1); >- Jump inBounds = branch32(Below, regT0, Address(regT1, JSPropertyNameEnumerator::endGenericPropertyIndexOffset())); >- >- move(TrustedImm64(JSValue::encode(jsNull())), regT0); >- >- Jump done = jump(); >- inBounds.link(this); >- >- loadPtr(Address(regT1, JSPropertyNameEnumerator::cachedPropertyNamesVectorOffset()), regT1); >- signExtend32ToPtr(regT0, regT0); >- load64(BaseIndex(regT1, regT0, TimesEight), regT0); >- >- done.link(this); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_profile_type(Instruction* currentInstruction) >-{ >- TypeLocation* cachedTypeLocation = currentInstruction[2].u.location; >- int valueToProfile = currentInstruction[1].u.operand; >- >- emitGetVirtualRegister(valueToProfile, regT0); >- >- JumpList jumpToEnd; >- >- jumpToEnd.append(branchIfEmpty(regT0)); >- >- // Compile in a predictive type check, if possible, to see if we can skip writing to the log. >- // These typechecks are inlined to match those of the 64-bit JSValue type checks. >- if (cachedTypeLocation->m_lastSeenType == TypeUndefined) >- jumpToEnd.append(branchIfUndefined(regT0)); >- else if (cachedTypeLocation->m_lastSeenType == TypeNull) >- jumpToEnd.append(branchIfNull(regT0)); >- else if (cachedTypeLocation->m_lastSeenType == TypeBoolean) >- jumpToEnd.append(branchIfBoolean(regT0, regT1)); >- else if (cachedTypeLocation->m_lastSeenType == TypeAnyInt) >- jumpToEnd.append(branchIfInt32(regT0)); >- else if (cachedTypeLocation->m_lastSeenType == TypeNumber) >- jumpToEnd.append(branchIfNumber(regT0)); >- else if (cachedTypeLocation->m_lastSeenType == TypeString) { >- Jump isNotCell = branchIfNotCell(regT0); >- jumpToEnd.append(branchIfString(regT0)); >- isNotCell.link(this); >- } >- >- // Load the type profiling log into T2. >- TypeProfilerLog* cachedTypeProfilerLog = m_vm->typeProfilerLog(); >- move(TrustedImmPtr(cachedTypeProfilerLog), regT2); >- // Load the next log entry into T1. >- loadPtr(Address(regT2, TypeProfilerLog::currentLogEntryOffset()), regT1); >- >- // Store the JSValue onto the log entry. >- store64(regT0, Address(regT1, TypeProfilerLog::LogEntry::valueOffset())); >- >- // Store the structureID of the cell if T0 is a cell, otherwise, store 0 on the log entry. >- Jump notCell = branchIfNotCell(regT0); >- load32(Address(regT0, JSCell::structureIDOffset()), regT0); >- store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); >- Jump skipIsCell = jump(); >- notCell.link(this); >- store32(TrustedImm32(0), Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); >- skipIsCell.link(this); >- >- // Store the typeLocation on the log entry. >- move(TrustedImmPtr(cachedTypeLocation), regT0); >- store64(regT0, Address(regT1, TypeProfilerLog::LogEntry::locationOffset())); >- >- // Increment the current log entry. 
>- addPtr(TrustedImm32(sizeof(TypeProfilerLog::LogEntry)), regT1); >- store64(regT1, Address(regT2, TypeProfilerLog::currentLogEntryOffset())); >- Jump skipClearLog = branchPtr(NotEqual, regT1, TrustedImmPtr(cachedTypeProfilerLog->logEndPtr())); >- // Clear the log if we're at the end of the log. >- callOperation(operationProcessTypeProfilerLog); >- skipClearLog.link(this); >- >- jumpToEnd.link(this); >-} >- >-void JIT::emit_op_log_shadow_chicken_prologue(Instruction* currentInstruction) >-{ >- updateTopCallFrame(); >- static_assert(nonArgGPR0 != regT0 && nonArgGPR0 != regT2, "we will have problems if this is true."); >- GPRReg shadowPacketReg = regT0; >- GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register. >- GPRReg scratch2Reg = regT2; >- ensureShadowChickenPacket(*vm(), shadowPacketReg, scratch1Reg, scratch2Reg); >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT3); >- logShadowChickenProloguePacket(shadowPacketReg, scratch1Reg, regT3); >-} >- >-void JIT::emit_op_log_shadow_chicken_tail(Instruction* currentInstruction) >-{ >- updateTopCallFrame(); >- static_assert(nonArgGPR0 != regT0 && nonArgGPR0 != regT2, "we will have problems if this is true."); >- GPRReg shadowPacketReg = regT0; >- GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register. >- GPRReg scratch2Reg = regT2; >- ensureShadowChickenPacket(*vm(), shadowPacketReg, scratch1Reg, scratch2Reg); >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT2); >- emitGetVirtualRegister(currentInstruction[2].u.operand, regT3); >- logShadowChickenTailPacket(shadowPacketReg, JSValueRegs(regT2), regT3, m_codeBlock, CallSiteIndex(m_bytecodeOffset)); >-} >- >-#endif // USE(JSVALUE64) >- >-void JIT::emit_op_profile_control_flow(Instruction* currentInstruction) >-{ >- BasicBlockLocation* basicBlockLocation = currentInstruction[1].u.basicBlockLocation; >-#if USE(JSVALUE64) >- basicBlockLocation->emitExecuteCode(*this); >-#else >- basicBlockLocation->emitExecuteCode(*this, regT0); >-#endif >-} >- >-void JIT::emit_op_argument_count(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- load32(payloadFor(CallFrameSlot::argumentCount), regT0); >- sub32(TrustedImm32(1), regT0); >- JSValueRegs result = JSValueRegs::withTwoAvailableRegs(regT0, regT1); >- boxInt32(regT0, result); >- emitPutVirtualRegister(dst, result); >-} >- >-void JIT::emit_op_get_rest_length(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- unsigned numParamsToSkip = currentInstruction[2].u.unsignedValue; >- load32(payloadFor(CallFrameSlot::argumentCount), regT0); >- sub32(TrustedImm32(1), regT0); >- Jump zeroLength = branch32(LessThanOrEqual, regT0, Imm32(numParamsToSkip)); >- sub32(Imm32(numParamsToSkip), regT0); >-#if USE(JSVALUE64) >- boxInt32(regT0, JSValueRegs(regT0)); >-#endif >- Jump done = jump(); >- >- zeroLength.link(this); >-#if USE(JSVALUE64) >- move(TrustedImm64(JSValue::encode(jsNumber(0))), regT0); >-#else >- move(TrustedImm32(0), regT0); >-#endif >- >- done.link(this); >-#if USE(JSVALUE64) >- emitPutVirtualRegister(dst, regT0); >-#else >- move(TrustedImm32(JSValue::Int32Tag), regT1); >- emitPutVirtualRegister(dst, JSValueRegs(regT1, regT0)); >-#endif >-} >- >-void JIT::emit_op_get_argument(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int index = currentInstruction[2].u.operand; >-#if USE(JSVALUE64) >- JSValueRegs resultRegs(regT0); >-#else >- JSValueRegs resultRegs(regT1, regT0); >-#endif >- >- 
load32(payloadFor(CallFrameSlot::argumentCount), regT2); >- Jump argumentOutOfBounds = branch32(LessThanOrEqual, regT2, TrustedImm32(index)); >- loadValue(addressFor(CallFrameSlot::thisArgument + index), resultRegs); >- Jump done = jump(); >- >- argumentOutOfBounds.link(this); >- moveValue(jsUndefined(), resultRegs); >- >- done.link(this); >- emitValueProfilingSite(); >- emitPutVirtualRegister(dst, resultRegs); >-} >- >-} // namespace JSC >- >-#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp >deleted file mode 100644 >index eee9d28465272315f38d97226ed7225e4bd2d80b..0000000000000000000000000000000000000000 >--- a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp >+++ /dev/null >@@ -1,1288 +0,0 @@ >-/* >- * Copyright (C) 2009-2018 Apple Inc. All rights reserved. >- * Copyright (C) 2010 Patrick Gansterer <paroga@paroga.com> >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
>- */ >- >-#include "config.h" >- >-#if ENABLE(JIT) >-#if USE(JSVALUE32_64) >-#include "JIT.h" >- >-#include "BytecodeStructs.h" >-#include "CCallHelpers.h" >-#include "Exception.h" >-#include "JITInlines.h" >-#include "JSArray.h" >-#include "JSCast.h" >-#include "JSFunction.h" >-#include "JSPropertyNameEnumerator.h" >-#include "LinkBuffer.h" >-#include "MaxFrameExtentForSlowPathCall.h" >-#include "Opcode.h" >-#include "SlowPathCall.h" >-#include "TypeProfilerLog.h" >-#include "VirtualRegister.h" >- >-namespace JSC { >- >-void JIT::emit_op_mov(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- if (m_codeBlock->isConstantRegisterIndex(src)) >- emitStore(dst, getConstantOperand(src)); >- else { >- emitLoad(src, regT1, regT0); >- emitStore(dst, regT1, regT0); >- } >-} >- >-void JIT::emit_op_end(Instruction* currentInstruction) >-{ >- ASSERT(returnValueGPR != callFrameRegister); >- emitLoad(currentInstruction[1].u.operand, regT1, returnValueGPR); >- emitRestoreCalleeSaves(); >- emitFunctionEpilogue(); >- ret(); >-} >- >-void JIT::emit_op_jmp(Instruction* currentInstruction) >-{ >- unsigned target = currentInstruction[1].u.operand; >- addJump(jump(), target); >-} >- >-void JIT::emit_op_new_object(Instruction* currentInstruction) >-{ >- Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); >- size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity()); >- Allocator allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); >- >- RegisterID resultReg = returnValueGPR; >- RegisterID allocatorReg = regT1; >- RegisterID scratchReg = regT3; >- >- if (!allocator) >- addSlowCase(jump()); >- else { >- JumpList slowCases; >- auto butterfly = TrustedImmPtr(nullptr); >- emitAllocateJSObject(resultReg, JITAllocator::constant(allocator), allocatorReg, TrustedImmPtr(structure), butterfly, scratchReg, slowCases); >- emitInitializeInlineStorage(resultReg, structure->inlineCapacity()); >- addSlowCase(slowCases); >- emitStoreCell(currentInstruction[1].u.operand, resultReg); >- } >-} >- >-void JIT::emitSlow_op_new_object(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int dst = currentInstruction[1].u.operand; >- Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); >- callOperation(operationNewObject, structure); >- emitStoreCell(dst, returnValueGPR); >-} >- >-void JIT::emit_op_overrides_has_instance(Instruction* currentInstruction) >-{ >- auto& bytecode = *reinterpret_cast<OpOverridesHasInstance*>(currentInstruction); >- int dst = bytecode.dst(); >- int constructor = bytecode.constructor(); >- int hasInstanceValue = bytecode.hasInstanceValue(); >- >- emitLoadPayload(hasInstanceValue, regT0); >- // We don't jump if we know what Symbol.hasInstance would do. >- Jump hasInstanceValueNotCell = emitJumpIfNotJSCell(hasInstanceValue); >- Jump customhasInstanceValue = branchPtr(NotEqual, regT0, TrustedImmPtr(m_codeBlock->globalObject()->functionProtoHasInstanceSymbolFunction())); >- >- // We know that constructor is an object from the way bytecode is emitted for instanceof expressions. >- emitLoadPayload(constructor, regT0); >- >- // Check that constructor 'ImplementsDefaultHasInstance' i.e. the object is not a C-API user nor a bound function. 
>- test8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(ImplementsDefaultHasInstance), regT0); >- Jump done = jump(); >- >- hasInstanceValueNotCell.link(this); >- customhasInstanceValue.link(this); >- move(TrustedImm32(1), regT0); >- >- done.link(this); >- emitStoreBool(dst, regT0); >- >-} >- >-void JIT::emit_op_instanceof(Instruction* currentInstruction) >-{ >- auto& bytecode = *reinterpret_cast<OpInstanceof*>(currentInstruction); >- int dst = bytecode.dst(); >- int value = bytecode.value(); >- int proto = bytecode.prototype(); >- >- // Load the operands into registers. >- // We use regT0 for baseVal since we will be done with this first, and we can then use it for the result. >- emitLoadPayload(value, regT2); >- emitLoadPayload(proto, regT1); >- >- // Check that proto are cells. baseVal must be a cell - this is checked by the get_by_id for Symbol.hasInstance. >- emitJumpSlowCaseIfNotJSCell(value); >- emitJumpSlowCaseIfNotJSCell(proto); >- >- JITInstanceOfGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), >- RegisterSet::stubUnavailableRegisters(), >- regT0, // result >- regT2, // value >- regT1, // proto >- regT3, regT4); // scratch >- gen.generateFastPath(*this); >- m_instanceOfs.append(gen); >- >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_op_instanceof_custom(Instruction*) >-{ >- // This always goes to slow path since we expect it to be rare. >- addSlowCase(jump()); >-} >- >-void JIT::emitSlow_op_instanceof(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- auto& bytecode = *reinterpret_cast<OpInstanceof*>(currentInstruction); >- int dst = bytecode.dst(); >- int value = bytecode.value(); >- int proto = bytecode.prototype(); >- >- JITInstanceOfGenerator& gen = m_instanceOfs[m_instanceOfIndex++]; >- >- Label coldPathBegin = label(); >- emitLoadTag(value, regT0); >- emitLoadTag(proto, regT3); >- Call call = callOperation(operationInstanceOfOptimize, dst, gen.stubInfo(), JSValueRegs(regT0, regT2), JSValueRegs(regT3, regT1)); >- gen.reportSlowPathCall(coldPathBegin, call); >-} >- >-void JIT::emitSlow_op_instanceof_custom(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- auto& bytecode = *reinterpret_cast<OpInstanceofCustom*>(currentInstruction); >- int dst = bytecode.dst(); >- int value = bytecode.value(); >- int constructor = bytecode.constructor(); >- int hasInstanceValue = bytecode.hasInstanceValue(); >- >- emitLoad(value, regT1, regT0); >- emitLoadPayload(constructor, regT2); >- emitLoad(hasInstanceValue, regT4, regT3); >- callOperation(operationInstanceOfCustom, JSValueRegs(regT1, regT0), regT2, JSValueRegs(regT4, regT3)); >- emitStoreBool(dst, returnValueGPR); >-} >- >-void JIT::emit_op_is_empty(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitLoad(value, regT1, regT0); >- compare32(Equal, regT1, TrustedImm32(JSValue::EmptyValueTag), regT0); >- >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_op_is_undefined(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitLoad(value, regT1, regT0); >- Jump isCell = branchIfCell(regT1); >- >- compare32(Equal, regT1, TrustedImm32(JSValue::UndefinedTag), regT0); >- Jump done = jump(); >- >- isCell.link(this); >- Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, 
JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >- move(TrustedImm32(0), regT0); >- Jump notMasqueradesAsUndefined = jump(); >- >- isMasqueradesAsUndefined.link(this); >- loadPtr(Address(regT0, JSCell::structureIDOffset()), regT1); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- loadPtr(Address(regT1, Structure::globalObjectOffset()), regT1); >- compare32(Equal, regT0, regT1, regT0); >- >- notMasqueradesAsUndefined.link(this); >- done.link(this); >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_op_is_boolean(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitLoadTag(value, regT0); >- compare32(Equal, regT0, TrustedImm32(JSValue::BooleanTag), regT0); >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_op_is_number(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitLoadTag(value, regT0); >- add32(TrustedImm32(1), regT0); >- compare32(Below, regT0, TrustedImm32(JSValue::LowestTag + 1), regT0); >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_op_is_cell_with_type(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- int type = currentInstruction[3].u.operand; >- >- emitLoad(value, regT1, regT0); >- Jump isNotCell = branchIfNotCell(regT1); >- >- compare8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(type), regT0); >- Jump done = jump(); >- >- isNotCell.link(this); >- move(TrustedImm32(0), regT0); >- >- done.link(this); >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_op_is_object(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int value = currentInstruction[2].u.operand; >- >- emitLoad(value, regT1, regT0); >- Jump isNotCell = branchIfNotCell(regT1); >- >- compare8(AboveOrEqual, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType), regT0); >- Jump done = jump(); >- >- isNotCell.link(this); >- move(TrustedImm32(0), regT0); >- >- done.link(this); >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_op_to_primitive(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitLoad(src, regT1, regT0); >- >- Jump isImm = branchIfNotCell(regT1); >- addSlowCase(branchIfObject(regT0)); >- isImm.link(this); >- >- if (dst != src) >- emitStore(dst, regT1, regT0); >-} >- >-void JIT::emit_op_set_function_name(Instruction* currentInstruction) >-{ >- int func = currentInstruction[1].u.operand; >- int name = currentInstruction[2].u.operand; >- emitLoadPayload(func, regT1); >- emitLoad(name, regT3, regT2); >- callOperation(operationSetFunctionName, regT1, JSValueRegs(regT3, regT2)); >-} >- >-void JIT::emit_op_not(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitLoadTag(src, regT0); >- >- emitLoad(src, regT1, regT0); >- addSlowCase(branchIfNotBoolean(regT1, InvalidGPRReg)); >- xor32(TrustedImm32(1), regT0); >- >- emitStoreBool(dst, regT0, (dst == src)); >-} >- >-void JIT::emit_op_jfalse(Instruction* currentInstruction) >-{ >- int cond = currentInstruction[1].u.operand; >- unsigned target = currentInstruction[2].u.operand; >- >- emitLoad(cond, regT1, regT0); >- >- JSValueRegs value(regT1, regT0); >- GPRReg scratch = regT2; >- GPRReg result = regT3; >- bool 
shouldCheckMasqueradesAsUndefined = true; >- emitConvertValueToBoolean(*vm(), value, result, scratch, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()); >- >- addJump(branchTest32(Zero, result), target); >-} >- >-void JIT::emit_op_jtrue(Instruction* currentInstruction) >-{ >- int cond = currentInstruction[1].u.operand; >- unsigned target = currentInstruction[2].u.operand; >- >- emitLoad(cond, regT1, regT0); >- bool shouldCheckMasqueradesAsUndefined = true; >- JSValueRegs value(regT1, regT0); >- GPRReg scratch = regT2; >- GPRReg result = regT3; >- emitConvertValueToBoolean(*vm(), value, result, scratch, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()); >- >- addJump(branchTest32(NonZero, result), target); >-} >- >-void JIT::emit_op_jeq_null(Instruction* currentInstruction) >-{ >- int src = currentInstruction[1].u.operand; >- unsigned target = currentInstruction[2].u.operand; >- >- emitLoad(src, regT1, regT0); >- >- Jump isImmediate = branchIfNotCell(regT1); >- >- Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >- loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- addJump(branchPtr(Equal, Address(regT2, Structure::globalObjectOffset()), regT0), target); >- Jump masqueradesGlobalObjectIsForeign = jump(); >- >- // Now handle the immediate cases - undefined & null >- isImmediate.link(this); >- static_assert((JSValue::UndefinedTag + 1 == JSValue::NullTag) && (JSValue::NullTag & 0x1), ""); >- or32(TrustedImm32(1), regT1); >- addJump(branchIfNull(regT1), target); >- >- isNotMasqueradesAsUndefined.link(this); >- masqueradesGlobalObjectIsForeign.link(this); >-} >- >-void JIT::emit_op_jneq_null(Instruction* currentInstruction) >-{ >- int src = currentInstruction[1].u.operand; >- unsigned target = currentInstruction[2].u.operand; >- >- emitLoad(src, regT1, regT0); >- >- Jump isImmediate = branchIfNotCell(regT1); >- >- addJump(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target); >- loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- addJump(branchPtr(NotEqual, Address(regT2, Structure::globalObjectOffset()), regT0), target); >- Jump wasNotImmediate = jump(); >- >- // Now handle the immediate cases - undefined & null >- isImmediate.link(this); >- >- static_assert((JSValue::UndefinedTag + 1 == JSValue::NullTag) && (JSValue::NullTag & 0x1), ""); >- or32(TrustedImm32(1), regT1); >- addJump(branchIfNotNull(regT1), target); >- >- wasNotImmediate.link(this); >-} >- >-void JIT::emit_op_jneq_ptr(Instruction* currentInstruction) >-{ >- int src = currentInstruction[1].u.operand; >- Special::Pointer ptr = currentInstruction[2].u.specialPointer; >- unsigned target = currentInstruction[3].u.operand; >- >- emitLoad(src, regT1, regT0); >- Jump notCell = branchIfNotCell(regT1); >- Jump equal = branchPtr(Equal, regT0, TrustedImmPtr(actualPointerFor(m_codeBlock, ptr))); >- notCell.link(this); >- store32(TrustedImm32(1), &currentInstruction[4].u.operand); >- addJump(jump(), target); >- equal.link(this); >-} >- >-void JIT::emit_op_eq(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src1 = currentInstruction[2].u.operand; >- int src2 = currentInstruction[3].u.operand; >- >- emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >- 
addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branchIfCell(regT1)); >- addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >- >- compare32(Equal, regT0, regT2, regT0); >- >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emitSlow_op_eq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int dst = currentInstruction[1].u.operand; >- >- JumpList storeResult; >- JumpList genericCase; >- >- genericCase.append(getSlowCase(iter)); // tags not equal >- >- linkSlowCase(iter); // tags equal and JSCell >- genericCase.append(branchIfNotString(regT0)); >- genericCase.append(branchIfNotString(regT2)); >- >- // String case. >- callOperation(operationCompareStringEq, regT0, regT2); >- storeResult.append(jump()); >- >- // Generic case. >- genericCase.append(getSlowCase(iter)); // doubles >- genericCase.link(this); >- callOperation(operationCompareEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >- >- storeResult.link(this); >- emitStoreBool(dst, returnValueGPR); >-} >- >-void JIT::emit_op_jeq(Instruction* currentInstruction) >-{ >- int target = currentInstruction[3].u.operand; >- int src1 = currentInstruction[1].u.operand; >- int src2 = currentInstruction[2].u.operand; >- >- emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >- addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branchIfCell(regT1)); >- addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >- >- addJump(branch32(Equal, regT0, regT2), target); >-} >- >-void JIT::compileOpEqJumpSlow(Vector<SlowCaseEntry>::iterator& iter, CompileOpEqType type, int jumpTarget) >-{ >- JumpList done; >- JumpList genericCase; >- >- genericCase.append(getSlowCase(iter)); // tags not equal >- >- linkSlowCase(iter); // tags equal and JSCell >- genericCase.append(branchIfNotString(regT0)); >- genericCase.append(branchIfNotString(regT2)); >- >- // String case. >- callOperation(operationCompareStringEq, regT0, regT2); >- emitJumpSlowToHot(branchTest32(type == CompileOpEqType::Eq ? NonZero : Zero, returnValueGPR), jumpTarget); >- done.append(jump()); >- >- // Generic case. >- genericCase.append(getSlowCase(iter)); // doubles >- genericCase.link(this); >- callOperation(operationCompareEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >- emitJumpSlowToHot(branchTest32(type == CompileOpEqType::Eq ? 
NonZero : Zero, returnValueGPR), jumpTarget); >- >- done.link(this); >-} >- >-void JIT::emitSlow_op_jeq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpEqJumpSlow(iter, CompileOpEqType::Eq, currentInstruction[3].u.operand); >-} >- >-void JIT::emit_op_neq(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src1 = currentInstruction[2].u.operand; >- int src2 = currentInstruction[3].u.operand; >- >- emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >- addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branchIfCell(regT1)); >- addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >- >- compare32(NotEqual, regT0, regT2, regT0); >- >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emitSlow_op_neq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int dst = currentInstruction[1].u.operand; >- >- JumpList storeResult; >- JumpList genericCase; >- >- genericCase.append(getSlowCase(iter)); // tags not equal >- >- linkSlowCase(iter); // tags equal and JSCell >- genericCase.append(branchIfNotString(regT0)); >- genericCase.append(branchIfNotString(regT2)); >- >- // String case. >- callOperation(operationCompareStringEq, regT0, regT2); >- storeResult.append(jump()); >- >- // Generic case. >- genericCase.append(getSlowCase(iter)); // doubles >- genericCase.link(this); >- callOperation(operationCompareEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >- >- storeResult.link(this); >- xor32(TrustedImm32(0x1), returnValueGPR); >- emitStoreBool(dst, returnValueGPR); >-} >- >-void JIT::emit_op_jneq(Instruction* currentInstruction) >-{ >- int target = currentInstruction[3].u.operand; >- int src1 = currentInstruction[1].u.operand; >- int src2 = currentInstruction[2].u.operand; >- >- emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >- addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branchIfCell(regT1)); >- addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >- >- addJump(branch32(NotEqual, regT0, regT2), target); >-} >- >-void JIT::emitSlow_op_jneq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- compileOpEqJumpSlow(iter, CompileOpEqType::NEq, currentInstruction[3].u.operand); >-} >- >-void JIT::compileOpStrictEq(Instruction* currentInstruction, CompileOpStrictEqType type) >-{ >- int dst = currentInstruction[1].u.operand; >- int src1 = currentInstruction[2].u.operand; >- int src2 = currentInstruction[3].u.operand; >- >- emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >- >- // Bail if the tags differ, or are double. >- addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >- >- // Jump to a slow case if both are strings or symbols (non object). >- Jump notCell = branchIfNotCell(regT1); >- Jump firstIsObject = branchIfObject(regT0); >- addSlowCase(branchIfNotObject(regT2)); >- notCell.link(this); >- firstIsObject.link(this); >- >- // Simply compare the payloads. 
>- if (type == CompileOpStrictEqType::StrictEq) >- compare32(Equal, regT0, regT2, regT0); >- else >- compare32(NotEqual, regT0, regT2, regT0); >- >- emitStoreBool(dst, regT0); >-} >- >-void JIT::emit_op_stricteq(Instruction* currentInstruction) >-{ >- compileOpStrictEq(currentInstruction, CompileOpStrictEqType::StrictEq); >-} >- >-void JIT::emit_op_nstricteq(Instruction* currentInstruction) >-{ >- compileOpStrictEq(currentInstruction, CompileOpStrictEqType::NStrictEq); >-} >- >-void JIT::compileOpStrictEqJump(Instruction* currentInstruction, CompileOpStrictEqType type) >-{ >- int target = currentInstruction[3].u.operand; >- int src1 = currentInstruction[1].u.operand; >- int src2 = currentInstruction[2].u.operand; >- >- emitLoad2(src1, regT1, regT0, src2, regT3, regT2); >- >- // Bail if the tags differ, or are double. >- addSlowCase(branch32(NotEqual, regT1, regT3)); >- addSlowCase(branch32(Below, regT1, TrustedImm32(JSValue::LowestTag))); >- >- // Jump to a slow case if both are strings or symbols (non object). >- Jump notCell = branchIfNotCell(regT1); >- Jump firstIsObject = branchIfObject(regT0); >- addSlowCase(branchIfNotObject(regT2)); >- notCell.link(this); >- firstIsObject.link(this); >- >- // Simply compare the payloads. >- if (type == CompileOpStrictEqType::StrictEq) >- addJump(branch32(Equal, regT0, regT2), target); >- else >- addJump(branch32(NotEqual, regT0, regT2), target); >-} >- >-void JIT::emit_op_jstricteq(Instruction* currentInstruction) >-{ >- compileOpStrictEqJump(currentInstruction, CompileOpStrictEqType::StrictEq); >-} >- >-void JIT::emit_op_jnstricteq(Instruction* currentInstruction) >-{ >- compileOpStrictEqJump(currentInstruction, CompileOpStrictEqType::NStrictEq); >-} >- >-void JIT::emitSlow_op_jstricteq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- unsigned target = currentInstruction[3].u.operand; >- callOperation(operationCompareStrictEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >- emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); >-} >- >-void JIT::emitSlow_op_jnstricteq(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- unsigned target = currentInstruction[3].u.operand; >- callOperation(operationCompareStrictEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >- emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); >-} >- >-void JIT::emit_op_eq_null(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitLoad(src, regT1, regT0); >- Jump isImmediate = branchIfNotCell(regT1); >- >- Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >- move(TrustedImm32(0), regT1); >- Jump wasNotMasqueradesAsUndefined = jump(); >- >- isMasqueradesAsUndefined.link(this); >- loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2); >- compare32(Equal, regT0, regT2, regT1); >- Jump wasNotImmediate = jump(); >- >- isImmediate.link(this); >- >- compare32(Equal, regT1, TrustedImm32(JSValue::NullTag), regT2); >- compare32(Equal, regT1, TrustedImm32(JSValue::UndefinedTag), regT1); >- or32(regT2, regT1); >- >- wasNotImmediate.link(this); >- wasNotMasqueradesAsUndefined.link(this); >- >- emitStoreBool(dst, regT1); >-} >- >-void 
JIT::emit_op_neq_null(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitLoad(src, regT1, regT0); >- Jump isImmediate = branchIfNotCell(regT1); >- >- Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)); >- move(TrustedImm32(1), regT1); >- Jump wasNotMasqueradesAsUndefined = jump(); >- >- isMasqueradesAsUndefined.link(this); >- loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2); >- move(TrustedImmPtr(m_codeBlock->globalObject()), regT0); >- loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2); >- compare32(NotEqual, regT0, regT2, regT1); >- Jump wasNotImmediate = jump(); >- >- isImmediate.link(this); >- >- compare32(NotEqual, regT1, TrustedImm32(JSValue::NullTag), regT2); >- compare32(NotEqual, regT1, TrustedImm32(JSValue::UndefinedTag), regT1); >- and32(regT2, regT1); >- >- wasNotImmediate.link(this); >- wasNotMasqueradesAsUndefined.link(this); >- >- emitStoreBool(dst, regT1); >-} >- >-void JIT::emit_op_throw(Instruction* currentInstruction) >-{ >- ASSERT(regT0 == returnValueGPR); >- copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >- emitLoad(currentInstruction[1].u.operand, regT1, regT0); >- callOperationNoExceptionCheck(operationThrow, JSValueRegs(regT1, regT0)); >- jumpToExceptionHandler(*vm()); >-} >- >-void JIT::emit_op_to_number(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitLoad(src, regT1, regT0); >- >- Jump isInt32 = branchIfInt32(regT1); >- addSlowCase(branch32(AboveOrEqual, regT1, TrustedImm32(JSValue::LowestTag))); >- isInt32.link(this); >- >- emitValueProfilingSite(); >- if (src != dst) >- emitStore(dst, regT1, regT0); >-} >- >-void JIT::emit_op_to_string(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitLoad(src, regT1, regT0); >- >- addSlowCase(branchIfNotCell(regT1)); >- addSlowCase(branchIfNotString(regT0)); >- >- if (src != dst) >- emitStore(dst, regT1, regT0); >-} >- >-void JIT::emit_op_to_object(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int src = currentInstruction[2].u.operand; >- >- emitLoad(src, regT1, regT0); >- >- addSlowCase(branchIfNotCell(regT1)); >- addSlowCase(branchIfNotObject(regT0)); >- >- emitValueProfilingSite(); >- if (src != dst) >- emitStore(dst, regT1, regT0); >-} >- >-void JIT::emit_op_catch(Instruction* currentInstruction) >-{ >- restoreCalleeSavesFromEntryFrameCalleeSavesBuffer(vm()->topEntryFrame); >- >- move(TrustedImmPtr(m_vm), regT3); >- // operationThrow returns the callFrame for the handler. >- load32(Address(regT3, VM::callFrameForCatchOffset()), callFrameRegister); >- storePtr(TrustedImmPtr(nullptr), Address(regT3, VM::callFrameForCatchOffset())); >- >- addPtr(TrustedImm32(stackPointerOffsetFor(codeBlock()) * sizeof(Register)), callFrameRegister, stackPointerRegister); >- >- callOperationNoExceptionCheck(operationCheckIfExceptionIsUncatchableAndNotifyProfiler); >- Jump isCatchableException = branchTest32(Zero, returnValueGPR); >- jumpToExceptionHandler(*vm()); >- isCatchableException.link(this); >- >- move(TrustedImmPtr(m_vm), regT3); >- >- // Now store the exception returned by operationThrow. 
>- load32(Address(regT3, VM::exceptionOffset()), regT2); >- move(TrustedImm32(JSValue::CellTag), regT1); >- >- store32(TrustedImm32(0), Address(regT3, VM::exceptionOffset())); >- >- unsigned exception = currentInstruction[1].u.operand; >- emitStore(exception, regT1, regT2); >- >- load32(Address(regT2, Exception::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); >- load32(Address(regT2, Exception::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); >- >- unsigned thrownValue = currentInstruction[2].u.operand; >- emitStore(thrownValue, regT1, regT0); >- >-#if ENABLE(DFG_JIT) >- // FIXME: consider inline caching the process of doing OSR entry, including >- // argument type proofs, storing locals to the buffer, etc >- // https://bugs.webkit.org/show_bug.cgi?id=175598 >- >- ValueProfileAndOperandBuffer* buffer = static_cast<ValueProfileAndOperandBuffer*>(currentInstruction[3].u.pointer); >- if (buffer || !shouldEmitProfiling()) >- callOperation(operationTryOSREnterAtCatch, m_bytecodeOffset); >- else >- callOperation(operationTryOSREnterAtCatchAndValueProfile, m_bytecodeOffset); >- auto skipOSREntry = branchTestPtr(Zero, returnValueGPR); >- emitRestoreCalleeSaves(); >- jump(returnValueGPR, NoPtrTag); >- skipOSREntry.link(this); >- if (buffer && shouldEmitProfiling()) { >- buffer->forEach([&] (ValueProfileAndOperand& profile) { >- JSValueRegs regs(regT1, regT0); >- emitGetVirtualRegister(profile.m_operand, regs); >- emitValueProfilingSite(profile.m_profile); >- }); >- } >-#endif // ENABLE(DFG_JIT) >-} >- >-void JIT::emit_op_identity_with_profile(Instruction*) >-{ >- // We don't need to do anything here... >-} >- >-void JIT::emit_op_get_parent_scope(Instruction* currentInstruction) >-{ >- int currentScope = currentInstruction[2].u.operand; >- emitLoadPayload(currentScope, regT0); >- loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0); >- emitStoreCell(currentInstruction[1].u.operand, regT0); >-} >- >-void JIT::emit_op_switch_imm(Instruction* currentInstruction) >-{ >- size_t tableIndex = currentInstruction[1].u.operand; >- unsigned defaultOffset = currentInstruction[2].u.operand; >- unsigned scrutinee = currentInstruction[3].u.operand; >- >- // create jump table for switch destinations, track this switch statement. >- SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex); >- m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset, SwitchRecord::Immediate)); >- jumpTable->ensureCTITable(); >- >- emitLoad(scrutinee, regT1, regT0); >- callOperation(operationSwitchImmWithUnknownKeyType, JSValueRegs(regT1, regT0), tableIndex); >- jump(returnValueGPR, NoPtrTag); >-} >- >-void JIT::emit_op_switch_char(Instruction* currentInstruction) >-{ >- size_t tableIndex = currentInstruction[1].u.operand; >- unsigned defaultOffset = currentInstruction[2].u.operand; >- unsigned scrutinee = currentInstruction[3].u.operand; >- >- // create jump table for switch destinations, track this switch statement. 
>- SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex); >- m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset, SwitchRecord::Character)); >- jumpTable->ensureCTITable(); >- >- emitLoad(scrutinee, regT1, regT0); >- callOperation(operationSwitchCharWithUnknownKeyType, JSValueRegs(regT1, regT0), tableIndex); >- jump(returnValueGPR, NoPtrTag); >-} >- >-void JIT::emit_op_switch_string(Instruction* currentInstruction) >-{ >- size_t tableIndex = currentInstruction[1].u.operand; >- unsigned defaultOffset = currentInstruction[2].u.operand; >- unsigned scrutinee = currentInstruction[3].u.operand; >- >- // create jump table for switch destinations, track this switch statement. >- StringJumpTable* jumpTable = &m_codeBlock->stringSwitchJumpTable(tableIndex); >- m_switches.append(SwitchRecord(jumpTable, m_bytecodeOffset, defaultOffset)); >- >- emitLoad(scrutinee, regT1, regT0); >- callOperation(operationSwitchStringWithUnknownKeyType, JSValueRegs(regT1, regT0), tableIndex); >- jump(returnValueGPR, NoPtrTag); >-} >- >-void JIT::emit_op_debug(Instruction* currentInstruction) >-{ >- load32(codeBlock()->debuggerRequestsAddress(), regT0); >- Jump noDebuggerRequests = branchTest32(Zero, regT0); >- callOperation(operationDebug, currentInstruction[1].u.operand); >- noDebuggerRequests.link(this); >-} >- >- >-void JIT::emit_op_enter(Instruction* currentInstruction) >-{ >- emitEnterOptimizationCheck(); >- >- // Even though JIT code doesn't use them, we initialize our constant >- // registers to zap stale pointers, to avoid unnecessarily prolonging >- // object lifetime and increasing GC pressure. >- for (int i = 0; i < m_codeBlock->m_numVars; ++i) >- emitStore(virtualRegisterForLocal(i).offset(), jsUndefined()); >- >- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_enter); >- slowPathCall.call(); >-} >- >-void JIT::emit_op_get_scope(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- emitGetFromCallFrameHeaderPtr(CallFrameSlot::callee, regT0); >- loadPtr(Address(regT0, JSFunction::offsetOfScopeChain()), regT0); >- emitStoreCell(dst, regT0); >-} >- >-void JIT::emit_op_create_this(Instruction* currentInstruction) >-{ >- int callee = currentInstruction[2].u.operand; >- WriteBarrierBase<JSCell>* cachedFunction = &currentInstruction[4].u.jsCell; >- RegisterID calleeReg = regT0; >- RegisterID rareDataReg = regT4; >- RegisterID resultReg = regT0; >- RegisterID allocatorReg = regT1; >- RegisterID structureReg = regT2; >- RegisterID cachedFunctionReg = regT4; >- RegisterID scratchReg = regT3; >- >- emitLoadPayload(callee, calleeReg); >- addSlowCase(branchIfNotFunction(calleeReg)); >- loadPtr(Address(calleeReg, JSFunction::offsetOfRareData()), rareDataReg); >- addSlowCase(branchTestPtr(Zero, rareDataReg)); >- load32(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg); >- loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureReg); >- >- loadPtr(cachedFunction, cachedFunctionReg); >- Jump hasSeenMultipleCallees = branchPtr(Equal, cachedFunctionReg, TrustedImmPtr(JSCell::seenMultipleCalleeObjects())); >- addSlowCase(branchPtr(NotEqual, calleeReg, cachedFunctionReg)); >- hasSeenMultipleCallees.link(this); >- >- JumpList slowCases; >- auto butterfly = TrustedImmPtr(nullptr); >- emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, 
scratchReg, slowCases); >- addSlowCase(slowCases); >- emitStoreCell(currentInstruction[1].u.operand, resultReg); >-} >- >-void JIT::emit_op_to_this(Instruction* currentInstruction) >-{ >- WriteBarrierBase<Structure>* cachedStructure = &currentInstruction[2].u.structure; >- int thisRegister = currentInstruction[1].u.operand; >- >- emitLoad(thisRegister, regT3, regT2); >- >- addSlowCase(branchIfNotCell(regT3)); >- addSlowCase(branchIfNotType(regT2, FinalObjectType)); >- loadPtr(Address(regT2, JSCell::structureIDOffset()), regT0); >- loadPtr(cachedStructure, regT2); >- addSlowCase(branchPtr(NotEqual, regT0, regT2)); >-} >- >-void JIT::emit_op_check_tdz(Instruction* currentInstruction) >-{ >- emitLoadTag(currentInstruction[1].u.operand, regT0); >- addSlowCase(branchIfEmpty(regT0)); >-} >- >-void JIT::emit_op_has_structure_property(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int enumerator = currentInstruction[4].u.operand; >- >- emitLoadPayload(base, regT0); >- emitJumpSlowCaseIfNotJSCell(base); >- >- emitLoadPayload(enumerator, regT1); >- >- load32(Address(regT0, JSCell::structureIDOffset()), regT0); >- addSlowCase(branch32(NotEqual, regT0, Address(regT1, JSPropertyNameEnumerator::cachedStructureIDOffset()))); >- >- move(TrustedImm32(1), regT0); >- emitStoreBool(dst, regT0); >-} >- >-void JIT::privateCompileHasIndexedProperty(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode) >-{ >- Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >- >- PatchableJump badType; >- >- // FIXME: Add support for other types like TypedArrays and Arguments. >- // See https://bugs.webkit.org/show_bug.cgi?id=135033 and https://bugs.webkit.org/show_bug.cgi?id=135034. 
>- JumpList slowCases = emitLoadForArrayMode(currentInstruction, arrayMode, badType); >- move(TrustedImm32(1), regT0); >- Jump done = jump(); >- >- LinkBuffer patchBuffer(*this, m_codeBlock); >- >- patchBuffer.link(badType, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- >- patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >- >- byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >- m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >- "Baseline has_indexed_property stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >- >- MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >- MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationHasIndexedPropertyGeneric)); >-} >- >-void JIT::emit_op_has_indexed_property(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >- >- emitLoadPayload(base, regT0); >- emitJumpSlowCaseIfNotJSCell(base); >- >- emitLoadPayload(property, regT1); >- >- // This is technically incorrect - we're zero-extending an int32. On the hot path this doesn't matter. >- // We check the value as if it was a uint32 against the m_vectorLength - which will always fail if >- // number was signed since m_vectorLength is always less than intmax (since the total allocation >- // size is always less than 4Gb). As such zero extending will have been correct (and extending the value >- // to 64-bits is necessary since it's used in the address calculation. We zero extend rather than sign >- // extending since it makes it easier to re-tag the value in the slow case. >- zeroExtend32ToPtr(regT1, regT1); >- >- emitArrayProfilingSiteWithCell(regT0, regT2, profile); >- and32(TrustedImm32(IndexingShapeMask), regT2); >- >- JITArrayMode mode = chooseArrayMode(profile); >- PatchableJump badType; >- >- // FIXME: Add support for other types like TypedArrays and Arguments. >- // See https://bugs.webkit.org/show_bug.cgi?id=135033 and https://bugs.webkit.org/show_bug.cgi?id=135034. 
>- JumpList slowCases = emitLoadForArrayMode(currentInstruction, mode, badType); >- move(TrustedImm32(1), regT0); >- >- addSlowCase(badType); >- addSlowCase(slowCases); >- >- Label done = label(); >- >- emitStoreBool(dst, regT0); >- >- Label nextHotPath = label(); >- >- m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, PatchableJump(), badType, mode, profile, done, nextHotPath)); >-} >- >-void JIT::emitSlow_op_has_indexed_property(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >- >- Label slowPath = label(); >- >- emitLoad(base, regT1, regT0); >- emitLoad(property, regT3, regT2); >- Call call = callOperation(operationHasIndexedPropertyDefault, dst, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2), byValInfo); >- >- m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >- m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >- m_byValInstructionIndex++; >-} >- >-void JIT::emit_op_get_direct_pname(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int index = currentInstruction[4].u.operand; >- int enumerator = currentInstruction[5].u.operand; >- >- // Check that base is a cell >- emitLoadPayload(base, regT0); >- emitJumpSlowCaseIfNotJSCell(base); >- >- // Check the structure >- emitLoadPayload(enumerator, regT1); >- load32(Address(regT0, JSCell::structureIDOffset()), regT2); >- addSlowCase(branch32(NotEqual, regT2, Address(regT1, JSPropertyNameEnumerator::cachedStructureIDOffset()))); >- >- // Compute the offset >- emitLoadPayload(index, regT2); >- // If index is less than the enumerator's cached inline storage, then it's an inline access >- Jump outOfLineAccess = branch32(AboveOrEqual, regT2, Address(regT1, JSPropertyNameEnumerator::cachedInlineCapacityOffset())); >- addPtr(TrustedImm32(JSObject::offsetOfInlineStorage()), regT0); >- load32(BaseIndex(regT0, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); >- load32(BaseIndex(regT0, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); >- >- Jump done = jump(); >- >- // Otherwise it's out of line >- outOfLineAccess.link(this); >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0); >- sub32(Address(regT1, JSPropertyNameEnumerator::cachedInlineCapacityOffset()), regT2); >- neg32(regT2); >- int32_t offsetOfFirstProperty = static_cast<int32_t>(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue); >- load32(BaseIndex(regT0, regT2, TimesEight, offsetOfFirstProperty + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); >- load32(BaseIndex(regT0, regT2, TimesEight, offsetOfFirstProperty + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); >- >- done.link(this); >- emitValueProfilingSite(); >- emitStore(dst, regT1, regT0); >-} >- >-void JIT::emit_op_enumerator_structure_pname(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int enumerator = currentInstruction[2].u.operand; >- int index = currentInstruction[3].u.operand; >- >- emitLoadPayload(index, regT0); >- emitLoadPayload(enumerator, regT1); >- Jump inBounds = branch32(Below, regT0, Address(regT1, JSPropertyNameEnumerator::endStructurePropertyIndexOffset())); >- >- 
move(TrustedImm32(JSValue::NullTag), regT2); >- move(TrustedImm32(0), regT0); >- >- Jump done = jump(); >- inBounds.link(this); >- >- loadPtr(Address(regT1, JSPropertyNameEnumerator::cachedPropertyNamesVectorOffset()), regT1); >- loadPtr(BaseIndex(regT1, regT0, timesPtr()), regT0); >- move(TrustedImm32(JSValue::CellTag), regT2); >- >- done.link(this); >- emitStore(dst, regT2, regT0); >-} >- >-void JIT::emit_op_enumerator_generic_pname(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int enumerator = currentInstruction[2].u.operand; >- int index = currentInstruction[3].u.operand; >- >- emitLoadPayload(index, regT0); >- emitLoadPayload(enumerator, regT1); >- Jump inBounds = branch32(Below, regT0, Address(regT1, JSPropertyNameEnumerator::endGenericPropertyIndexOffset())); >- >- move(TrustedImm32(JSValue::NullTag), regT2); >- move(TrustedImm32(0), regT0); >- >- Jump done = jump(); >- inBounds.link(this); >- >- loadPtr(Address(regT1, JSPropertyNameEnumerator::cachedPropertyNamesVectorOffset()), regT1); >- loadPtr(BaseIndex(regT1, regT0, timesPtr()), regT0); >- move(TrustedImm32(JSValue::CellTag), regT2); >- >- done.link(this); >- emitStore(dst, regT2, regT0); >-} >- >-void JIT::emit_op_profile_type(Instruction* currentInstruction) >-{ >- TypeLocation* cachedTypeLocation = currentInstruction[2].u.location; >- int valueToProfile = currentInstruction[1].u.operand; >- >- // Load payload in T0. Load tag in T3. >- emitLoadPayload(valueToProfile, regT0); >- emitLoadTag(valueToProfile, regT3); >- >- JumpList jumpToEnd; >- >- jumpToEnd.append(branchIfEmpty(regT3)); >- >- // Compile in a predictive type check, if possible, to see if we can skip writing to the log. >- // These typechecks are inlined to match those of the 32-bit JSValue type checks. >- if (cachedTypeLocation->m_lastSeenType == TypeUndefined) >- jumpToEnd.append(branchIfUndefined(regT3)); >- else if (cachedTypeLocation->m_lastSeenType == TypeNull) >- jumpToEnd.append(branchIfNull(regT3)); >- else if (cachedTypeLocation->m_lastSeenType == TypeBoolean) >- jumpToEnd.append(branchIfBoolean(regT3, InvalidGPRReg)); >- else if (cachedTypeLocation->m_lastSeenType == TypeAnyInt) >- jumpToEnd.append(branchIfInt32(regT3)); >- else if (cachedTypeLocation->m_lastSeenType == TypeNumber) { >- jumpToEnd.append(branchIfNumber(JSValueRegs(regT3, regT0), regT1)); >- } else if (cachedTypeLocation->m_lastSeenType == TypeString) { >- Jump isNotCell = branchIfNotCell(regT3); >- jumpToEnd.append(branchIfString(regT0)); >- isNotCell.link(this); >- } >- >- // Load the type profiling log into T2. >- TypeProfilerLog* cachedTypeProfilerLog = m_vm->typeProfilerLog(); >- move(TrustedImmPtr(cachedTypeProfilerLog), regT2); >- >- // Load the next log entry into T1. >- loadPtr(Address(regT2, TypeProfilerLog::currentLogEntryOffset()), regT1); >- >- // Store the JSValue onto the log entry. >- store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload))); >- store32(regT3, Address(regT1, TypeProfilerLog::LogEntry::valueOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag))); >- >- // Store the structureID of the cell if argument is a cell, otherwise, store 0 on the log entry. 
>- Jump notCell = branchIfNotCell(regT3); >- load32(Address(regT0, JSCell::structureIDOffset()), regT0); >- store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); >- Jump skipNotCell = jump(); >- notCell.link(this); >- store32(TrustedImm32(0), Address(regT1, TypeProfilerLog::LogEntry::structureIDOffset())); >- skipNotCell.link(this); >- >- // Store the typeLocation on the log entry. >- move(TrustedImmPtr(cachedTypeLocation), regT0); >- store32(regT0, Address(regT1, TypeProfilerLog::LogEntry::locationOffset())); >- >- // Increment the current log entry. >- addPtr(TrustedImm32(sizeof(TypeProfilerLog::LogEntry)), regT1); >- store32(regT1, Address(regT2, TypeProfilerLog::currentLogEntryOffset())); >- jumpToEnd.append(branchPtr(NotEqual, regT1, TrustedImmPtr(cachedTypeProfilerLog->logEndPtr()))); >- // Clear the log if we're at the end of the log. >- callOperation(operationProcessTypeProfilerLog); >- >- jumpToEnd.link(this); >-} >- >-void JIT::emit_op_log_shadow_chicken_prologue(Instruction* currentInstruction) >-{ >- updateTopCallFrame(); >- static_assert(nonArgGPR0 != regT0 && nonArgGPR0 != regT2, "we will have problems if this is true."); >- GPRReg shadowPacketReg = regT0; >- GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register. >- GPRReg scratch2Reg = regT2; >- ensureShadowChickenPacket(*vm(), shadowPacketReg, scratch1Reg, scratch2Reg); >- >- scratch1Reg = regT4; >- emitLoadPayload(currentInstruction[1].u.operand, regT3); >- logShadowChickenProloguePacket(shadowPacketReg, scratch1Reg, regT3); >-} >- >-void JIT::emit_op_log_shadow_chicken_tail(Instruction* currentInstruction) >-{ >- updateTopCallFrame(); >- static_assert(nonArgGPR0 != regT0 && nonArgGPR0 != regT2, "we will have problems if this is true."); >- GPRReg shadowPacketReg = regT0; >- GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register. >- GPRReg scratch2Reg = regT2; >- ensureShadowChickenPacket(*vm(), shadowPacketReg, scratch1Reg, scratch2Reg); >- >- emitLoadPayload(currentInstruction[1].u.operand, regT2); >- emitLoadTag(currentInstruction[1].u.operand, regT1); >- JSValueRegs thisRegs(regT1, regT2); >- emitLoadPayload(currentInstruction[2].u.operand, regT3); >- logShadowChickenTailPacket(shadowPacketReg, thisRegs, regT3, m_codeBlock, CallSiteIndex(currentInstruction)); >-} >- >-} // namespace JSC >- >-#endif // USE(JSVALUE32_64) >-#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp >deleted file mode 100644 >index 656e5fd082660e1c8ad3148d41db91258b7cdd9e..0000000000000000000000000000000000000000 >--- a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp >+++ /dev/null >@@ -1,1754 +0,0 @@ >-/* >- * Copyright (C) 2008-2018 Apple Inc. All rights reserved. >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. 
``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >- */ >- >-#include "config.h" >- >-#if ENABLE(JIT) >-#include "JIT.h" >- >-#include "CodeBlock.h" >-#include "DirectArguments.h" >-#include "GCAwareJITStubRoutine.h" >-#include "GetterSetter.h" >-#include "InterpreterInlines.h" >-#include "JITInlines.h" >-#include "JSArray.h" >-#include "JSFunction.h" >-#include "JSLexicalEnvironment.h" >-#include "LinkBuffer.h" >-#include "ResultType.h" >-#include "ScopedArguments.h" >-#include "ScopedArgumentsTable.h" >-#include "SlowPathCall.h" >-#include "StructureStubInfo.h" >-#include <wtf/ScopedLambda.h> >-#include <wtf/StringPrintStream.h> >- >- >-namespace JSC { >-#if USE(JSVALUE64) >- >-void JIT::emit_op_get_by_val(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >- >- emitGetVirtualRegister(base, regT0); >- bool propertyNameIsIntegerConstant = isOperandConstantInt(property); >- if (propertyNameIsIntegerConstant) >- move(Imm32(getOperandConstantInt(property)), regT1); >- else >- emitGetVirtualRegister(property, regT1); >- >- emitJumpSlowCaseIfNotJSCell(regT0, base); >- >- PatchableJump notIndex; >- if (!propertyNameIsIntegerConstant) { >- notIndex = emitPatchableJumpIfNotInt(regT1); >- addSlowCase(notIndex); >- >- // This is technically incorrect - we're zero-extending an int32. On the hot path this doesn't matter. >- // We check the value as if it was a uint32 against the m_vectorLength - which will always fail if >- // number was signed since m_vectorLength is always less than intmax (since the total allocation >- // size is always less than 4Gb). As such zero extending will have been correct (and extending the value >- // to 64-bits is necessary since it's used in the address calculation). We zero extend rather than sign >- // extending since it makes it easier to re-tag the value in the slow case. 
>- zeroExtend32ToPtr(regT1, regT1); >- } >- >- emitArrayProfilingSiteWithCell(regT0, regT2, profile); >- and32(TrustedImm32(IndexingShapeMask), regT2); >- >- PatchableJump badType; >- JumpList slowCases; >- >- JITArrayMode mode = chooseArrayMode(profile); >- switch (mode) { >- case JITInt32: >- slowCases = emitInt32GetByVal(currentInstruction, badType); >- break; >- case JITDouble: >- slowCases = emitDoubleGetByVal(currentInstruction, badType); >- break; >- case JITContiguous: >- slowCases = emitContiguousGetByVal(currentInstruction, badType); >- break; >- case JITArrayStorage: >- slowCases = emitArrayStorageGetByVal(currentInstruction, badType); >- break; >- default: >- CRASH(); >- break; >- } >- >- addSlowCase(badType); >- addSlowCase(slowCases); >- >- Label done = label(); >- >- if (!ASSERT_DISABLED) { >- Jump resultOK = branchIfNotEmpty(regT0); >- abortWithReason(JITGetByValResultIsNotEmpty); >- resultOK.link(this); >- } >- >- emitValueProfilingSite(); >- emitPutVirtualRegister(dst); >- >- Label nextHotPath = label(); >- >- m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, notIndex, badType, mode, profile, done, nextHotPath)); >-} >- >-JIT::JumpList JIT::emitDoubleLoad(Instruction*, PatchableJump& badType) >-{ >- JumpList slowCases; >- >- badType = patchableBranch32(NotEqual, regT2, TrustedImm32(DoubleShape)); >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >- slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength()))); >- loadDouble(BaseIndex(regT2, regT1, TimesEight), fpRegT0); >- slowCases.append(branchDouble(DoubleNotEqualOrUnordered, fpRegT0, fpRegT0)); >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitContiguousLoad(Instruction*, PatchableJump& badType, IndexingType expectedShape) >-{ >- JumpList slowCases; >- >- badType = patchableBranch32(NotEqual, regT2, TrustedImm32(expectedShape)); >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >- slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength()))); >- load64(BaseIndex(regT2, regT1, TimesEight), regT0); >- slowCases.append(branchTest64(Zero, regT0)); >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitArrayStorageLoad(Instruction*, PatchableJump& badType) >-{ >- JumpList slowCases; >- >- add32(TrustedImm32(-ArrayStorageShape), regT2, regT3); >- badType = patchableBranch32(Above, regT3, TrustedImm32(SlowPutArrayStorageShape - ArrayStorageShape)); >- >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >- slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, ArrayStorage::vectorLengthOffset()))); >- >- load64(BaseIndex(regT2, regT1, TimesEight, ArrayStorage::vectorOffset()), regT0); >- slowCases.append(branchTest64(Zero, regT0)); >- >- return slowCases; >-} >- >-JITGetByIdGenerator JIT::emitGetByValWithCachedId(ByValInfo* byValInfo, Instruction* currentInstruction, const Identifier& propertyName, Jump& fastDoneCase, Jump& slowDoneCase, JumpList& slowCases) >-{ >- // base: regT0 >- // property: regT1 >- // scratch: regT3 >- >- int dst = currentInstruction[1].u.operand; >- >- slowCases.append(branchIfNotCell(regT1)); >- emitByValIdentifierCheck(byValInfo, regT1, regT3, propertyName, slowCases); >- >- JITGetByIdGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >- propertyName.impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get); >- gen.generateFastPath(*this); >- >- 
fastDoneCase = jump(); >- >- Label coldPathBegin = label(); >- gen.slowPathJump().link(this); >- >- Call call = callOperationWithProfile(operationGetByIdOptimize, dst, gen.stubInfo(), regT0, propertyName.impl()); >- gen.reportSlowPathCall(coldPathBegin, call); >- slowDoneCase = jump(); >- >- return gen; >-} >- >-void JIT::emitSlow_op_get_by_val(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >- >- linkSlowCaseIfNotJSCell(iter, base); // base cell check >- >- if (!isOperandConstantInt(property)) >- linkSlowCase(iter); // property int32 check >- Jump nonCell = jump(); >- linkSlowCase(iter); // base array check >- Jump notString = branchIfNotString(regT0); >- emitNakedCall(CodeLocationLabel<NoPtrTag>(m_vm->getCTIStub(stringGetByValGenerator).retaggedCode<NoPtrTag>())); >- Jump failed = branchTest64(Zero, regT0); >- emitPutVirtualRegister(dst, regT0); >- emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_get_by_val)); >- failed.link(this); >- notString.link(this); >- nonCell.link(this); >- >- linkSlowCase(iter); // vector length check >- linkSlowCase(iter); // empty value >- >- Label slowPath = label(); >- >- emitGetVirtualRegister(base, regT0); >- emitGetVirtualRegister(property, regT1); >- Call call = callOperation(operationGetByValOptimize, dst, regT0, regT1, byValInfo); >- >- m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >- m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >- m_byValInstructionIndex++; >- >- emitValueProfilingSite(); >-} >- >-void JIT::emit_op_put_by_val(Instruction* currentInstruction) >-{ >- int base = currentInstruction[1].u.operand; >- int property = currentInstruction[2].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >- >- emitGetVirtualRegister(base, regT0); >- bool propertyNameIsIntegerConstant = isOperandConstantInt(property); >- if (propertyNameIsIntegerConstant) >- move(Imm32(getOperandConstantInt(property)), regT1); >- else >- emitGetVirtualRegister(property, regT1); >- >- emitJumpSlowCaseIfNotJSCell(regT0, base); >- PatchableJump notIndex; >- if (!propertyNameIsIntegerConstant) { >- notIndex = emitPatchableJumpIfNotInt(regT1); >- addSlowCase(notIndex); >- // See comment in op_get_by_val. >- zeroExtend32ToPtr(regT1, regT1); >- } >- emitArrayProfilingSiteWithCell(regT0, regT2, profile); >- >- PatchableJump badType; >- JumpList slowCases; >- >- // TODO: Maybe we should do this inline? 
>- addSlowCase(branchTest32(NonZero, regT2, TrustedImm32(CopyOnWrite))); >- and32(TrustedImm32(IndexingShapeMask), regT2); >- >- JITArrayMode mode = chooseArrayMode(profile); >- switch (mode) { >- case JITInt32: >- slowCases = emitInt32PutByVal(currentInstruction, badType); >- break; >- case JITDouble: >- slowCases = emitDoublePutByVal(currentInstruction, badType); >- break; >- case JITContiguous: >- slowCases = emitContiguousPutByVal(currentInstruction, badType); >- break; >- case JITArrayStorage: >- slowCases = emitArrayStoragePutByVal(currentInstruction, badType); >- break; >- default: >- CRASH(); >- break; >- } >- >- addSlowCase(badType); >- addSlowCase(slowCases); >- >- Label done = label(); >- >- m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, notIndex, badType, mode, profile, done, done)); >-} >- >-JIT::JumpList JIT::emitGenericContiguousPutByVal(Instruction* currentInstruction, PatchableJump& badType, IndexingType indexingShape) >-{ >- int value = currentInstruction[3].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- >- JumpList slowCases; >- >- badType = patchableBranch32(NotEqual, regT2, TrustedImm32(indexingShape)); >- >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >- Jump outOfBounds = branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength())); >- >- Label storeResult = label(); >- emitGetVirtualRegister(value, regT3); >- switch (indexingShape) { >- case Int32Shape: >- slowCases.append(branchIfNotInt32(regT3)); >- store64(regT3, BaseIndex(regT2, regT1, TimesEight)); >- break; >- case DoubleShape: { >- Jump notInt = branchIfNotInt32(regT3); >- convertInt32ToDouble(regT3, fpRegT0); >- Jump ready = jump(); >- notInt.link(this); >- add64(tagTypeNumberRegister, regT3); >- move64ToDouble(regT3, fpRegT0); >- slowCases.append(branchDouble(DoubleNotEqualOrUnordered, fpRegT0, fpRegT0)); >- ready.link(this); >- storeDouble(fpRegT0, BaseIndex(regT2, regT1, TimesEight)); >- break; >- } >- case ContiguousShape: >- store64(regT3, BaseIndex(regT2, regT1, TimesEight)); >- emitWriteBarrier(currentInstruction[1].u.operand, value, ShouldFilterValue); >- break; >- default: >- CRASH(); >- break; >- } >- >- Jump done = jump(); >- outOfBounds.link(this); >- >- slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfVectorLength()))); >- >- emitArrayProfileStoreToHoleSpecialCase(profile); >- >- add32(TrustedImm32(1), regT1, regT3); >- store32(regT3, Address(regT2, Butterfly::offsetOfPublicLength())); >- jump().linkTo(storeResult, this); >- >- done.link(this); >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitArrayStoragePutByVal(Instruction* currentInstruction, PatchableJump& badType) >-{ >- int value = currentInstruction[3].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- >- JumpList slowCases; >- >- badType = patchableBranch32(NotEqual, regT2, TrustedImm32(ArrayStorageShape)); >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2); >- slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, ArrayStorage::vectorLengthOffset()))); >- >- Jump empty = branchTest64(Zero, BaseIndex(regT2, regT1, TimesEight, ArrayStorage::vectorOffset())); >- >- Label storeResult(this); >- emitGetVirtualRegister(value, regT3); >- store64(regT3, BaseIndex(regT2, regT1, TimesEight, ArrayStorage::vectorOffset())); >- emitWriteBarrier(currentInstruction[1].u.operand, value, ShouldFilterValue); >- Jump end = jump(); >- >- empty.link(this); >- 
emitArrayProfileStoreToHoleSpecialCase(profile); >- add32(TrustedImm32(1), Address(regT2, ArrayStorage::numValuesInVectorOffset())); >- branch32(Below, regT1, Address(regT2, ArrayStorage::lengthOffset())).linkTo(storeResult, this); >- >- add32(TrustedImm32(1), regT1); >- store32(regT1, Address(regT2, ArrayStorage::lengthOffset())); >- sub32(TrustedImm32(1), regT1); >- jump().linkTo(storeResult, this); >- >- end.link(this); >- >- return slowCases; >-} >- >-JITPutByIdGenerator JIT::emitPutByValWithCachedId(ByValInfo* byValInfo, Instruction* currentInstruction, PutKind putKind, const Identifier& propertyName, JumpList& doneCases, JumpList& slowCases) >-{ >- // base: regT0 >- // property: regT1 >- // scratch: regT2 >- >- int base = currentInstruction[1].u.operand; >- int value = currentInstruction[3].u.operand; >- >- slowCases.append(branchIfNotCell(regT1)); >- emitByValIdentifierCheck(byValInfo, regT1, regT1, propertyName, slowCases); >- >- // Write barrier breaks the registers. So after issuing the write barrier, >- // reload the registers. >- emitGetVirtualRegisters(base, regT0, value, regT1); >- >- JITPutByIdGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >- JSValueRegs(regT0), JSValueRegs(regT1), regT2, m_codeBlock->ecmaMode(), putKind); >- gen.generateFastPath(*this); >- emitWriteBarrier(base, value, ShouldFilterBase); >- doneCases.append(jump()); >- >- Label coldPathBegin = label(); >- gen.slowPathJump().link(this); >- >- Call call = callOperation(gen.slowPathFunction(), gen.stubInfo(), regT1, regT0, propertyName.impl()); >- gen.reportSlowPathCall(coldPathBegin, call); >- doneCases.append(jump()); >- >- return gen; >-} >- >-void JIT::emitSlow_op_put_by_val(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int base = currentInstruction[1].u.operand; >- int property = currentInstruction[2].u.operand; >- int value = currentInstruction[3].u.operand; >- ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >- >- linkAllSlowCases(iter); >- Label slowPath = label(); >- >- emitGetVirtualRegister(base, regT0); >- emitGetVirtualRegister(property, regT1); >- emitGetVirtualRegister(value, regT2); >- bool isDirect = Interpreter::getOpcodeID(currentInstruction->u.opcode) == op_put_by_val_direct; >- Call call = callOperation(isDirect ? 
operationDirectPutByValOptimize : operationPutByValOptimize, regT0, regT1, regT2, byValInfo); >- >- m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >- m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >- m_byValInstructionIndex++; >-} >- >-void JIT::emit_op_put_getter_by_id(Instruction* currentInstruction) >-{ >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- int32_t options = currentInstruction[3].u.operand; >- emitGetVirtualRegister(currentInstruction[4].u.operand, regT1); >- callOperation(operationPutGetterById, regT0, m_codeBlock->identifier(currentInstruction[2].u.operand).impl(), options, regT1); >-} >- >-void JIT::emit_op_put_setter_by_id(Instruction* currentInstruction) >-{ >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- int32_t options = currentInstruction[3].u.operand; >- emitGetVirtualRegister(currentInstruction[4].u.operand, regT1); >- callOperation(operationPutSetterById, regT0, m_codeBlock->identifier(currentInstruction[2].u.operand).impl(), options, regT1); >-} >- >-void JIT::emit_op_put_getter_setter_by_id(Instruction* currentInstruction) >-{ >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- int32_t attribute = currentInstruction[3].u.operand; >- emitGetVirtualRegister(currentInstruction[4].u.operand, regT1); >- emitGetVirtualRegister(currentInstruction[5].u.operand, regT2); >- callOperation(operationPutGetterSetter, regT0, m_codeBlock->identifier(currentInstruction[2].u.operand).impl(), attribute, regT1, regT2); >-} >- >-void JIT::emit_op_put_getter_by_val(Instruction* currentInstruction) >-{ >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- emitGetVirtualRegister(currentInstruction[2].u.operand, regT1); >- int32_t attributes = currentInstruction[3].u.operand; >- emitGetVirtualRegister(currentInstruction[4].u.operand, regT2); >- callOperation(operationPutGetterByVal, regT0, regT1, attributes, regT2); >-} >- >-void JIT::emit_op_put_setter_by_val(Instruction* currentInstruction) >-{ >- emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); >- emitGetVirtualRegister(currentInstruction[2].u.operand, regT1); >- int32_t attributes = currentInstruction[3].u.operand; >- emitGetVirtualRegister(currentInstruction[4].u.operand, regT2); >- callOperation(operationPutSetterByVal, regT0, regT1, attributes, regT2); >-} >- >-void JIT::emit_op_del_by_id(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- emitGetVirtualRegister(base, regT0); >- callOperation(operationDeleteByIdJSResult, dst, regT0, m_codeBlock->identifier(property).impl()); >-} >- >-void JIT::emit_op_del_by_val(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- emitGetVirtualRegister(base, regT0); >- emitGetVirtualRegister(property, regT1); >- callOperation(operationDeleteByValJSResult, dst, regT0, regT1); >-} >- >-void JIT::emit_op_try_get_by_id(Instruction* currentInstruction) >-{ >- int resultVReg = currentInstruction[1].u.operand; >- int baseVReg = currentInstruction[2].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >- >- emitGetVirtualRegister(baseVReg, regT0); >- >- emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >- >- JITGetByIdGenerator gen( >- m_codeBlock, 
CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >- ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::TryGet); >- gen.generateFastPath(*this); >- addSlowCase(gen.slowPathJump()); >- m_getByIds.append(gen); >- >- emitValueProfilingSite(); >- emitPutVirtualRegister(resultVReg); >-} >- >-void JIT::emitSlow_op_try_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int resultVReg = currentInstruction[1].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >- >- JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >- >- Label coldPathBegin = label(); >- >- Call call = callOperation(operationTryGetByIdOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl()); >- >- gen.reportSlowPathCall(coldPathBegin, call); >-} >- >-void JIT::emit_op_get_by_id_direct(Instruction* currentInstruction) >-{ >- int resultVReg = currentInstruction[1].u.operand; >- int baseVReg = currentInstruction[2].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >- >- emitGetVirtualRegister(baseVReg, regT0); >- >- emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >- >- JITGetByIdGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >- ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetDirect); >- gen.generateFastPath(*this); >- addSlowCase(gen.slowPathJump()); >- m_getByIds.append(gen); >- >- emitValueProfilingSite(); >- emitPutVirtualRegister(resultVReg); >-} >- >-void JIT::emitSlow_op_get_by_id_direct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int resultVReg = currentInstruction[1].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >- >- JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >- >- Label coldPathBegin = label(); >- >- Call call = callOperationWithProfile(operationGetByIdDirectOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl()); >- >- gen.reportSlowPathCall(coldPathBegin, call); >-} >- >-void JIT::emit_op_get_by_id(Instruction* currentInstruction) >-{ >- int resultVReg = currentInstruction[1].u.operand; >- int baseVReg = currentInstruction[2].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >- >- emitGetVirtualRegister(baseVReg, regT0); >- >- emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >- >- if (*ident == m_vm->propertyNames->length && shouldEmitProfiling()) >- emitArrayProfilingSiteForBytecodeIndexWithCell(regT0, regT1, m_bytecodeOffset); >- >- JITGetByIdGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >- ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get); >- gen.generateFastPath(*this); >- addSlowCase(gen.slowPathJump()); >- m_getByIds.append(gen); >- >- emitValueProfilingSite(); >- emitPutVirtualRegister(resultVReg); >-} >- >-void JIT::emit_op_get_by_id_with_this(Instruction* currentInstruction) >-{ >- int resultVReg = currentInstruction[1].u.operand; >- int baseVReg = currentInstruction[2].u.operand; >- int thisVReg = currentInstruction[3].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[4].u.operand)); >- >- emitGetVirtualRegister(baseVReg, regT0); >- 
emitGetVirtualRegister(thisVReg, regT1); >- emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >- emitJumpSlowCaseIfNotJSCell(regT1, thisVReg); >- >- JITGetByIdWithThisGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >- ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), JSValueRegs(regT1), AccessType::GetWithThis); >- gen.generateFastPath(*this); >- addSlowCase(gen.slowPathJump()); >- m_getByIdsWithThis.append(gen); >- >- emitValueProfilingSite(); >- emitPutVirtualRegister(resultVReg); >-} >- >-void JIT::emitSlow_op_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int resultVReg = currentInstruction[1].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >- >- JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++]; >- >- Label coldPathBegin = label(); >- >- Call call = callOperationWithProfile(operationGetByIdOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl()); >- >- gen.reportSlowPathCall(coldPathBegin, call); >-} >- >-void JIT::emitSlow_op_get_by_id_with_this(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int resultVReg = currentInstruction[1].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[4].u.operand)); >- >- JITGetByIdWithThisGenerator& gen = m_getByIdsWithThis[m_getByIdWithThisIndex++]; >- >- Label coldPathBegin = label(); >- >- Call call = callOperationWithProfile(operationGetByIdWithThisOptimize, resultVReg, gen.stubInfo(), regT0, regT1, ident->impl()); >- >- gen.reportSlowPathCall(coldPathBegin, call); >-} >- >-void JIT::emit_op_put_by_id(Instruction* currentInstruction) >-{ >- int baseVReg = currentInstruction[1].u.operand; >- int valueVReg = currentInstruction[3].u.operand; >- unsigned direct = currentInstruction[8].u.putByIdFlags & PutByIdIsDirect; >- >- // In order to be able to patch both the Structure, and the object offset, we store one pointer, >- // to just after the arguments have been loaded into registers 'hotPathBegin', and we generate code >- // such that the Structure & offset are always at the same distance from this. >- >- emitGetVirtualRegisters(baseVReg, regT0, valueVReg, regT1); >- >- emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >- >- JITPutByIdGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >- JSValueRegs(regT0), JSValueRegs(regT1), regT2, m_codeBlock->ecmaMode(), >- direct ? 
Direct : NotDirect); >- >- gen.generateFastPath(*this); >- addSlowCase(gen.slowPathJump()); >- >- emitWriteBarrier(baseVReg, valueVReg, ShouldFilterBase); >- >- m_putByIds.append(gen); >-} >- >-void JIT::emitSlow_op_put_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[2].u.operand)); >- >- Label coldPathBegin(this); >- >- JITPutByIdGenerator& gen = m_putByIds[m_putByIdIndex++]; >- >- Call call = callOperation(gen.slowPathFunction(), gen.stubInfo(), regT1, regT0, ident->impl()); >- >- gen.reportSlowPathCall(coldPathBegin, call); >-} >- >-void JIT::emit_op_in_by_id(Instruction* currentInstruction) >-{ >- int resultVReg = currentInstruction[1].u.operand; >- int baseVReg = currentInstruction[2].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >- >- emitGetVirtualRegister(baseVReg, regT0); >- >- emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); >- >- JITInByIdGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), >- ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0)); >- gen.generateFastPath(*this); >- addSlowCase(gen.slowPathJump()); >- m_inByIds.append(gen); >- >- emitPutVirtualRegister(resultVReg); >-} >- >-void JIT::emitSlow_op_in_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int resultVReg = currentInstruction[1].u.operand; >- const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand)); >- >- JITInByIdGenerator& gen = m_inByIds[m_inByIdIndex++]; >- >- Label coldPathBegin = label(); >- >- Call call = callOperation(operationInByIdOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl()); >- >- gen.reportSlowPathCall(coldPathBegin, call); >-} >- >-void JIT::emitVarInjectionCheck(bool needsVarInjectionChecks) >-{ >- if (!needsVarInjectionChecks) >- return; >- addSlowCase(branch8(Equal, AbsoluteAddress(m_codeBlock->globalObject()->varInjectionWatchpoint()->addressOfState()), TrustedImm32(IsInvalidated))); >-} >- >-void JIT::emitResolveClosure(int dst, int scope, bool needsVarInjectionChecks, unsigned depth) >-{ >- emitVarInjectionCheck(needsVarInjectionChecks); >- emitGetVirtualRegister(scope, regT0); >- for (unsigned i = 0; i < depth; ++i) >- loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_resolve_scope(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int scope = currentInstruction[2].u.operand; >- ResolveType resolveType = static_cast<ResolveType>(copiedInstruction(currentInstruction)[4].u.operand); >- unsigned depth = currentInstruction[5].u.operand; >- >- auto emitCode = [&] (ResolveType resolveType) { >- switch (resolveType) { >- case GlobalProperty: >- case GlobalVar: >- case GlobalPropertyWithVarInjectionChecks: >- case GlobalVarWithVarInjectionChecks: >- case GlobalLexicalVar: >- case GlobalLexicalVarWithVarInjectionChecks: { >- JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_codeBlock); >- RELEASE_ASSERT(constantScope); >- emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >- move(TrustedImmPtr(constantScope), regT0); >- emitPutVirtualRegister(dst); >- break; >- } >- case ClosureVar: >- case ClosureVarWithVarInjectionChecks: >- emitResolveClosure(dst, scope, 
needsVarInjectionChecks(resolveType), depth); >- break; >- case ModuleVar: >- move(TrustedImmPtr(currentInstruction[6].u.jsCell.get()), regT0); >- emitPutVirtualRegister(dst); >- break; >- case Dynamic: >- addSlowCase(jump()); >- break; >- case LocalClosureVar: >- case UnresolvedProperty: >- case UnresolvedPropertyWithVarInjectionChecks: >- RELEASE_ASSERT_NOT_REACHED(); >- } >- }; >- >- switch (resolveType) { >- case UnresolvedProperty: >- case UnresolvedPropertyWithVarInjectionChecks: { >- JumpList skipToEnd; >- load32(&currentInstruction[4], regT0); >- >- Jump notGlobalProperty = branch32(NotEqual, regT0, TrustedImm32(GlobalProperty)); >- emitCode(GlobalProperty); >- skipToEnd.append(jump()); >- notGlobalProperty.link(this); >- >- Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks)); >- emitCode(GlobalPropertyWithVarInjectionChecks); >- skipToEnd.append(jump()); >- notGlobalPropertyWithVarInjections.link(this); >- >- Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar)); >- emitCode(GlobalLexicalVar); >- skipToEnd.append(jump()); >- notGlobalLexicalVar.link(this); >- >- Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks)); >- emitCode(GlobalLexicalVarWithVarInjectionChecks); >- skipToEnd.append(jump()); >- notGlobalLexicalVarWithVarInjections.link(this); >- >- addSlowCase(jump()); >- skipToEnd.link(this); >- break; >- } >- >- default: >- emitCode(resolveType); >- break; >- } >-} >- >-void JIT::emitLoadWithStructureCheck(int scope, Structure** structureSlot) >-{ >- loadPtr(structureSlot, regT1); >- emitGetVirtualRegister(scope, regT0); >- addSlowCase(branchTestPtr(Zero, regT1)); >- load32(Address(regT1, Structure::structureIDOffset()), regT1); >- addSlowCase(branch32(NotEqual, Address(regT0, JSCell::structureIDOffset()), regT1)); >-} >- >-void JIT::emitGetVarFromPointer(JSValue* operand, GPRReg reg) >-{ >- loadPtr(operand, reg); >-} >- >-void JIT::emitGetVarFromIndirectPointer(JSValue** operand, GPRReg reg) >-{ >- loadPtr(operand, reg); >- loadPtr(reg, reg); >-} >- >-void JIT::emitGetClosureVar(int scope, uintptr_t operand) >-{ >- emitGetVirtualRegister(scope, regT0); >- loadPtr(Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register)), regT0); >-} >- >-void JIT::emit_op_get_from_scope(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int scope = currentInstruction[2].u.operand; >- ResolveType resolveType = GetPutInfo(copiedInstruction(currentInstruction)[4].u.operand).resolveType(); >- Structure** structureSlot = currentInstruction[5].u.structure.slot(); >- uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&currentInstruction[6].u.pointer); >- >- auto emitCode = [&] (ResolveType resolveType, bool indirectLoadForOperand) { >- switch (resolveType) { >- case GlobalProperty: >- case GlobalPropertyWithVarInjectionChecks: { >- emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection since we don't cache structures for anything but the GlobalObject. Additionally, resolve_scope handles checking for the var injection. 
>- GPRReg base = regT0; >- GPRReg result = regT0; >- GPRReg offset = regT1; >- GPRReg scratch = regT2; >- >- jitAssert(scopedLambda<Jump(void)>([&] () -> Jump { >- return branchPtr(Equal, base, TrustedImmPtr(m_codeBlock->globalObject())); >- })); >- >- load32(operandSlot, offset); >- if (!ASSERT_DISABLED) { >- Jump isOutOfLine = branch32(GreaterThanOrEqual, offset, TrustedImm32(firstOutOfLineOffset)); >- abortWithReason(JITOffsetIsNotOutOfLine); >- isOutOfLine.link(this); >- } >- loadPtr(Address(base, JSObject::butterflyOffset()), scratch); >- neg32(offset); >- signExtend32ToPtr(offset, offset); >- load64(BaseIndex(scratch, offset, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue)), result); >- break; >- } >- case GlobalVar: >- case GlobalVarWithVarInjectionChecks: >- case GlobalLexicalVar: >- case GlobalLexicalVarWithVarInjectionChecks: >- emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >- if (indirectLoadForOperand) >- emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT0); >- else >- emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT0); >- if (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks) // TDZ check. >- addSlowCase(branchIfEmpty(regT0)); >- break; >- case ClosureVar: >- case ClosureVarWithVarInjectionChecks: >- emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >- emitGetClosureVar(scope, *operandSlot); >- break; >- case Dynamic: >- addSlowCase(jump()); >- break; >- case LocalClosureVar: >- case ModuleVar: >- case UnresolvedProperty: >- case UnresolvedPropertyWithVarInjectionChecks: >- RELEASE_ASSERT_NOT_REACHED(); >- } >- }; >- >- switch (resolveType) { >- case UnresolvedProperty: >- case UnresolvedPropertyWithVarInjectionChecks: { >- JumpList skipToEnd; >- load32(&currentInstruction[4], regT0); >- and32(TrustedImm32(GetPutInfo::typeBits), regT0); // Load ResolveType into T0 >- >- Jump isGlobalProperty = branch32(Equal, regT0, TrustedImm32(GlobalProperty)); >- Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks)); >- isGlobalProperty.link(this); >- emitCode(GlobalProperty, false); >- skipToEnd.append(jump()); >- notGlobalPropertyWithVarInjections.link(this); >- >- Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar)); >- emitCode(GlobalLexicalVar, true); >- skipToEnd.append(jump()); >- notGlobalLexicalVar.link(this); >- >- Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks)); >- emitCode(GlobalLexicalVarWithVarInjectionChecks, true); >- skipToEnd.append(jump()); >- notGlobalLexicalVarWithVarInjections.link(this); >- >- addSlowCase(jump()); >- >- skipToEnd.link(this); >- break; >- } >- >- default: >- emitCode(resolveType, false); >- break; >- } >- emitPutVirtualRegister(dst); >- emitValueProfilingSite(); >-} >- >-void JIT::emitSlow_op_get_from_scope(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- int dst = currentInstruction[1].u.operand; >- callOperationWithProfile(operationGetFromScope, dst, currentInstruction); >-} >- >-void JIT::emitPutGlobalVariable(JSValue* operand, int value, WatchpointSet* set) >-{ >- emitGetVirtualRegister(value, regT0); >- emitNotifyWrite(set); >- storePtr(regT0, operand); >-} >-void JIT::emitPutGlobalVariableIndirect(JSValue** addressOfOperand, int value, WatchpointSet** indirectWatchpointSet) >-{ >- 
emitGetVirtualRegister(value, regT0); >- loadPtr(indirectWatchpointSet, regT1); >- emitNotifyWrite(regT1); >- loadPtr(addressOfOperand, regT1); >- storePtr(regT0, regT1); >-} >- >-void JIT::emitPutClosureVar(int scope, uintptr_t operand, int value, WatchpointSet* set) >-{ >- emitGetVirtualRegister(value, regT1); >- emitGetVirtualRegister(scope, regT0); >- emitNotifyWrite(set); >- storePtr(regT1, Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register))); >-} >- >-void JIT::emit_op_put_to_scope(Instruction* currentInstruction) >-{ >- int scope = currentInstruction[1].u.operand; >- int value = currentInstruction[3].u.operand; >- GetPutInfo getPutInfo = GetPutInfo(copiedInstruction(currentInstruction)[4].u.operand); >- ResolveType resolveType = getPutInfo.resolveType(); >- Structure** structureSlot = currentInstruction[5].u.structure.slot(); >- uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&currentInstruction[6].u.pointer); >- >- auto emitCode = [&] (ResolveType resolveType, bool indirectLoadForOperand) { >- switch (resolveType) { >- case GlobalProperty: >- case GlobalPropertyWithVarInjectionChecks: { >- emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection since we don't cache structures for anything but the GlobalObject. Additionally, resolve_scope handles checking for the var injection. >- emitGetVirtualRegister(value, regT2); >- >- jitAssert(scopedLambda<Jump(void)>([&] () -> Jump { >- return branchPtr(Equal, regT0, TrustedImmPtr(m_codeBlock->globalObject())); >- })); >- >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0); >- loadPtr(operandSlot, regT1); >- negPtr(regT1); >- storePtr(regT2, BaseIndex(regT0, regT1, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue))); >- emitWriteBarrier(m_codeBlock->globalObject(), value, ShouldFilterValue); >- break; >- } >- case GlobalVar: >- case GlobalVarWithVarInjectionChecks: >- case GlobalLexicalVar: >- case GlobalLexicalVarWithVarInjectionChecks: { >- JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_codeBlock); >- RELEASE_ASSERT(constantScope); >- emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >- if (!isInitialization(getPutInfo.initializationMode()) && (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks)) { >- // We need to do a TDZ check here because we can't always prove we need to emit TDZ checks statically. 
>- if (indirectLoadForOperand) >- emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT0); >- else >- emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT0); >- addSlowCase(branchIfEmpty(regT0)); >- } >- if (indirectLoadForOperand) >- emitPutGlobalVariableIndirect(bitwise_cast<JSValue**>(operandSlot), value, bitwise_cast<WatchpointSet**>(&currentInstruction[5])); >- else >- emitPutGlobalVariable(bitwise_cast<JSValue*>(*operandSlot), value, currentInstruction[5].u.watchpointSet); >- emitWriteBarrier(constantScope, value, ShouldFilterValue); >- break; >- } >- case LocalClosureVar: >- case ClosureVar: >- case ClosureVarWithVarInjectionChecks: >- emitVarInjectionCheck(needsVarInjectionChecks(resolveType)); >- emitPutClosureVar(scope, *operandSlot, value, currentInstruction[5].u.watchpointSet); >- emitWriteBarrier(scope, value, ShouldFilterValue); >- break; >- case ModuleVar: >- case Dynamic: >- addSlowCase(jump()); >- break; >- case UnresolvedProperty: >- case UnresolvedPropertyWithVarInjectionChecks: >- RELEASE_ASSERT_NOT_REACHED(); >- break; >- } >- }; >- >- switch (resolveType) { >- case UnresolvedProperty: >- case UnresolvedPropertyWithVarInjectionChecks: { >- JumpList skipToEnd; >- load32(&currentInstruction[4], regT0); >- and32(TrustedImm32(GetPutInfo::typeBits), regT0); // Load ResolveType into T0 >- >- Jump isGlobalProperty = branch32(Equal, regT0, TrustedImm32(GlobalProperty)); >- Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks)); >- isGlobalProperty.link(this); >- emitCode(GlobalProperty, false); >- skipToEnd.append(jump()); >- notGlobalPropertyWithVarInjections.link(this); >- >- Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar)); >- emitCode(GlobalLexicalVar, true); >- skipToEnd.append(jump()); >- notGlobalLexicalVar.link(this); >- >- Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks)); >- emitCode(GlobalLexicalVarWithVarInjectionChecks, true); >- skipToEnd.append(jump()); >- notGlobalLexicalVarWithVarInjections.link(this); >- >- addSlowCase(jump()); >- >- skipToEnd.link(this); >- break; >- } >- >- default: >- emitCode(resolveType, false); >- break; >- } >-} >- >-void JIT::emitSlow_op_put_to_scope(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- linkAllSlowCases(iter); >- >- GetPutInfo getPutInfo = GetPutInfo(copiedInstruction(currentInstruction)[4].u.operand); >- ResolveType resolveType = getPutInfo.resolveType(); >- if (resolveType == ModuleVar) { >- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_throw_strict_mode_readonly_property_write_error); >- slowPathCall.call(); >- } else >- callOperation(operationPutToScope, currentInstruction); >-} >- >-void JIT::emit_op_get_from_arguments(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int arguments = currentInstruction[2].u.operand; >- int index = currentInstruction[3].u.operand; >- >- emitGetVirtualRegister(arguments, regT0); >- load64(Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>)), regT0); >- emitValueProfilingSite(); >- emitPutVirtualRegister(dst); >-} >- >-void JIT::emit_op_put_to_arguments(Instruction* currentInstruction) >-{ >- int arguments = currentInstruction[1].u.operand; >- int index = currentInstruction[2].u.operand; >- int value = currentInstruction[3].u.operand; >- >- emitGetVirtualRegister(arguments, 
regT0); >- emitGetVirtualRegister(value, regT1); >- store64(regT1, Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>))); >- >- emitWriteBarrier(arguments, value, ShouldFilterValue); >-} >- >-#endif // USE(JSVALUE64) >- >-#if USE(JSVALUE64) >-void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode) >-{ >- Jump valueNotCell; >- if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) { >- emitGetVirtualRegister(value, regT0); >- valueNotCell = branchIfNotCell(regT0); >- } >- >- emitGetVirtualRegister(owner, regT0); >- Jump ownerNotCell; >- if (mode == ShouldFilterBaseAndValue || mode == ShouldFilterBase) >- ownerNotCell = branchIfNotCell(regT0); >- >- Jump ownerIsRememberedOrInEden = barrierBranch(*vm(), regT0, regT1); >- callOperation(operationWriteBarrierSlowPath, regT0); >- ownerIsRememberedOrInEden.link(this); >- >- if (mode == ShouldFilterBaseAndValue || mode == ShouldFilterBase) >- ownerNotCell.link(this); >- if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) >- valueNotCell.link(this); >-} >- >-void JIT::emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode mode) >-{ >- emitGetVirtualRegister(value, regT0); >- Jump valueNotCell; >- if (mode == ShouldFilterValue) >- valueNotCell = branchIfNotCell(regT0); >- >- emitWriteBarrier(owner); >- >- if (mode == ShouldFilterValue) >- valueNotCell.link(this); >-} >- >-#else // USE(JSVALUE64) >- >-void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode) >-{ >- Jump valueNotCell; >- if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) { >- emitLoadTag(value, regT0); >- valueNotCell = branchIfNotCell(regT0); >- } >- >- emitLoad(owner, regT0, regT1); >- Jump ownerNotCell; >- if (mode == ShouldFilterBase || mode == ShouldFilterBaseAndValue) >- ownerNotCell = branchIfNotCell(regT0); >- >- Jump ownerIsRememberedOrInEden = barrierBranch(*vm(), regT1, regT2); >- callOperation(operationWriteBarrierSlowPath, regT1); >- ownerIsRememberedOrInEden.link(this); >- >- if (mode == ShouldFilterBase || mode == ShouldFilterBaseAndValue) >- ownerNotCell.link(this); >- if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) >- valueNotCell.link(this); >-} >- >-void JIT::emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode mode) >-{ >- Jump valueNotCell; >- if (mode == ShouldFilterValue) { >- emitLoadTag(value, regT0); >- valueNotCell = branchIfNotCell(regT0); >- } >- >- emitWriteBarrier(owner); >- >- if (mode == ShouldFilterValue) >- valueNotCell.link(this); >-} >- >-#endif // USE(JSVALUE64) >- >-void JIT::emitWriteBarrier(JSCell* owner) >-{ >- Jump ownerIsRememberedOrInEden = barrierBranch(*vm(), owner, regT0); >- callOperation(operationWriteBarrierSlowPath, owner); >- ownerIsRememberedOrInEden.link(this); >-} >- >-void JIT::emitByValIdentifierCheck(ByValInfo* byValInfo, RegisterID cell, RegisterID scratch, const Identifier& propertyName, JumpList& slowCases) >-{ >- if (propertyName.isSymbol()) >- slowCases.append(branchPtr(NotEqual, cell, TrustedImmPtr(byValInfo->cachedSymbol.get()))); >- else { >- slowCases.append(branchIfNotString(cell)); >- loadPtr(Address(cell, JSString::offsetOfValue()), scratch); >- slowCases.append(branchPtr(NotEqual, scratch, TrustedImmPtr(propertyName.impl()))); >- } >-} >- >-void JIT::privateCompileGetByVal(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode) >-{ >- Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >- 
>- PatchableJump badType; >- JumpList slowCases; >- >- switch (arrayMode) { >- case JITInt32: >- slowCases = emitInt32GetByVal(currentInstruction, badType); >- break; >- case JITDouble: >- slowCases = emitDoubleGetByVal(currentInstruction, badType); >- break; >- case JITContiguous: >- slowCases = emitContiguousGetByVal(currentInstruction, badType); >- break; >- case JITArrayStorage: >- slowCases = emitArrayStorageGetByVal(currentInstruction, badType); >- break; >- case JITDirectArguments: >- slowCases = emitDirectArgumentsGetByVal(currentInstruction, badType); >- break; >- case JITScopedArguments: >- slowCases = emitScopedArgumentsGetByVal(currentInstruction, badType); >- break; >- default: >- TypedArrayType type = typedArrayTypeForJITArrayMode(arrayMode); >- if (isInt(type)) >- slowCases = emitIntTypedArrayGetByVal(currentInstruction, badType, type); >- else >- slowCases = emitFloatTypedArrayGetByVal(currentInstruction, badType, type); >- break; >- } >- >- Jump done = jump(); >- >- LinkBuffer patchBuffer(*this, m_codeBlock); >- >- patchBuffer.link(badType, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- >- patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >- >- byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >- m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >- "Baseline get_by_val stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >- >- MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >- MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationGetByValGeneric)); >-} >- >-void JIT::privateCompileGetByValWithCachedId(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, const Identifier& propertyName) >-{ >- Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >- >- Jump fastDoneCase; >- Jump slowDoneCase; >- JumpList slowCases; >- >- JITGetByIdGenerator gen = emitGetByValWithCachedId(byValInfo, currentInstruction, propertyName, fastDoneCase, slowDoneCase, slowCases); >- >- ConcurrentJSLocker locker(m_codeBlock->m_lock); >- LinkBuffer patchBuffer(*this, m_codeBlock); >- patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- patchBuffer.link(fastDoneCase, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >- patchBuffer.link(slowDoneCase, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToNextHotPath)); >- if (!m_exceptionChecks.empty()) >- patchBuffer.link(m_exceptionChecks, byValInfo->exceptionHandler); >- >- for (const auto& callSite : m_calls) { >- if (callSite.callee) >- patchBuffer.link(callSite.from, callSite.callee); >- } >- gen.finalize(patchBuffer, patchBuffer); >- >- byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >- m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >- "Baseline get_by_val with cached property name '%s' stub for %s, return point %p", propertyName.impl()->utf8().data(), toCString(*m_codeBlock).data(), 
returnAddress.value()); >- byValInfo->stubInfo = gen.stubInfo(); >- >- MacroAssembler::repatchJump(byValInfo->notIndexJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >- MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationGetByValGeneric)); >-} >- >-void JIT::privateCompilePutByVal(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode) >-{ >- Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >- >- PatchableJump badType; >- JumpList slowCases; >- >- bool needsLinkForWriteBarrier = false; >- >- switch (arrayMode) { >- case JITInt32: >- slowCases = emitInt32PutByVal(currentInstruction, badType); >- break; >- case JITDouble: >- slowCases = emitDoublePutByVal(currentInstruction, badType); >- break; >- case JITContiguous: >- slowCases = emitContiguousPutByVal(currentInstruction, badType); >- needsLinkForWriteBarrier = true; >- break; >- case JITArrayStorage: >- slowCases = emitArrayStoragePutByVal(currentInstruction, badType); >- needsLinkForWriteBarrier = true; >- break; >- default: >- TypedArrayType type = typedArrayTypeForJITArrayMode(arrayMode); >- if (isInt(type)) >- slowCases = emitIntTypedArrayPutByVal(currentInstruction, badType, type); >- else >- slowCases = emitFloatTypedArrayPutByVal(currentInstruction, badType, type); >- break; >- } >- >- Jump done = jump(); >- >- LinkBuffer patchBuffer(*this, m_codeBlock); >- patchBuffer.link(badType, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >- if (needsLinkForWriteBarrier) { >- ASSERT(removeCodePtrTag(m_calls.last().callee.executableAddress()) == removeCodePtrTag(operationWriteBarrierSlowPath)); >- patchBuffer.link(m_calls.last().from, m_calls.last().callee); >- } >- >- bool isDirect = Interpreter::getOpcodeID(currentInstruction->u.opcode) == op_put_by_val_direct; >- if (!isDirect) { >- byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >- m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >- "Baseline put_by_val stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >- >- } else { >- byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >- m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >- "Baseline put_by_val_direct stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); >- } >- MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >- MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(isDirect ? 
operationDirectPutByValGeneric : operationPutByValGeneric)); >-} >- >-void JIT::privateCompilePutByValWithCachedId(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, PutKind putKind, const Identifier& propertyName) >-{ >- Instruction* currentInstruction = &m_codeBlock->instructions()[byValInfo->bytecodeIndex]; >- >- JumpList doneCases; >- JumpList slowCases; >- >- JITPutByIdGenerator gen = emitPutByValWithCachedId(byValInfo, currentInstruction, putKind, propertyName, doneCases, slowCases); >- >- ConcurrentJSLocker locker(m_codeBlock->m_lock); >- LinkBuffer patchBuffer(*this, m_codeBlock); >- patchBuffer.link(slowCases, CodeLocationLabel<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>::createFromExecutableAddress(returnAddress.value())).labelAtOffset(byValInfo->returnAddressToSlowPath)); >- patchBuffer.link(doneCases, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); >- if (!m_exceptionChecks.empty()) >- patchBuffer.link(m_exceptionChecks, byValInfo->exceptionHandler); >- >- for (const auto& callSite : m_calls) { >- if (callSite.callee) >- patchBuffer.link(callSite.from, callSite.callee); >- } >- gen.finalize(patchBuffer, patchBuffer); >- >- byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( >- m_codeBlock, patchBuffer, JITStubRoutinePtrTag, >- "Baseline put_by_val%s with cached property name '%s' stub for %s, return point %p", (putKind == Direct) ? "_direct" : "", propertyName.impl()->utf8().data(), toCString(*m_codeBlock).data(), returnAddress.value()); >- byValInfo->stubInfo = gen.stubInfo(); >- >- MacroAssembler::repatchJump(byValInfo->notIndexJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code())); >- MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(putKind == Direct ? 
operationDirectPutByValGeneric : operationPutByValGeneric)); >-} >- >- >-JIT::JumpList JIT::emitDirectArgumentsGetByVal(Instruction*, PatchableJump& badType) >-{ >- JumpList slowCases; >- >-#if USE(JSVALUE64) >- RegisterID base = regT0; >- RegisterID property = regT1; >- JSValueRegs result = JSValueRegs(regT0); >- RegisterID scratch = regT3; >- RegisterID scratch2 = regT4; >-#else >- RegisterID base = regT0; >- RegisterID property = regT2; >- JSValueRegs result = JSValueRegs(regT1, regT0); >- RegisterID scratch = regT3; >- RegisterID scratch2 = regT4; >-#endif >- >- load8(Address(base, JSCell::typeInfoTypeOffset()), scratch); >- badType = patchableBranch32(NotEqual, scratch, TrustedImm32(DirectArgumentsType)); >- >- load32(Address(base, DirectArguments::offsetOfLength()), scratch2); >- slowCases.append(branch32(AboveOrEqual, property, scratch2)); >- slowCases.append(branchTestPtr(NonZero, Address(base, DirectArguments::offsetOfMappedArguments()))); >- >- loadValue(BaseIndex(base, property, TimesEight, DirectArguments::storageOffset()), result); >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitScopedArgumentsGetByVal(Instruction*, PatchableJump& badType) >-{ >- JumpList slowCases; >- >-#if USE(JSVALUE64) >- RegisterID base = regT0; >- RegisterID property = regT1; >- JSValueRegs result = JSValueRegs(regT0); >- RegisterID scratch = regT3; >- RegisterID scratch2 = regT4; >- RegisterID scratch3 = regT5; >-#else >- RegisterID base = regT0; >- RegisterID property = regT2; >- JSValueRegs result = JSValueRegs(regT1, regT0); >- RegisterID scratch = regT3; >- RegisterID scratch2 = regT4; >- RegisterID scratch3 = regT5; >-#endif >- >- load8(Address(base, JSCell::typeInfoTypeOffset()), scratch); >- badType = patchableBranch32(NotEqual, scratch, TrustedImm32(ScopedArgumentsType)); >- loadPtr(Address(base, ScopedArguments::offsetOfStorage()), scratch3); >- xorPtr(TrustedImmPtr(ScopedArgumentsPoison::key()), scratch3); >- slowCases.append(branch32(AboveOrEqual, property, Address(scratch3, ScopedArguments::offsetOfTotalLengthInStorage()))); >- >- loadPtr(Address(base, ScopedArguments::offsetOfTable()), scratch); >- xorPtr(TrustedImmPtr(ScopedArgumentsPoison::key()), scratch); >- load32(Address(scratch, ScopedArgumentsTable::offsetOfLength()), scratch2); >- Jump overflowCase = branch32(AboveOrEqual, property, scratch2); >- loadPtr(Address(base, ScopedArguments::offsetOfScope()), scratch2); >- xorPtr(TrustedImmPtr(ScopedArgumentsPoison::key()), scratch2); >- loadPtr(Address(scratch, ScopedArgumentsTable::offsetOfArguments()), scratch); >- load32(BaseIndex(scratch, property, TimesFour), scratch); >- slowCases.append(branch32(Equal, scratch, TrustedImm32(ScopeOffset::invalidOffset))); >- loadValue(BaseIndex(scratch2, scratch, TimesEight, JSLexicalEnvironment::offsetOfVariables()), result); >- Jump done = jump(); >- overflowCase.link(this); >- sub32(property, scratch2); >- neg32(scratch2); >- loadValue(BaseIndex(scratch3, scratch2, TimesEight), result); >- slowCases.append(branchIfEmpty(result)); >- done.link(this); >- >- load32(Address(scratch3, ScopedArguments::offsetOfTotalLengthInStorage()), scratch); >- emitPreparePreciseIndexMask32(property, scratch, scratch2); >- andPtr(scratch2, result.payloadGPR()); >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitIntTypedArrayGetByVal(Instruction*, PatchableJump& badType, TypedArrayType type) >-{ >- ASSERT(isInt(type)); >- >- // The best way to test the array type is to use the classInfo. 
We need to do so without >- // clobbering the register that holds the indexing type, base, and property. >- >-#if USE(JSVALUE64) >- RegisterID base = regT0; >- RegisterID property = regT1; >- RegisterID resultPayload = regT0; >- RegisterID scratch = regT3; >- RegisterID scratch2 = regT4; >-#else >- RegisterID base = regT0; >- RegisterID property = regT2; >- RegisterID resultPayload = regT0; >- RegisterID resultTag = regT1; >- RegisterID scratch = regT3; >- RegisterID scratch2 = regT4; >-#endif >- >- JumpList slowCases; >- >- load8(Address(base, JSCell::typeInfoTypeOffset()), scratch); >- badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type))); >- slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength()))); >- loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch); >- cageConditionally(Gigacage::Primitive, scratch, scratch2); >- >- switch (elementSize(type)) { >- case 1: >- if (JSC::isSigned(type)) >- load8SignedExtendTo32(BaseIndex(scratch, property, TimesOne), resultPayload); >- else >- load8(BaseIndex(scratch, property, TimesOne), resultPayload); >- break; >- case 2: >- if (JSC::isSigned(type)) >- load16SignedExtendTo32(BaseIndex(scratch, property, TimesTwo), resultPayload); >- else >- load16(BaseIndex(scratch, property, TimesTwo), resultPayload); >- break; >- case 4: >- load32(BaseIndex(scratch, property, TimesFour), resultPayload); >- break; >- default: >- CRASH(); >- } >- >- Jump done; >- if (type == TypeUint32) { >- Jump canBeInt = branch32(GreaterThanOrEqual, resultPayload, TrustedImm32(0)); >- >- convertInt32ToDouble(resultPayload, fpRegT0); >- addDouble(AbsoluteAddress(&twoToThe32), fpRegT0); >-#if USE(JSVALUE64) >- moveDoubleTo64(fpRegT0, resultPayload); >- sub64(tagTypeNumberRegister, resultPayload); >-#else >- moveDoubleToInts(fpRegT0, resultPayload, resultTag); >-#endif >- >- done = jump(); >- canBeInt.link(this); >- } >- >-#if USE(JSVALUE64) >- or64(tagTypeNumberRegister, resultPayload); >-#else >- move(TrustedImm32(JSValue::Int32Tag), resultTag); >-#endif >- if (done.isSet()) >- done.link(this); >- return slowCases; >-} >- >-JIT::JumpList JIT::emitFloatTypedArrayGetByVal(Instruction*, PatchableJump& badType, TypedArrayType type) >-{ >- ASSERT(isFloat(type)); >- >-#if USE(JSVALUE64) >- RegisterID base = regT0; >- RegisterID property = regT1; >- RegisterID resultPayload = regT0; >- RegisterID scratch = regT3; >- RegisterID scratch2 = regT4; >-#else >- RegisterID base = regT0; >- RegisterID property = regT2; >- RegisterID resultPayload = regT0; >- RegisterID resultTag = regT1; >- RegisterID scratch = regT3; >- RegisterID scratch2 = regT4; >-#endif >- >- JumpList slowCases; >- >- load8(Address(base, JSCell::typeInfoTypeOffset()), scratch); >- badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type))); >- slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength()))); >- loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch); >- cageConditionally(Gigacage::Primitive, scratch, scratch2); >- >- switch (elementSize(type)) { >- case 4: >- loadFloat(BaseIndex(scratch, property, TimesFour), fpRegT0); >- convertFloatToDouble(fpRegT0, fpRegT0); >- break; >- case 8: { >- loadDouble(BaseIndex(scratch, property, TimesEight), fpRegT0); >- break; >- } >- default: >- CRASH(); >- } >- >- Jump notNaN = branchDouble(DoubleEqual, fpRegT0, fpRegT0); >- static const double NaN = PNaN; >- loadDouble(TrustedImmPtr(&NaN), 
fpRegT0); >- notNaN.link(this); >- >-#if USE(JSVALUE64) >- moveDoubleTo64(fpRegT0, resultPayload); >- sub64(tagTypeNumberRegister, resultPayload); >-#else >- moveDoubleToInts(fpRegT0, resultPayload, resultTag); >-#endif >- return slowCases; >-} >- >-JIT::JumpList JIT::emitIntTypedArrayPutByVal(Instruction* currentInstruction, PatchableJump& badType, TypedArrayType type) >-{ >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- ASSERT(isInt(type)); >- >- int value = currentInstruction[3].u.operand; >- >-#if USE(JSVALUE64) >- RegisterID base = regT0; >- RegisterID property = regT1; >- RegisterID earlyScratch = regT3; >- RegisterID lateScratch = regT2; >- RegisterID lateScratch2 = regT4; >-#else >- RegisterID base = regT0; >- RegisterID property = regT2; >- RegisterID earlyScratch = regT3; >- RegisterID lateScratch = regT1; >- RegisterID lateScratch2 = regT4; >-#endif >- >- JumpList slowCases; >- >- load8(Address(base, JSCell::typeInfoTypeOffset()), earlyScratch); >- badType = patchableBranch32(NotEqual, earlyScratch, TrustedImm32(typeForTypedArrayType(type))); >- Jump inBounds = branch32(Below, property, Address(base, JSArrayBufferView::offsetOfLength())); >- emitArrayProfileOutOfBoundsSpecialCase(profile); >- slowCases.append(jump()); >- inBounds.link(this); >- >-#if USE(JSVALUE64) >- emitGetVirtualRegister(value, earlyScratch); >- slowCases.append(branchIfNotInt32(earlyScratch)); >-#else >- emitLoad(value, lateScratch, earlyScratch); >- slowCases.append(branchIfNotInt32(lateScratch)); >-#endif >- >- // We would be loading this into base as in get_by_val, except that the slow >- // path expects the base to be unclobbered. >- loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), lateScratch); >- cageConditionally(Gigacage::Primitive, lateScratch, lateScratch2); >- >- if (isClamped(type)) { >- ASSERT(elementSize(type) == 1); >- ASSERT(!JSC::isSigned(type)); >- Jump inBounds = branch32(BelowOrEqual, earlyScratch, TrustedImm32(0xff)); >- Jump tooBig = branch32(GreaterThan, earlyScratch, TrustedImm32(0xff)); >- xor32(earlyScratch, earlyScratch); >- Jump clamped = jump(); >- tooBig.link(this); >- move(TrustedImm32(0xff), earlyScratch); >- clamped.link(this); >- inBounds.link(this); >- } >- >- switch (elementSize(type)) { >- case 1: >- store8(earlyScratch, BaseIndex(lateScratch, property, TimesOne)); >- break; >- case 2: >- store16(earlyScratch, BaseIndex(lateScratch, property, TimesTwo)); >- break; >- case 4: >- store32(earlyScratch, BaseIndex(lateScratch, property, TimesFour)); >- break; >- default: >- CRASH(); >- } >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitFloatTypedArrayPutByVal(Instruction* currentInstruction, PatchableJump& badType, TypedArrayType type) >-{ >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- ASSERT(isFloat(type)); >- >- int value = currentInstruction[3].u.operand; >- >-#if USE(JSVALUE64) >- RegisterID base = regT0; >- RegisterID property = regT1; >- RegisterID earlyScratch = regT3; >- RegisterID lateScratch = regT2; >- RegisterID lateScratch2 = regT4; >-#else >- RegisterID base = regT0; >- RegisterID property = regT2; >- RegisterID earlyScratch = regT3; >- RegisterID lateScratch = regT1; >- RegisterID lateScratch2 = regT4; >-#endif >- >- JumpList slowCases; >- >- load8(Address(base, JSCell::typeInfoTypeOffset()), earlyScratch); >- badType = patchableBranch32(NotEqual, earlyScratch, TrustedImm32(typeForTypedArrayType(type))); >- Jump inBounds = branch32(Below, property, Address(base, JSArrayBufferView::offsetOfLength())); >- 
emitArrayProfileOutOfBoundsSpecialCase(profile); >- slowCases.append(jump()); >- inBounds.link(this); >- >-#if USE(JSVALUE64) >- emitGetVirtualRegister(value, earlyScratch); >- Jump doubleCase = branchIfNotInt32(earlyScratch); >- convertInt32ToDouble(earlyScratch, fpRegT0); >- Jump ready = jump(); >- doubleCase.link(this); >- slowCases.append(branchIfNotNumber(earlyScratch)); >- add64(tagTypeNumberRegister, earlyScratch); >- move64ToDouble(earlyScratch, fpRegT0); >- ready.link(this); >-#else >- emitLoad(value, lateScratch, earlyScratch); >- Jump doubleCase = branchIfNotInt32(lateScratch); >- convertInt32ToDouble(earlyScratch, fpRegT0); >- Jump ready = jump(); >- doubleCase.link(this); >- slowCases.append(branch32(Above, lateScratch, TrustedImm32(JSValue::LowestTag))); >- moveIntsToDouble(earlyScratch, lateScratch, fpRegT0, fpRegT1); >- ready.link(this); >-#endif >- >- // We would be loading this into base as in get_by_val, except that the slow >- // path expects the base to be unclobbered. >- loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), lateScratch); >- cageConditionally(Gigacage::Primitive, lateScratch, lateScratch2); >- >- switch (elementSize(type)) { >- case 4: >- convertDoubleToFloat(fpRegT0, fpRegT0); >- storeFloat(fpRegT0, BaseIndex(lateScratch, property, TimesFour)); >- break; >- case 8: >- storeDouble(fpRegT0, BaseIndex(lateScratch, property, TimesEight)); >- break; >- default: >- CRASH(); >- } >- >- return slowCases; >-} >- >-} // namespace JSC >- >-#endif // ENABLE(JIT) >diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp >deleted file mode 100644 >index f973aefab22ada789452b59bda704fe7995b9968..0000000000000000000000000000000000000000 >--- a/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp >+++ /dev/null >@@ -1,1163 +0,0 @@ >-/* >- * Copyright (C) 2008-2018 Apple Inc. All rights reserved. >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
>- */ >- >-#include "config.h" >- >-#if ENABLE(JIT) >-#if USE(JSVALUE32_64) >-#include "JIT.h" >- >-#include "CodeBlock.h" >-#include "DirectArguments.h" >-#include "GCAwareJITStubRoutine.h" >-#include "InterpreterInlines.h" >-#include "JITInlines.h" >-#include "JSArray.h" >-#include "JSFunction.h" >-#include "JSLexicalEnvironment.h" >-#include "LinkBuffer.h" >-#include "ResultType.h" >-#include "SlowPathCall.h" >-#include "StructureStubInfo.h" >-#include <wtf/StringPrintStream.h> >- >- >-namespace JSC { >- >-void JIT::emit_op_put_getter_by_id(Instruction* currentInstruction) >-{ >- int base = currentInstruction[1].u.operand; >- int property = currentInstruction[2].u.operand; >- int options = currentInstruction[3].u.operand; >- int getter = currentInstruction[4].u.operand; >- >- emitLoadPayload(base, regT1); >- emitLoadPayload(getter, regT3); >- callOperation(operationPutGetterById, regT1, m_codeBlock->identifier(property).impl(), options, regT3); >-} >- >-void JIT::emit_op_put_setter_by_id(Instruction* currentInstruction) >-{ >- int base = currentInstruction[1].u.operand; >- int property = currentInstruction[2].u.operand; >- int options = currentInstruction[3].u.operand; >- int setter = currentInstruction[4].u.operand; >- >- emitLoadPayload(base, regT1); >- emitLoadPayload(setter, regT3); >- callOperation(operationPutSetterById, regT1, m_codeBlock->identifier(property).impl(), options, regT3); >-} >- >-void JIT::emit_op_put_getter_setter_by_id(Instruction* currentInstruction) >-{ >- int base = currentInstruction[1].u.operand; >- int property = currentInstruction[2].u.operand; >- int attribute = currentInstruction[3].u.operand; >- int getter = currentInstruction[4].u.operand; >- int setter = currentInstruction[5].u.operand; >- >- emitLoadPayload(base, regT1); >- emitLoadPayload(getter, regT3); >- emitLoadPayload(setter, regT4); >- callOperation(operationPutGetterSetter, regT1, m_codeBlock->identifier(property).impl(), attribute, regT3, regT4); >-} >- >-void JIT::emit_op_put_getter_by_val(Instruction* currentInstruction) >-{ >- int base = currentInstruction[1].u.operand; >- int property = currentInstruction[2].u.operand; >- int32_t attributes = currentInstruction[3].u.operand; >- int getter = currentInstruction[4].u.operand; >- >- emitLoadPayload(base, regT2); >- emitLoad(property, regT1, regT0); >- emitLoadPayload(getter, regT3); >- callOperation(operationPutGetterByVal, regT2, JSValueRegs(regT1, regT0), attributes, regT3); >-} >- >-void JIT::emit_op_put_setter_by_val(Instruction* currentInstruction) >-{ >- int base = currentInstruction[1].u.operand; >- int property = currentInstruction[2].u.operand; >- int32_t attributes = currentInstruction[3].u.operand; >- int getter = currentInstruction[4].u.operand; >- >- emitLoadPayload(base, regT2); >- emitLoad(property, regT1, regT0); >- emitLoadPayload(getter, regT3); >- callOperation(operationPutSetterByVal, regT2, JSValueRegs(regT1, regT0), attributes, regT3); >-} >- >-void JIT::emit_op_del_by_id(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- emitLoad(base, regT1, regT0); >- callOperation(operationDeleteByIdJSResult, dst, JSValueRegs(regT1, regT0), m_codeBlock->identifier(property).impl()); >-} >- >-void JIT::emit_op_del_by_val(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- 
emitLoad2(base, regT1, regT0, property, regT3, regT2); >- callOperation(operationDeleteByValJSResult, dst, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); >-} >- >-void JIT::emit_op_get_by_val(Instruction* currentInstruction) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >- >- emitLoad2(base, regT1, regT0, property, regT3, regT2); >- >- emitJumpSlowCaseIfNotJSCell(base, regT1); >- PatchableJump notIndex = patchableBranch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag)); >- addSlowCase(notIndex); >- emitArrayProfilingSiteWithCell(regT0, regT1, profile); >- and32(TrustedImm32(IndexingShapeMask), regT1); >- >- PatchableJump badType; >- JumpList slowCases; >- >- JITArrayMode mode = chooseArrayMode(profile); >- switch (mode) { >- case JITInt32: >- slowCases = emitInt32GetByVal(currentInstruction, badType); >- break; >- case JITDouble: >- slowCases = emitDoubleGetByVal(currentInstruction, badType); >- break; >- case JITContiguous: >- slowCases = emitContiguousGetByVal(currentInstruction, badType); >- break; >- case JITArrayStorage: >- slowCases = emitArrayStorageGetByVal(currentInstruction, badType); >- break; >- default: >- CRASH(); >- } >- >- addSlowCase(badType); >- addSlowCase(slowCases); >- >- Label done = label(); >- >- if (!ASSERT_DISABLED) { >- Jump resultOK = branchIfNotEmpty(regT1); >- abortWithReason(JITGetByValResultIsNotEmpty); >- resultOK.link(this); >- } >- >- emitValueProfilingSite(); >- emitStore(dst, regT1, regT0); >- >- Label nextHotPath = label(); >- >- m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, notIndex, badType, mode, profile, done, nextHotPath)); >-} >- >-JIT::JumpList JIT::emitContiguousLoad(Instruction*, PatchableJump& badType, IndexingType expectedShape) >-{ >- JumpList slowCases; >- >- badType = patchableBranch32(NotEqual, regT1, TrustedImm32(expectedShape)); >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >- slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfPublicLength()))); >- load32(BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); // tag >- load32(BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); // payload >- slowCases.append(branchIfEmpty(regT1)); >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitDoubleLoad(Instruction*, PatchableJump& badType) >-{ >- JumpList slowCases; >- >- badType = patchableBranch32(NotEqual, regT1, TrustedImm32(DoubleShape)); >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >- slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfPublicLength()))); >- loadDouble(BaseIndex(regT3, regT2, TimesEight), fpRegT0); >- slowCases.append(branchDouble(DoubleNotEqualOrUnordered, fpRegT0, fpRegT0)); >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitArrayStorageLoad(Instruction*, PatchableJump& badType) >-{ >- JumpList slowCases; >- >- add32(TrustedImm32(-ArrayStorageShape), regT1, regT3); >- badType = patchableBranch32(Above, regT3, TrustedImm32(SlowPutArrayStorageShape - ArrayStorageShape)); >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >- slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, ArrayStorage::vectorLengthOffset()))); >- load32(BaseIndex(regT3, regT2, TimesEight, 
ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), regT1); // tag >- load32(BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT0); // payload >- slowCases.append(branchIfEmpty(regT1)); >- >- return slowCases; >-} >- >-JITGetByIdGenerator JIT::emitGetByValWithCachedId(ByValInfo* byValInfo, Instruction* currentInstruction, const Identifier& propertyName, Jump& fastDoneCase, Jump& slowDoneCase, JumpList& slowCases) >-{ >- int dst = currentInstruction[1].u.operand; >- >- // base: tag(regT1), payload(regT0) >- // property: tag(regT3), payload(regT2) >- // scratch: regT4 >- >- slowCases.append(branchIfNotCell(regT3)); >- emitByValIdentifierCheck(byValInfo, regT2, regT4, propertyName, slowCases); >- >- JITGetByIdGenerator gen( >- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), >- propertyName.impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get); >- gen.generateFastPath(*this); >- >- fastDoneCase = jump(); >- >- Label coldPathBegin = label(); >- gen.slowPathJump().link(this); >- >- Call call = callOperationWithProfile(operationGetByIdOptimize, dst, gen.stubInfo(), JSValueRegs(regT1, regT0), propertyName.impl()); >- gen.reportSlowPathCall(coldPathBegin, call); >- slowDoneCase = jump(); >- >- return gen; >-} >- >-void JIT::emitSlow_op_get_by_val(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) >-{ >- int dst = currentInstruction[1].u.operand; >- int base = currentInstruction[2].u.operand; >- int property = currentInstruction[3].u.operand; >- ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo; >- >- linkSlowCaseIfNotJSCell(iter, base); // base cell check >- linkSlowCase(iter); // property int32 check >- >- Jump nonCell = jump(); >- linkSlowCase(iter); // base array check >- Jump notString = branchIfNotString(regT0); >- emitNakedCall(CodeLocationLabel<NoPtrTag>(m_vm->getCTIStub(stringGetByValGenerator).retaggedCode<NoPtrTag>())); >- Jump failed = branchTestPtr(Zero, regT0); >- emitStoreCell(dst, regT0); >- emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_get_by_val)); >- failed.link(this); >- notString.link(this); >- nonCell.link(this); >- >- linkSlowCase(iter); // vector length check >- linkSlowCase(iter); // empty value >- >- Label slowPath = label(); >- >- emitLoad(base, regT1, regT0); >- emitLoad(property, regT3, regT2); >- Call call = callOperation(operationGetByValOptimize, dst, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2), byValInfo); >- >- m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; >- m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call; >- m_byValInstructionIndex++; >- >- emitValueProfilingSite(); >-} >- >-void JIT::emit_op_put_by_val(Instruction* currentInstruction) >-{ >- int base = currentInstruction[1].u.operand; >- int property = currentInstruction[2].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- ByValInfo* byValInfo = m_codeBlock->addByValInfo(); >- >- emitLoad2(base, regT1, regT0, property, regT3, regT2); >- >- emitJumpSlowCaseIfNotJSCell(base, regT1); >- PatchableJump notIndex = patchableBranch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag)); >- addSlowCase(notIndex); >- emitArrayProfilingSiteWithCell(regT0, regT1, profile); >- and32(TrustedImm32(IndexingShapeMask), regT1); >- >- PatchableJump badType; >- JumpList slowCases; >- >- JITArrayMode mode = 
chooseArrayMode(profile); >- switch (mode) { >- case JITInt32: >- slowCases = emitInt32PutByVal(currentInstruction, badType); >- break; >- case JITDouble: >- slowCases = emitDoublePutByVal(currentInstruction, badType); >- break; >- case JITContiguous: >- slowCases = emitContiguousPutByVal(currentInstruction, badType); >- break; >- case JITArrayStorage: >- slowCases = emitArrayStoragePutByVal(currentInstruction, badType); >- break; >- default: >- CRASH(); >- break; >- } >- >- addSlowCase(badType); >- addSlowCase(slowCases); >- >- Label done = label(); >- >- m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeOffset, notIndex, badType, mode, profile, done, done)); >-} >- >-JIT::JumpList JIT::emitGenericContiguousPutByVal(Instruction* currentInstruction, PatchableJump& badType, IndexingType indexingShape) >-{ >- int base = currentInstruction[1].u.operand; >- int value = currentInstruction[3].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- >- JumpList slowCases; >- >- badType = patchableBranch32(NotEqual, regT1, TrustedImm32(ContiguousShape)); >- >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >- Jump outOfBounds = branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfPublicLength())); >- >- Label storeResult = label(); >- emitLoad(value, regT1, regT0); >- switch (indexingShape) { >- case Int32Shape: >- slowCases.append(branchIfNotInt32(regT1)); >- store32(regT0, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload))); >- store32(regT1, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag))); >- break; >- case ContiguousShape: >- store32(regT0, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload))); >- store32(regT1, BaseIndex(regT3, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag))); >- emitLoad(base, regT2, regT3); >- emitWriteBarrier(base, value, ShouldFilterValue); >- break; >- case DoubleShape: { >- Jump notInt = branchIfNotInt32(regT1); >- convertInt32ToDouble(regT0, fpRegT0); >- Jump ready = jump(); >- notInt.link(this); >- moveIntsToDouble(regT0, regT1, fpRegT0, fpRegT1); >- slowCases.append(branchDouble(DoubleNotEqualOrUnordered, fpRegT0, fpRegT0)); >- ready.link(this); >- storeDouble(fpRegT0, BaseIndex(regT3, regT2, TimesEight)); >- break; >- } >- default: >- CRASH(); >- break; >- } >- >- Jump done = jump(); >- >- outOfBounds.link(this); >- slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, Butterfly::offsetOfVectorLength()))); >- >- emitArrayProfileStoreToHoleSpecialCase(profile); >- >- add32(TrustedImm32(1), regT2, regT1); >- store32(regT1, Address(regT3, Butterfly::offsetOfPublicLength())); >- jump().linkTo(storeResult, this); >- >- done.link(this); >- >- return slowCases; >-} >- >-JIT::JumpList JIT::emitArrayStoragePutByVal(Instruction* currentInstruction, PatchableJump& badType) >-{ >- int base = currentInstruction[1].u.operand; >- int value = currentInstruction[3].u.operand; >- ArrayProfile* profile = currentInstruction[4].u.arrayProfile; >- >- JumpList slowCases; >- >- badType = patchableBranch32(NotEqual, regT1, TrustedImm32(ArrayStorageShape)); >- >- loadPtr(Address(regT0, JSObject::butterflyOffset()), regT3); >- slowCases.append(branch32(AboveOrEqual, regT2, Address(regT3, ArrayStorage::vectorLengthOffset()))); >- >- Jump empty = branch32(Equal, BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), TrustedImm32(JSValue::EmptyValueTag)); >- >- Label 
storeResult(this);
>-    emitLoad(value, regT1, regT0);
>-    store32(regT0, BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload))); // payload
>-    store32(regT1, BaseIndex(regT3, regT2, TimesEight, ArrayStorage::vectorOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag))); // tag
>-    Jump end = jump();
>-
>-    empty.link(this);
>-    emitArrayProfileStoreToHoleSpecialCase(profile);
>-    add32(TrustedImm32(1), Address(regT3, OBJECT_OFFSETOF(ArrayStorage, m_numValuesInVector)));
>-    branch32(Below, regT2, Address(regT3, ArrayStorage::lengthOffset())).linkTo(storeResult, this);
>-
>-    add32(TrustedImm32(1), regT2, regT0);
>-    store32(regT0, Address(regT3, ArrayStorage::lengthOffset()));
>-    jump().linkTo(storeResult, this);
>-
>-    end.link(this);
>-
>-    emitWriteBarrier(base, value, ShouldFilterValue);
>-
>-    return slowCases;
>-}
>-
>-JITPutByIdGenerator JIT::emitPutByValWithCachedId(ByValInfo* byValInfo, Instruction* currentInstruction, PutKind putKind, const Identifier& propertyName, JumpList& doneCases, JumpList& slowCases)
>-{
>-    // base: tag(regT1), payload(regT0)
>-    // property: tag(regT3), payload(regT2)
>-
>-    int base = currentInstruction[1].u.operand;
>-    int value = currentInstruction[3].u.operand;
>-
>-    slowCases.append(branchIfNotCell(regT3));
>-    emitByValIdentifierCheck(byValInfo, regT2, regT2, propertyName, slowCases);
>-
>-    // Write barrier breaks the registers. So after issuing the write barrier,
>-    // reload the registers.
>-    emitWriteBarrier(base, value, ShouldFilterBase);
>-    emitLoadPayload(base, regT0);
>-    emitLoad(value, regT3, regT2);
>-
>-    JITPutByIdGenerator gen(
>-        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
>-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT3, regT2), regT1, m_codeBlock->ecmaMode(), putKind);
>-    gen.generateFastPath(*this);
>-    doneCases.append(jump());
>-
>-    Label coldPathBegin = label();
>-    gen.slowPathJump().link(this);
>-
>-    // JITPutByIdGenerator only preserve the value and the base's payload, we have to reload the tag.
>-    emitLoadTag(base, regT1);
>-
>-    Call call = callOperation(gen.slowPathFunction(), gen.stubInfo(), JSValueRegs(regT3, regT2), JSValueRegs(regT1, regT0), propertyName.impl());
>-    gen.reportSlowPathCall(coldPathBegin, call);
>-    doneCases.append(jump());
>-
>-    return gen;
>-}
>-
>-void JIT::emitSlow_op_put_by_val(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    int base = currentInstruction[1].u.operand;
>-    int property = currentInstruction[2].u.operand;
>-    int value = currentInstruction[3].u.operand;
>-    ArrayProfile* profile = currentInstruction[4].u.arrayProfile;
>-    ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo;
>-
>-    linkSlowCaseIfNotJSCell(iter, base); // base cell check
>-    linkSlowCase(iter); // property int32 check
>-    linkSlowCase(iter); // base not array check
>-
>-    JITArrayMode mode = chooseArrayMode(profile);
>-    switch (mode) {
>-    case JITInt32:
>-    case JITDouble:
>-        linkSlowCase(iter); // value type check
>-        break;
>-    default:
>-        break;
>-    }
>-
>-    Jump skipProfiling = jump();
>-    linkSlowCase(iter); // out of bounds
>-    emitArrayProfileOutOfBoundsSpecialCase(profile);
>-    skipProfiling.link(this);
>-
>-    Label slowPath = label();
>-
>-    bool isDirect = Interpreter::getOpcodeID(currentInstruction->u.opcode) == op_put_by_val_direct;
>-
>-#if CPU(X86)
>-    // FIXME: We only have 5 temp registers, but need 6 to make this call, therefore we materialize
>-    // our own call. When we finish moving JSC to the C call stack, we'll get another register so
>-    // we can use the normal case.
>-    unsigned pokeOffset = 0;
>-    poke(GPRInfo::callFrameRegister, pokeOffset++);
>-    emitLoad(base, regT0, regT1);
>-    poke(regT1, pokeOffset++);
>-    poke(regT0, pokeOffset++);
>-    emitLoad(property, regT0, regT1);
>-    poke(regT1, pokeOffset++);
>-    poke(regT0, pokeOffset++);
>-    emitLoad(value, regT0, regT1);
>-    poke(regT1, pokeOffset++);
>-    poke(regT0, pokeOffset++);
>-    poke(TrustedImmPtr(byValInfo), pokeOffset++);
>-    Call call = appendCallWithExceptionCheck(isDirect ? operationDirectPutByValOptimize : operationPutByValOptimize);
>-#else
>-    // The register selection below is chosen to reduce register swapping on ARM.
>-    // Swapping shouldn't happen on other platforms.
>-    emitLoad(base, regT2, regT1);
>-    emitLoad(property, regT3, regT0);
>-    emitLoad(value, regT5, regT4);
>-    Call call = callOperation(isDirect ? operationDirectPutByValOptimize : operationPutByValOptimize, JSValueRegs(regT2, regT1), JSValueRegs(regT3, regT0), JSValueRegs(regT5, regT4), byValInfo);
>-#endif
>-
>-    m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath;
>-    m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call;
>-    m_byValInstructionIndex++;
>-}
>-
>-void JIT::emit_op_try_get_by_id(Instruction* currentInstruction)
>-{
>-    int dst = currentInstruction[1].u.operand;
>-    int base = currentInstruction[2].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
>-
>-    emitLoad(base, regT1, regT0);
>-    emitJumpSlowCaseIfNotJSCell(base, regT1);
>-
>-    JITGetByIdGenerator gen(
>-        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
>-        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::TryGet);
>-    gen.generateFastPath(*this);
>-    addSlowCase(gen.slowPathJump());
>-    m_getByIds.append(gen);
>-
>-    emitValueProfilingSite();
>-    emitStore(dst, regT1, regT0);
>-}
>-
>-void JIT::emitSlow_op_try_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    linkAllSlowCases(iter);
>-
>-    int resultVReg = currentInstruction[1].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
>-
>-    JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
>-
>-    Label coldPathBegin = label();
>-
>-    Call call = callOperation(operationTryGetByIdOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), ident->impl());
>-
>-    gen.reportSlowPathCall(coldPathBegin, call);
>-}
>-
>-
>-void JIT::emit_op_get_by_id_direct(Instruction* currentInstruction)
>-{
>-    int dst = currentInstruction[1].u.operand;
>-    int base = currentInstruction[2].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
>-
>-    emitLoad(base, regT1, regT0);
>-    emitJumpSlowCaseIfNotJSCell(base, regT1);
>-
>-    JITGetByIdGenerator gen(
>-        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
>-        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetDirect);
>-    gen.generateFastPath(*this);
>-    addSlowCase(gen.slowPathJump());
>-    m_getByIds.append(gen);
>-
>-    emitValueProfilingSite();
>-    emitStore(dst, regT1, regT0);
>-}
>-
>-void JIT::emitSlow_op_get_by_id_direct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    linkAllSlowCases(iter);
>-
>-    int resultVReg = currentInstruction[1].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
>-
>-    JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
>-
>-    Label coldPathBegin = label();
>-
>-    Call call = callOperationWithProfile(operationGetByIdDirectOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), ident->impl());
>-
>-    gen.reportSlowPathCall(coldPathBegin, call);
>-}
>-
>-
>-void JIT::emit_op_get_by_id(Instruction* currentInstruction)
>-{
>-    int dst = currentInstruction[1].u.operand;
>-    int base = currentInstruction[2].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
>-
>-    emitLoad(base, regT1, regT0);
>-    emitJumpSlowCaseIfNotJSCell(base, regT1);
>-
>-    if (*ident == m_vm->propertyNames->length && shouldEmitProfiling())
>-        emitArrayProfilingSiteForBytecodeIndexWithCell(regT0, regT2, m_bytecodeOffset);
>-
>-    JITGetByIdGenerator gen(
>-        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
>-        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
>-    gen.generateFastPath(*this);
>-    addSlowCase(gen.slowPathJump());
>-    m_getByIds.append(gen);
>-
>-    emitValueProfilingSite();
>-    emitStore(dst, regT1, regT0);
>-}
>-
>-void JIT::emitSlow_op_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    linkAllSlowCases(iter);
>-
>-    int resultVReg = currentInstruction[1].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
>-
>-    JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
>-
>-    Label coldPathBegin = label();
>-
>-    Call call = callOperationWithProfile(operationGetByIdOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), ident->impl());
>-
>-    gen.reportSlowPathCall(coldPathBegin, call);
>-}
>-
>-void JIT::emit_op_get_by_id_with_this(Instruction* currentInstruction)
>-{
>-    int dst = currentInstruction[1].u.operand;
>-    int base = currentInstruction[2].u.operand;
>-    int thisVReg = currentInstruction[3].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[4].u.operand));
>-
>-    emitLoad(base, regT1, regT0);
>-    emitLoad(thisVReg, regT4, regT3);
>-    emitJumpSlowCaseIfNotJSCell(base, regT1);
>-    emitJumpSlowCaseIfNotJSCell(thisVReg, regT4);
>-
>-    JITGetByIdWithThisGenerator gen(
>-        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
>-        ident->impl(), JSValueRegs(regT1, regT0), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT4, regT3), AccessType::GetWithThis);
>-    gen.generateFastPath(*this);
>-    addSlowCase(gen.slowPathJump());
>-    m_getByIdsWithThis.append(gen);
>-
>-    emitValueProfilingSite();
>-    emitStore(dst, regT1, regT0);
>-}
>-
>-void JIT::emitSlow_op_get_by_id_with_this(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    linkAllSlowCases(iter);
>-
>-    int resultVReg = currentInstruction[1].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[4].u.operand));
>-
>-    JITGetByIdWithThisGenerator& gen = m_getByIdsWithThis[m_getByIdWithThisIndex++];
>-
>-    Label coldPathBegin = label();
>-
>-    Call call = callOperationWithProfile(operationGetByIdWithThisOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), JSValueRegs(regT4, regT3), ident->impl());
>-
>-    gen.reportSlowPathCall(coldPathBegin, call);
>-}
>-
>-void JIT::emit_op_put_by_id(Instruction* currentInstruction)
>-{
>-    // In order to be able to patch both the Structure, and the object offset, we store one pointer,
>-    // to just after the arguments have been loaded into registers 'hotPathBegin', and we generate code
>-    // such that the Structure & offset are always at the same distance from this.
>-
>-    int base = currentInstruction[1].u.operand;
>-    int value = currentInstruction[3].u.operand;
>-    int direct = currentInstruction[8].u.putByIdFlags & PutByIdIsDirect;
>-
>-    emitLoad2(base, regT1, regT0, value, regT3, regT2);
>-
>-    emitJumpSlowCaseIfNotJSCell(base, regT1);
>-
>-    JITPutByIdGenerator gen(
>-        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
>-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT3, regT2),
>-        regT1, m_codeBlock->ecmaMode(), direct ? Direct : NotDirect);
>-
>-    gen.generateFastPath(*this);
>-    addSlowCase(gen.slowPathJump());
>-
>-    emitWriteBarrier(base, value, ShouldFilterBase);
>-
>-    m_putByIds.append(gen);
>-}
>-
>-void JIT::emitSlow_op_put_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    linkAllSlowCases(iter);
>-
>-    int base = currentInstruction[1].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[2].u.operand));
>-
>-    Label coldPathBegin(this);
>-
>-    // JITPutByIdGenerator only preserve the value and the base's payload, we have to reload the tag.
>-    emitLoadTag(base, regT1);
>-
>-    JITPutByIdGenerator& gen = m_putByIds[m_putByIdIndex++];
>-
>-    Call call = callOperation(
>-        gen.slowPathFunction(), gen.stubInfo(), JSValueRegs(regT3, regT2), JSValueRegs(regT1, regT0), ident->impl());
>-
>-    gen.reportSlowPathCall(coldPathBegin, call);
>-}
>-
>-void JIT::emit_op_in_by_id(Instruction* currentInstruction)
>-{
>-    int dst = currentInstruction[1].u.operand;
>-    int base = currentInstruction[2].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
>-
>-    emitLoad(base, regT1, regT0);
>-    emitJumpSlowCaseIfNotJSCell(base, regT1);
>-
>-    JITInByIdGenerator gen(
>-        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
>-        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0));
>-    gen.generateFastPath(*this);
>-    addSlowCase(gen.slowPathJump());
>-    m_inByIds.append(gen);
>-
>-    emitStore(dst, regT1, regT0);
>-}
>-
>-void JIT::emitSlow_op_in_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    linkAllSlowCases(iter);
>-
>-    int resultVReg = currentInstruction[1].u.operand;
>-    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
>-
>-    JITInByIdGenerator& gen = m_inByIds[m_inByIdIndex++];
>-
>-    Label coldPathBegin = label();
>-
>-    Call call = callOperation(operationInByIdOptimize, resultVReg, gen.stubInfo(), JSValueRegs(regT1, regT0), ident->impl());
>-
>-    gen.reportSlowPathCall(coldPathBegin, call);
>-}
>-
>-void JIT::emitVarInjectionCheck(bool needsVarInjectionChecks)
>-{
>-    if (!needsVarInjectionChecks)
>-        return;
>-    addSlowCase(branch8(Equal, AbsoluteAddress(m_codeBlock->globalObject()->varInjectionWatchpoint()->addressOfState()), TrustedImm32(IsInvalidated)));
>-}
>-
>-void JIT::emitResolveClosure(int dst, int scope, bool needsVarInjectionChecks, unsigned depth)
>-{
>-    emitVarInjectionCheck(needsVarInjectionChecks);
>-    move(TrustedImm32(JSValue::CellTag), regT1);
>-    emitLoadPayload(scope, regT0);
>-    for (unsigned i = 0; i < depth; ++i)
>-        loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0);
>-    emitStore(dst, regT1, regT0);
>-}
>-
>-void JIT::emit_op_resolve_scope(Instruction* currentInstruction)
>-{
>-    int dst = currentInstruction[1].u.operand;
>-    int scope = currentInstruction[2].u.operand;
>-    ResolveType resolveType = static_cast<ResolveType>(currentInstruction[4].u.operand);
>-    unsigned depth = currentInstruction[5].u.operand;
>-    auto emitCode = [&] (ResolveType resolveType) {
>-        switch (resolveType) {
>-        case GlobalProperty:
>-        case GlobalVar:
>-        case GlobalLexicalVar:
>-        case GlobalPropertyWithVarInjectionChecks:
>-        case GlobalVarWithVarInjectionChecks:
>-        case GlobalLexicalVarWithVarInjectionChecks: {
>-            JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_codeBlock);
>-            RELEASE_ASSERT(constantScope);
>-            emitVarInjectionCheck(needsVarInjectionChecks(resolveType));
>-            move(TrustedImm32(JSValue::CellTag), regT1);
>-            move(TrustedImmPtr(constantScope), regT0);
>-            emitStore(dst, regT1, regT0);
>-            break;
>-        }
>-        case ClosureVar:
>-        case ClosureVarWithVarInjectionChecks:
>-            emitResolveClosure(dst, scope, needsVarInjectionChecks(resolveType), depth);
>-            break;
>-        case ModuleVar:
>-            move(TrustedImm32(JSValue::CellTag), regT1);
>-            move(TrustedImmPtr(currentInstruction[6].u.jsCell.get()), regT0);
>-            emitStore(dst, regT1, regT0);
>-            break;
>-        case Dynamic:
>-            addSlowCase(jump());
>-            break;
>-        case LocalClosureVar:
>-        case UnresolvedProperty:
>-        case UnresolvedPropertyWithVarInjectionChecks:
>-            RELEASE_ASSERT_NOT_REACHED();
>-        }
>-    };
>-    switch (resolveType) {
>-    case UnresolvedProperty:
>-    case UnresolvedPropertyWithVarInjectionChecks: {
>-        JumpList skipToEnd;
>-        load32(&currentInstruction[4], regT0);
>-
>-        Jump notGlobalProperty = branch32(NotEqual, regT0, TrustedImm32(GlobalProperty));
>-        emitCode(GlobalProperty);
>-        skipToEnd.append(jump());
>-        notGlobalProperty.link(this);
>-
>-        Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks));
>-        emitCode(GlobalPropertyWithVarInjectionChecks);
>-        skipToEnd.append(jump());
>-        notGlobalPropertyWithVarInjections.link(this);
>-
>-        Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar));
>-        emitCode(GlobalLexicalVar);
>-        skipToEnd.append(jump());
>-        notGlobalLexicalVar.link(this);
>-
>-        Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks));
>-        emitCode(GlobalLexicalVarWithVarInjectionChecks);
>-        skipToEnd.append(jump());
>-        notGlobalLexicalVarWithVarInjections.link(this);
>-
>-        addSlowCase(jump());
>-        skipToEnd.link(this);
>-        break;
>-    }
>-
>-    default:
>-        emitCode(resolveType);
>-        break;
>-    }
>-}
>-
>-void JIT::emitLoadWithStructureCheck(int scope, Structure** structureSlot)
>-{
>-    emitLoad(scope, regT1, regT0);
>-    loadPtr(structureSlot, regT2);
>-    addSlowCase(branchPtr(NotEqual, Address(regT0, JSCell::structureIDOffset()), regT2));
>-}
>-
>-void JIT::emitGetVarFromPointer(JSValue* operand, GPRReg tag, GPRReg payload)
>-{
>-    uintptr_t rawAddress = bitwise_cast<uintptr_t>(operand);
>-    load32(bitwise_cast<void*>(rawAddress + TagOffset), tag);
>-    load32(bitwise_cast<void*>(rawAddress + PayloadOffset), payload);
>-}
>-void JIT::emitGetVarFromIndirectPointer(JSValue** operand, GPRReg tag, GPRReg payload)
>-{
>-    loadPtr(operand, payload);
>-    load32(Address(payload, TagOffset), tag);
>-    load32(Address(payload, PayloadOffset), payload);
>-}
>-
>-void JIT::emitGetClosureVar(int scope, uintptr_t operand)
>-{
>-    emitLoad(scope, regT1, regT0);
>-    load32(Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register) + TagOffset), regT1);
>-    load32(Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register) + PayloadOffset), regT0);
>-}
>-
>-void JIT::emit_op_get_from_scope(Instruction* currentInstruction)
>-{
>-    int dst = currentInstruction[1].u.operand;
>-    int scope = currentInstruction[2].u.operand;
>-    ResolveType resolveType = GetPutInfo(currentInstruction[4].u.operand).resolveType();
>-    Structure** structureSlot = currentInstruction[5].u.structure.slot();
>-    uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&currentInstruction[6].u.pointer);
>-
>-    auto emitCode = [&] (ResolveType resolveType, bool indirectLoadForOperand) {
>-        switch (resolveType) {
>-        case GlobalProperty:
>-        case GlobalPropertyWithVarInjectionChecks: {
>-            emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection.
>-            GPRReg base = regT2;
>-            GPRReg resultTag = regT1;
>-            GPRReg resultPayload = regT0;
>-            GPRReg offset = regT3;
>-
>-            move(regT0, base);
>-            load32(operandSlot, offset);
>-            if (!ASSERT_DISABLED) {
>-                Jump isOutOfLine = branch32(GreaterThanOrEqual, offset, TrustedImm32(firstOutOfLineOffset));
>-                abortWithReason(JITOffsetIsNotOutOfLine);
>-                isOutOfLine.link(this);
>-            }
>-            loadPtr(Address(base, JSObject::butterflyOffset()), base);
>-            neg32(offset);
>-            load32(BaseIndex(base, offset, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload) + (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue)), resultPayload);
>-            load32(BaseIndex(base, offset, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag) + (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue)), resultTag);
>-            break;
>-        }
>-        case GlobalVar:
>-        case GlobalVarWithVarInjectionChecks:
>-        case GlobalLexicalVar:
>-        case GlobalLexicalVarWithVarInjectionChecks:
>-            emitVarInjectionCheck(needsVarInjectionChecks(resolveType));
>-            if (indirectLoadForOperand)
>-                emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT1, regT0);
>-            else
>-                emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT1, regT0);
>-            if (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks) // TDZ check.
>-                addSlowCase(branchIfEmpty(regT1));
>-            break;
>-        case ClosureVar:
>-        case ClosureVarWithVarInjectionChecks:
>-            emitVarInjectionCheck(needsVarInjectionChecks(resolveType));
>-            emitGetClosureVar(scope, *operandSlot);
>-            break;
>-        case Dynamic:
>-            addSlowCase(jump());
>-            break;
>-        case ModuleVar:
>-        case LocalClosureVar:
>-        case UnresolvedProperty:
>-        case UnresolvedPropertyWithVarInjectionChecks:
>-            RELEASE_ASSERT_NOT_REACHED();
>-        }
>-    };
>-
>-    switch (resolveType) {
>-    case UnresolvedProperty:
>-    case UnresolvedPropertyWithVarInjectionChecks: {
>-        JumpList skipToEnd;
>-        load32(&currentInstruction[4], regT0);
>-        and32(TrustedImm32(GetPutInfo::typeBits), regT0); // Load ResolveType into T0
>-
>-        Jump isGlobalProperty = branch32(Equal, regT0, TrustedImm32(GlobalProperty));
>-        Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks));
>-        isGlobalProperty.link(this);
>-        emitCode(GlobalProperty, false);
>-        skipToEnd.append(jump());
>-        notGlobalPropertyWithVarInjections.link(this);
>-
>-        Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar));
>-        emitCode(GlobalLexicalVar, true);
>-        skipToEnd.append(jump());
>-        notGlobalLexicalVar.link(this);
>-
>-        Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks));
>-        emitCode(GlobalLexicalVarWithVarInjectionChecks, true);
>-        skipToEnd.append(jump());
>-        notGlobalLexicalVarWithVarInjections.link(this);
>-
>-        addSlowCase(jump());
>-
>-        skipToEnd.link(this);
>-        break;
>-    }
>-
>-    default:
>-        emitCode(resolveType, false);
>-        break;
>-    }
>-    emitValueProfilingSite();
>-    emitStore(dst, regT1, regT0);
>-}
>-
>-void JIT::emitSlow_op_get_from_scope(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    linkAllSlowCases(iter);
>-
>-    int dst = currentInstruction[1].u.operand;
>-    callOperationWithProfile(operationGetFromScope, dst, currentInstruction);
>-}
>-
>-void JIT::emitPutGlobalVariable(JSValue* operand, int value, WatchpointSet* set)
>-{
>-    emitLoad(value, regT1, regT0);
>-    emitNotifyWrite(set);
>-    uintptr_t rawAddress = bitwise_cast<uintptr_t>(operand);
>-    store32(regT1, bitwise_cast<void*>(rawAddress + TagOffset));
>-    store32(regT0, bitwise_cast<void*>(rawAddress + PayloadOffset));
>-}
>-
>-void JIT::emitPutGlobalVariableIndirect(JSValue** addressOfOperand, int value, WatchpointSet** indirectWatchpointSet)
>-{
>-    emitLoad(value, regT1, regT0);
>-    loadPtr(indirectWatchpointSet, regT2);
>-    emitNotifyWrite(regT2);
>-    loadPtr(addressOfOperand, regT2);
>-    store32(regT1, Address(regT2, TagOffset));
>-    store32(regT0, Address(regT2, PayloadOffset));
>-}
>-
>-void JIT::emitPutClosureVar(int scope, uintptr_t operand, int value, WatchpointSet* set)
>-{
>-    emitLoad(value, regT3, regT2);
>-    emitLoad(scope, regT1, regT0);
>-    emitNotifyWrite(set);
>-    store32(regT3, Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register) + TagOffset));
>-    store32(regT2, Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register) + PayloadOffset));
>-}
>-
>-void JIT::emit_op_put_to_scope(Instruction* currentInstruction)
>-{
>-    int scope = currentInstruction[1].u.operand;
>-    int value = currentInstruction[3].u.operand;
>-    GetPutInfo getPutInfo = GetPutInfo(currentInstruction[4].u.operand);
>-    ResolveType resolveType = getPutInfo.resolveType();
>-    Structure** structureSlot = currentInstruction[5].u.structure.slot();
>-    uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&currentInstruction[6].u.pointer);
>-
>-    auto emitCode = [&] (ResolveType resolveType, bool indirectLoadForOperand) {
>-        switch (resolveType) {
>-        case GlobalProperty:
>-        case GlobalPropertyWithVarInjectionChecks: {
>-            emitWriteBarrier(m_codeBlock->globalObject(), value, ShouldFilterValue);
>-            emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection.
>-            emitLoad(value, regT3, regT2);
>-
>-            loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0);
>-            loadPtr(operandSlot, regT1);
>-            negPtr(regT1);
>-            store32(regT3, BaseIndex(regT0, regT1, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)));
>-            store32(regT2, BaseIndex(regT0, regT1, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)));
>-            break;
>-        }
>-        case GlobalVar:
>-        case GlobalVarWithVarInjectionChecks:
>-        case GlobalLexicalVar:
>-        case GlobalLexicalVarWithVarInjectionChecks: {
>-            JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_codeBlock);
>-            RELEASE_ASSERT(constantScope);
>-            emitWriteBarrier(constantScope, value, ShouldFilterValue);
>-            emitVarInjectionCheck(needsVarInjectionChecks(resolveType));
>-            if (!isInitialization(getPutInfo.initializationMode()) && (resolveType == GlobalLexicalVar || resolveType == GlobalLexicalVarWithVarInjectionChecks)) {
>-                // We need to do a TDZ check here because we can't always prove we need to emit TDZ checks statically.
>-                if (indirectLoadForOperand)
>-                    emitGetVarFromIndirectPointer(bitwise_cast<JSValue**>(operandSlot), regT1, regT0);
>-                else
>-                    emitGetVarFromPointer(bitwise_cast<JSValue*>(*operandSlot), regT1, regT0);
>-                addSlowCase(branchIfEmpty(regT1));
>-            }
>-            if (indirectLoadForOperand)
>-                emitPutGlobalVariableIndirect(bitwise_cast<JSValue**>(operandSlot), value, bitwise_cast<WatchpointSet**>(&currentInstruction[5]));
>-            else
>-                emitPutGlobalVariable(bitwise_cast<JSValue*>(*operandSlot), value, currentInstruction[5].u.watchpointSet);
>-            break;
>-        }
>-        case LocalClosureVar:
>-        case ClosureVar:
>-        case ClosureVarWithVarInjectionChecks:
>-            emitWriteBarrier(scope, value, ShouldFilterValue);
>-            emitVarInjectionCheck(needsVarInjectionChecks(resolveType));
>-            emitPutClosureVar(scope, *operandSlot, value, currentInstruction[5].u.watchpointSet);
>-            break;
>-        case ModuleVar:
>-        case Dynamic:
>-            addSlowCase(jump());
>-            break;
>-        case UnresolvedProperty:
>-        case UnresolvedPropertyWithVarInjectionChecks:
>-            RELEASE_ASSERT_NOT_REACHED();
>-        }
>-    };
>-
>-    switch (resolveType) {
>-    case UnresolvedProperty:
>-    case UnresolvedPropertyWithVarInjectionChecks: {
>-        JumpList skipToEnd;
>-        load32(&currentInstruction[4], regT0);
>-        and32(TrustedImm32(GetPutInfo::typeBits), regT0); // Load ResolveType into T0
>-
>-        Jump isGlobalProperty = branch32(Equal, regT0, TrustedImm32(GlobalProperty));
>-        Jump notGlobalPropertyWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalPropertyWithVarInjectionChecks));
>-        isGlobalProperty.link(this);
>-        emitCode(GlobalProperty, false);
>-        skipToEnd.append(jump());
>-        notGlobalPropertyWithVarInjections.link(this);
>-
>-        Jump notGlobalLexicalVar = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVar));
>-        emitCode(GlobalLexicalVar, true);
>-        skipToEnd.append(jump());
>-        notGlobalLexicalVar.link(this);
>-
>-        Jump notGlobalLexicalVarWithVarInjections = branch32(NotEqual, regT0, TrustedImm32(GlobalLexicalVarWithVarInjectionChecks));
>-        emitCode(GlobalLexicalVarWithVarInjectionChecks, true);
>-        skipToEnd.append(jump());
>-        notGlobalLexicalVarWithVarInjections.link(this);
>-
>-        addSlowCase(jump());
>-
>-        skipToEnd.link(this);
>-        break;
>-    }
>-
>-    default:
>-        emitCode(resolveType, false);
>-        break;
>-    }
>-}
>-
>-void JIT::emitSlow_op_put_to_scope(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
>-{
>-    linkAllSlowCases(iter);
>-
>-    GetPutInfo getPutInfo = GetPutInfo(currentInstruction[4].u.operand);
>-    ResolveType resolveType = getPutInfo.resolveType();
>-    if (resolveType == ModuleVar) {
>-        JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_throw_strict_mode_readonly_property_write_error);
>-        slowPathCall.call();
>-    } else
>-        callOperation(operationPutToScope, currentInstruction);
>-}
>-
>-void JIT::emit_op_get_from_arguments(Instruction* currentInstruction)
>-{
>-    int dst = currentInstruction[1].u.operand;
>-    int arguments = currentInstruction[2].u.operand;
>-    int index = currentInstruction[3].u.operand;
>-
>-    emitLoadPayload(arguments, regT0);
>-    load32(Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>) + TagOffset), regT1);
>-    load32(Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>) + PayloadOffset), regT0);
>-    emitValueProfilingSite();
>-    emitStore(dst, regT1, regT0);
>-}
>-
>-void JIT::emit_op_put_to_arguments(Instruction* currentInstruction)
>-{
>-    int arguments = currentInstruction[1].u.operand;
>-    int index = currentInstruction[2].u.operand;
>-    int value = currentInstruction[3].u.operand;
>-
>-    emitWriteBarrier(arguments, value, ShouldFilterValue);
>-
>-    emitLoadPayload(arguments, regT0);
>-    emitLoad(value, regT1, regT2);
>-    store32(regT1, Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>) + TagOffset));
>-    store32(regT2, Address(regT0, DirectArguments::storageOffset() + index * sizeof(WriteBarrier<Unknown>) + PayloadOffset));
>-}
>-
>-} // namespace JSC
>-
>-#endif // USE(JSVALUE32_64)
>-#endif // ENABLE(JIT)