Bug 125181 - ARM64: Crash in JIT code due to improper reuse of cached memory temp register
Summary: ARM64: Crash in JIT code due to improper reuse of cached memory temp register
Status: RESOLVED FIXED
Alias: None
Product: WebKit
Classification: Unclassified
Component: JavaScriptCore (show other bugs)
Version: 528+ (Nightly build)
Hardware: All
OS: All
Importance: P2 Normal
Assignee: Michael Saboff
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-12-03 14:06 PST by Michael Saboff
Modified: 2013-12-03 15:54 PST (History)

See Also:


Attachments
Patch for landing (4.33 KB, patch)
2013-12-03 14:44 PST, Michael Saboff
ggaren: review+
Details | Formatted Diff | Diff

Description Michael Saboff 2013-12-03 14:06:14 PST
Several of the branchXX() macro assembler functions in MacroAssemblerARM64.h that take an absolute address materialize the address into the memory temp register and then load the result into that same register without invalidating the corresponding temp register cache. For example, consider the code snippet below:

    [ 167] put_to_scope      loc1, maxDepth(@id6), loc2, 65537
             0x15b0d1494:    ldur   x0, [x29, #-24]
             0x15b0d1498:    movz   x17, #24056
             0x15b0d149c:    movk   x17, #11360, lsl #16
             0x15b0d14a0:    movk   x17, #1, lsl #32
             0x15b0d14a4:    ldrb   w1, [x17, xzr]
             0x15b0d14a8:    cmp    w1, #2
             0x15b0d14ac:    b.eq   0x15b0d14fc
             0x15b0d14b0:    movz   x17, #24064  <= Full address materialized into x17 here and in the next two instructions
             0x15b0d14b4:    movk   x17, #11360, lsl #16
             0x15b0d14b8:    movk   x17, #1, lsl #32
             0x15b0d14bc:    ldr    x17, [x17, xzr]  <= Load into x17; the temp register cache entry for x17 should be invalidated here
             0x15b0d14c0:    cmp    x17, x0
             0x15b0d14c4:    b.eq   0x15b0d14fc
             0x15b0d14c8:    movk   x17, #24057  <= This move changes only the lower 16 bits of x17, incorrectly assuming the previously materialized contents are still valid
             0x15b0d14cc:    ldrb   w16, [x17, xzr]
             0x15b0d14d0:    cbnz   w16, 0x15b0d2ba8
             0x15b0d14d4:    mov    w16, #0x2
             0x15b0d14d8:    movz   x17, #24056
             0x15b0d14dc:    movk   x17, #11360, lsl #16
             0x15b0d14e0:    movk   x17, #1, lsl #32
             0x15b0d14e4:    strb   w16, [x17]
             0x15b0d14e8:    movz   x1, #0
             0x15b0d14ec:    movz   x17, #24064
             0x15b0d14f0:    movk   x17, #11360, lsl #16
             0x15b0d14f4:    movk   x17, #1, lsl #32
             0x15b0d14f8:    str    x1, [x17, xzr]
             0x15b0d14fc:    movz   x17, #24720
             0x15b0d1500:    movk   x17, #11345, lsl #16
             0x15b0d1504:    movk   x17, #1, lsl #32
             0x15b0d1508:    str    x0, [x17, xzr]

One fix is to invalidate the cache whenever the destination of an absolute load is the memory temp register itself.
Comment 1 Michael Saboff 2013-12-03 14:44:43 PST
Created attachment 218344 [details]
Patch for landing
Comment 2 Geoffrey Garen 2013-12-03 15:44:07 PST
Comment on attachment 218344 [details]
Patch for landing

r=me
Comment 3 Michael Saboff 2013-12-03 15:54:10 PST
Committed r160056: <http://trac.webkit.org/changeset/160056>