Bug 112141 - LLInt CLoop backend misses Double2Ints() on 32-bit architectures
Summary: LLInt CLoop backend misses Double2Ints() on 32-bit architectures
Status: RESOLVED FIXED
Alias: None
Product: WebKit
Classification: Unclassified
Component: JavaScriptCore
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-03-12 06:01 PDT by Gabor Rapcsanyi
Modified: 2013-03-12 09:03 PDT
CC: 4 users

See Also:


Attachments
proposed fix (2.30 KB, patch)
2013-03-12 06:19 PDT, Gabor Rapcsanyi

Description Gabor Rapcsanyi 2013-03-12 06:01:27 PDT
The LLInt CLoop backend is missing Double2Ints() on 32-bit architectures.
Comment 1 Gabor Rapcsanyi 2013-03-12 06:19:16 PDT
Created attachment 192723 [details]
proposed fix
Comment 2 Filip Pizlo 2013-03-12 08:26:05 PDT
Comment on attachment 192723 [details]
proposed fix

View in context: https://bugs.webkit.org/attachment.cgi?id=192723&action=review

R=me, but it looks like both Ints2Double and Double2Ints are not endianness-friendly.

> Source/JavaScriptCore/llint/LowLevelInterpreter.cpp:130
> +static void Double2Ints(double val, uint32_t& lo, uint32_t& hi)
> +{
> +    union {
> +        double dval;
> +        uint64_t ival64;
> +    } u;
> +    u.dval = val;
> +    hi = static_cast<uint32_t>(u.ival64 >> 32);
> +    lo = static_cast<uint32_t>(u.ival64);
> +}

Is this right?  Does Double2Ints mandate 'hi' and 'lo' being the low-order bits and the high-order bits, or are they supposed to be the first bits and the last bits?
Comment 3 WebKit Review Bot 2013-03-12 08:30:24 PDT
Comment on attachment 192723 [details]
proposed fix

Clearing flags on attachment: 192723

Committed r145551: <http://trac.webkit.org/changeset/145551>
Comment 4 WebKit Review Bot 2013-03-12 08:30:27 PDT
All reviewed patches have been landed.  Closing bug.
Comment 5 Gabor Rapcsanyi 2013-03-12 09:03:04 PDT
(In reply to comment #2)
> (From update of attachment 192723 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=192723&action=review
> 
> R=me, but it looks like both Ints2Double and Double2Ints are not endianness-friendly.
> 

No, they aren't. Should we make them endianness-friendly?

> > Source/JavaScriptCore/llint/LowLevelInterpreter.cpp:130
> > +static void Double2Ints(double val, uint32_t& lo, uint32_t& hi)
> > +{
> > +    union {
> > +        double dval;
> > +        uint64_t ival64;
> > +    } u;
> > +    u.dval = val;
> > +    hi = static_cast<uint32_t>(u.ival64 >> 32);
> > +    lo = static_cast<uint32_t>(u.ival64);
> > +}
> 
> Is this right?  Does Double2Ints mandate 'hi' and 'lo' being the low-order bits and the high-order bits, or are they supposed to be the first bits and the last bits?

I've just made it match Ints2Double. We tried it on the JSC tests and everything passed, so I think it's OK, at least on little-endian architectures.
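[Editor's note] For reference, the review question above is about whether 'hi'/'lo' mean the numerically high/low 32 bits of the IEEE-754 pattern or the first/last bits in memory. A variant that sidesteps the union type-punning (which is technically undefined behavior in C++) and is explicit about numeric significance could look like the sketch below. This is a hypothetical illustration (`double2IntsPortable` is not the name used in the landed patch), not the committed code:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical portable variant of Double2Ints: memcpy makes the
// bit-level reinterpretation well-defined, and the shift means
// 'hi'/'lo' are defined by numeric significance, not memory layout,
// so the result is the same on little- and big-endian targets.
static void double2IntsPortable(double val, uint32_t& lo, uint32_t& hi)
{
    uint64_t bits;
    std::memcpy(&bits, &val, sizeof(bits));  // copy the raw IEEE-754 bit pattern
    hi = static_cast<uint32_t>(bits >> 32);  // high-order 32 bits (sign, exponent, top mantissa)
    lo = static_cast<uint32_t>(bits);        // low-order 32 bits of the mantissa
}
```

For example, the bit pattern of 1.0 is 0x3FF0000000000000, so this yields hi == 0x3FF00000 and lo == 0 on any architecture.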