LLInt cloop backend misses Double2Ints() on 32-bit architectures
Created attachment 192723 [details] proposed fix
Comment on attachment 192723 [details] proposed fix
View in context: https://bugs.webkit.org/attachment.cgi?id=192723&action=review

R=me, but it looks like both Ints2Double and Double2Ints are not endianness-friendly.

> Source/JavaScriptCore/llint/LowLevelInterpreter.cpp:130
> +static void Double2Ints(double val, uint32_t& lo, uint32_t& hi)
> +{
> +    union {
> +        double dval;
> +        uint64_t ival64;
> +    } u;
> +    u.dval = val;
> +    hi = static_cast<uint32_t>(u.ival64 >> 32);
> +    lo = static_cast<uint32_t>(u.ival64);
> +}

Is this right? Does Double2Ints mandate 'hi' and 'lo' being the low-order bits and the high-order bits, or are they supposed to be the first bits and the last bits?
Comment on attachment 192723 [details] proposed fix
Clearing flags on attachment: 192723

Committed r145551: <http://trac.webkit.org/changeset/145551>
All reviewed patches have been landed. Closing bug.
(In reply to comment #2)
> (From update of attachment 192723 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=192723&action=review
>
> R=me, but it looks like both Ints2Double and Double2Ints are not endianness-friendly.

Nope, they aren't. Should we make them endianness-friendly?

> > Source/JavaScriptCore/llint/LowLevelInterpreter.cpp:130
> > +static void Double2Ints(double val, uint32_t& lo, uint32_t& hi)
> > +{
> > +    union {
> > +        double dval;
> > +        uint64_t ival64;
> > +    } u;
> > +    u.dval = val;
> > +    hi = static_cast<uint32_t>(u.ival64 >> 32);
> > +    lo = static_cast<uint32_t>(u.ival64);
> > +}
>
> Is this right? Does Double2Ints mandate 'hi' and 'lo' being the low-order bits and the high-order bits, or are they supposed to be the first bits and the last bits?

I just made it match Ints2Double. We tried it on the JSC tests and everything went fine, so I think it's OK for little-endian architectures at least.