WebKit Bugzilla
RESOLVED WONTFIX
98586
Extend random precision to 53bits
https://bugs.webkit.org/show_bug.cgi?id=98586
Yusuke Suzuki
Reported
2012-10-05 23:33:26 PDT
Currently, because the result is produced by dividing a uint32_t, Math.random precision in JSC is 32 bits. So I suggest combining 2 uint32_t random numbers to generate random numbers with 53 bits of precision.
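The proposed combination can be sketched as follows; this is a minimal illustration with a hypothetical function name, using the 27-bit/26-bit split that appears in the first patch, not the actual patch code:

```cpp
#include <cassert>
#include <cstdint>

// Combine two 32-bit draws into a 53-bit integer: keep the high 27 bits
// of the first draw and the high 26 bits of the second, then scale into
// [0, 1). 53 bits is exactly the precision of a double's significand.
double combineTo53Bits(uint32_t first, uint32_t second)
{
    uint32_t a = first >> 5;   // high 27 bits
    uint32_t b = second >> 6;  // high 26 bits
    uint64_t combined = (static_cast<uint64_t>(a) << 26) | b;
    return combined / static_cast<double>(1ULL << 53);
}
```

With both inputs at their maximum, `combined` is 2^53 - 1, so the result stays strictly below 1.0.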
Attachments
Patch (1.52 KB, patch), 2012-10-05 23:37 PDT, Yusuke Suzuki, no flags
Patch (4.58 KB, patch), 2012-10-07 01:57 PDT, Yusuke Suzuki, no flags
Patch (4.89 KB, patch), 2012-10-07 12:59 PDT, Yusuke Suzuki, no flags
Patch (15.01 KB, patch), 2012-11-27 07:52 PST, Yusuke Suzuki, no flags
Patch (15.12 KB, patch), 2012-11-27 08:44 PST, Yusuke Suzuki, no flags
Patch (14.69 KB, patch), 2014-08-01 08:12 PDT, Yusuke Suzuki, no flags
Patch (17.40 KB, patch), 2014-08-01 09:03 PDT, Yusuke Suzuki, no flags
Yusuke Suzuki
Comment 1
2012-10-05 23:37:21 PDT
Created
attachment 167443
[details]
Patch: attached rev1 patch
Oliver Hunt
Comment 2
2012-10-06 10:10:01 PDT
Comment on
attachment 167443
[details]
Patch
What's the performance impact of this?
Geoffrey Garen
Comment 3
2012-10-06 10:36:22 PDT
Comment on
attachment 167443
[details]
Patch
View in context:
https://bugs.webkit.org/attachment.cgi?id=167443&action=review
> Source/JavaScriptCore/ChangeLog:3
> + Extend random precision to 53bits
If this creates an observable benefit, you should be able to write a regression test for it.
> Source/JavaScriptCore/runtime/WeakRandom.h:71
> + uint32_t a = static_cast<uint32_t>(getUint32()) >> 5;
I think this would be clearer as a uint64_t.
> Source/JavaScriptCore/runtime/WeakRandom.h:72
> + uint32_t b = static_cast<uint32_t>(getUint32()) >> 6;
I think this would be clearer as a uint64_t.
> Source/JavaScriptCore/runtime/WeakRandom.h:73
> + uint64_t combined = (static_cast<uint64_t>(a) << 26) | b;
I think it would be clearer just to shift 'a' right by 21 and then left by 32. It's not clear what you're trying to accomplish by removing bits from both values. For example, maybe you think that increases the randomness. But if each bit is random, that's not the case.
Oliver Hunt
Comment 4
2012-10-06 11:44:40 PDT
> I think it would be clearer just to shift 'a' right by 21 and then left by 32. It's not clear what you're trying to accomplish by removing bits from both values. For example, maybe you think that increases the randomness. But if each bit is random, that's not the case.
If you don't remove the overlapping bits, then the likelihood of a bit in the overlapping region being set becomes much higher. Here's your probability table for the overlapping region:

a b -> a | b
0 0    0
0 1    1
1 0    1
1 1    1

So the probability of a given bit in the overlapping region being 1 is 0.75 rather than 0.5, as it should be.
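The bias in that table can be confirmed by exhaustively enumerating a single overlapping bit; a minimal sketch with a hypothetical helper name:

```cpp
#include <cassert>

// Enumerate all four (a, b) combinations for one overlapping bit and
// count how often (a | b) is 1: it happens in 3 of 4 equally likely
// cases, i.e. with probability 0.75 instead of the desired 0.5.
int countOrOnes()
{
    int ones = 0;
    for (int a = 0; a <= 1; ++a) {
        for (int b = 0; b <= 1; ++b) {
            if (a | b)
                ++ones;
        }
    }
    return ones;
}
```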
Oliver Hunt
Comment 5
2012-10-06 11:45:41 PDT
(In reply to
comment #3
)
That said, I'm more concerned about perf: at one point I did generate 53 bits of randomness, but that got killed due to the perf cost.
Geoffrey Garen
Comment 6
2012-10-06 11:46:53 PDT
> If you don't remove the overlapping bits then the likelihood of a bit in the overlapping region being set becomes much higher.
The algorithm I suggested attempts to remove the overlapping bits. Are you saying there's a case where it won't do so?
Yusuke Suzuki
Comment 7
2012-10-07 00:51:23 PDT
Comment on
attachment 167443
[details]
Patch
View in context:
https://bugs.webkit.org/attachment.cgi?id=167443&action=review
Thanks for your review.
>>> Source/JavaScriptCore/ChangeLog:3
>>> + Extend random precision to 53bits
>>
>> If this creates an observable benefit, you should be able to write a regression test for it.
>
> That said, I'm more concerned about perf, as at one point I did generate 53 bits of randomness, but that got killed due to the perf cost.
We can observe that the Math.random().toString(2) result may have a fractional part longer than 32 bits. But because the Math.random() result is random, such a test may be flaky. I'll append a test case to the revised patch.
>> Source/JavaScriptCore/runtime/WeakRandom.h:71
>> + uint32_t a = static_cast<uint32_t>(getUint32()) >> 5;
>
> I think this would be clearer as a uint64_t.
I've got it. I'll create revised patch using uint64_t.
>> Source/JavaScriptCore/runtime/WeakRandom.h:73
>> + uint64_t combined = (static_cast<uint64_t>(a) << 26) | b;
>
> I think it would be clearer just to shift 'a' right by 21 and then left by 32. It's not clear what you're trying to accomplish by removing bits from both values. For example, maybe you think that increases the randomness. But if each bit is random, that's not the case.
I understand it as:
a = v1 >> 15; b = v2; combined = (a << 32) | b;
Is that right?
Yusuke Suzuki
Comment 8
2012-10-07 01:51:40 PDT
(In reply to
comment #7
)
> a = v1 >> 15;
Oops, sorry, that was a typo: >> 11 is correct.
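With that correction, the combination Geoff suggested (shift the first draw right by 11, shift left by 32, OR in the second draw) can be sketched like this; the function name is illustrative, not the actual patch code:

```cpp
#include <cassert>
#include <cstdint>

// Take the high 21 bits of one draw and all 32 bits of another, packing
// them into a non-overlapping 53-bit integer, then scale into [0, 1).
double combineTopAndBottom(uint32_t v1, uint32_t v2)
{
    uint64_t a = v1 >> 11;              // high 21 bits of the first draw
    uint64_t combined = (a << 32) | v2; // 21 + 32 = 53 bits, no overlap
    return combined / static_cast<double>(1ULL << 53);
}
```

Because no bit position receives input from both draws, the 0.75 bias discussed above cannot occur.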
Yusuke Suzuki
Comment 9
2012-10-07 01:57:11 PDT
Created
attachment 167475
[details]
Patch: attached rev2 patch
Yusuke Suzuki
Comment 10
2012-10-07 02:00:22 PDT
(In reply to
comment #2
)
> (From update of
attachment 167443
[details]
) > What's the performance impact of this?
I've run SunSpider.

Before applying patch:
============================================
RESULTS (means and 95% confidence intervals)
--------------------------------------------
Total: 143.6ms +/- 2.7%
--------------------------------------------
3d: 25.5ms +/- 5.3% (cube 8.5ms +/- 4.4%, morph 6.2ms +/- 7.3%, raytrace 10.8ms +/- 6.8%)
access: 11.4ms +/- 4.4% (binary-trees 1.0ms +/- 0.0%, fannkuch 5.1ms +/- 4.4%, nbody 3.0ms +/- 0.0%, nsieve 2.3ms +/- 15.0%)
bitops: 7.1ms +/- 3.2% (3bit-bits-in-byte 1.0ms +/- 0.0%, bits-in-byte 2.0ms +/- 0.0%, bitwise-and 1.1ms +/- 20.5%, nsieve-bits 3.0ms +/- 0.0%)
controlflow: 1.2ms +/- 25.1% (recursive 1.2ms +/- 25.1%)
crypto: 12.7ms +/- 2.7% (aes 7.7ms +/- 4.5%, md5 3.0ms +/- 0.0%, sha1 2.0ms +/- 0.0%)
date: 19.6ms +/- 2.5% (format-tofte 10.7ms +/- 3.2%, format-xparb 8.9ms +/- 2.5%)
math: 11.7ms +/- 5.8% (cordic 3.0ms +/- 0.0%, partial-sums 6.7ms +/- 10.1%, spectral-norm 2.0ms +/- 0.0%)
regexp: 9.0ms +/- 9.2% (dna 9.0ms +/- 9.2%)
string: 45.4ms +/- 3.3% (base64 4.0ms +/- 0.0%, fasta 5.8ms +/- 5.2%, tagcloud 10.9ms +/- 7.8%, unpack-code 19.6ms +/- 3.9%, validate-input 5.1ms +/- 4.4%)

After applying patch:
============================================
RESULTS (means and 95% confidence intervals)
--------------------------------------------
Total: 144.1ms +/- 1.7%
--------------------------------------------
3d: 26.7ms +/- 6.6% (cube 9.1ms +/- 5.8%, morph 6.4ms +/- 9.4%, raytrace 11.2ms +/- 7.2%)
access: 11.3ms +/- 4.3% (binary-trees 1.1ms +/- 20.5%, fannkuch 5.0ms +/- 0.0%, nbody 3.0ms +/- 0.0%, nsieve 2.2ms +/- 13.7%)
bitops: 7.4ms +/- 5.0% (3bit-bits-in-byte 1.0ms +/- 0.0%, bits-in-byte 2.0ms +/- 0.0%, bitwise-and 1.4ms +/- 26.4%, nsieve-bits 3.0ms +/- 0.0%)
controlflow: 1.2ms +/- 25.1% (recursive 1.2ms +/- 25.1%)
crypto: 12.9ms +/- 1.8% (aes 7.9ms +/- 2.9%, md5 3.0ms +/- 0.0%, sha1 2.0ms +/- 0.0%)
date: 19.8ms +/- 1.5% (format-tofte 10.8ms +/- 2.8%, format-xparb 9.0ms +/- 0.0%)
math: 11.0ms +/- 0.0% (cordic 3.0ms +/- 0.0%, partial-sums 6.0ms +/- 0.0%, spectral-norm 2.0ms +/- 0.0%)
regexp: 8.9ms +/- 2.5% (dna 8.9ms +/- 2.5%)
string: 44.9ms +/- 2.2% (base64 4.0ms +/- 0.0%, fasta 5.8ms +/- 5.2%, tagcloud 10.6ms +/- 3.5%, unpack-code 19.4ms +/- 2.6%, validate-input 5.1ms +/- 4.4%)

I think the performance impact of this is small and acceptable.
Geoffrey Garen
Comment 11
2012-10-07 11:44:48 PDT
Comment on
attachment 167475
[details]
Patch
View in context:
https://bugs.webkit.org/attachment.cgi?id=167475&action=review
Ollie, this patch looks good to me. What do you think? I believe the case where 53 bits of randomness was a performance regression was back when we used Windows super crypto random, which boils a pot of tea and measures the molecular temperature before returning.
> LayoutTests/fast/js/script-tests/53bits-random.js:5
> +var MAX = 1000000;
toString(2) will be 55 in length if the last bit is 1. The odds of that are 1/2. If we want the odds of test failure to be 1/1000000, we should only loop about 20 times. Looping 1000000 times will hide errors in the random distribution of the last bit.
> LayoutTests/fast/js/script-tests/53bits-random.js:10
> + if (Math.random().toString(2).length === (53 + 2)) {
> +     found = true;
You should look *both* for a case where the last bit is 1, *and* the case where the last bit is 0. Otherwise, putting all 1's in the non-random bits will pass this test.
Oliver Hunt
Comment 12
2012-10-07 12:06:43 PDT
(In reply to comment #10)
Those are fairly giant error margins. Can you run with --runs 30, and make sure you aren't doing anything else at all while running? (Have all other applications closed, etc.)
Yusuke Suzuki
Comment 13
2012-10-07 12:41:13 PDT
(In reply to
comment #12
)
> Those are fairly giant error margins, can you run with --runs 30 and make sure you aren't doing anything else at all while running? (Have all other applications closed, etc)
Sure. I closed all other applications and ran SunSpider with --runs=30. JSC is compiled as Release.

Before applying patch:
============================================
RESULTS (means and 95% confidence intervals)
--------------------------------------------
Total: 132.7ms +/- 0.3%
--------------------------------------------
3d: 24.0ms +/- 0.0% (cube 8.0ms +/- 0.0%, morph 6.0ms +/- 0.0%, raytrace 10.0ms +/- 0.0%)
access: 10.8ms +/- 2.1% (binary-trees 1.1ms +/- 11.4%, fannkuch 5.0ms +/- 1.4%, nbody 2.6ms +/- 7.3%, nsieve 2.1ms +/- 4.6%)
bitops: 6.5ms +/- 3.3% (3bit-bits-in-byte 1.0ms +/- 0.0%, bits-in-byte 2.0ms +/- 0.0%, bitwise-and 1.0ms +/- 6.6%, nsieve-bits 2.4ms +/- 7.8%)
controlflow: 1.0ms +/- 0.0% (recursive 1.0ms +/- 0.0%)
crypto: 11.2ms +/- 1.3% (aes 7.0ms +/- 0.0%, md5 2.2ms +/- 6.5%, sha1 2.0ms +/- 0.0%)
date: 18.5ms +/- 1.0% (format-tofte 10.0ms +/- 0.0%, format-xparb 8.5ms +/- 2.2%)
math: 10.8ms +/- 1.4% (cordic 2.8ms +/- 5.4%, partial-sums 6.0ms +/- 0.0%, spectral-norm 2.0ms +/- 0.0%)
regexp: 8.0ms +/- 0.0% (dna 8.0ms +/- 0.0%)
string: 42.0ms +/- 0.0% (base64 4.0ms +/- 0.0%, fasta 5.0ms +/- 0.0%, tagcloud 10.0ms +/- 0.0%, unpack-code 18.0ms +/- 0.0%, validate-input 5.0ms +/- 0.0%)

After applying patch:
============================================
RESULTS (means and 95% confidence intervals)
--------------------------------------------
Total: 133.0ms +/- 0.4%
--------------------------------------------
3d: 23.9ms +/- 0.4% (cube 8.0ms +/- 0.0%, morph 6.0ms +/- 1.1%, raytrace 10.0ms +/- 0.7%)
access: 10.9ms +/- 2.3% (binary-trees 1.0ms +/- 0.0%, fannkuch 5.1ms +/- 2.2%, nbody 2.6ms +/- 7.2%, nsieve 2.2ms +/- 6.5%)
bitops: 6.6ms +/- 3.6% (3bit-bits-in-byte 1.0ms +/- 0.0%, bits-in-byte 2.0ms +/- 0.0%, bitwise-and 1.1ms +/- 8.9%, nsieve-bits 2.5ms +/- 7.6%)
controlflow: 1.0ms +/- 0.0% (recursive 1.0ms +/- 0.0%)
crypto: 11.3ms +/- 1.6% (aes 7.0ms +/- 0.0%, md5 2.3ms +/- 7.7%, sha1 2.0ms +/- 0.0%)
date: 18.6ms +/- 1.0% (format-tofte 10.0ms +/- 0.7%, format-xparb 8.5ms +/- 2.2%)
math: 10.5ms +/- 1.8% (cordic 2.5ms +/- 7.5%, partial-sums 6.0ms +/- 0.0%, spectral-norm 2.0ms +/- 0.0%)
regexp: 8.0ms +/- 0.0% (dna 8.0ms +/- 0.0%)
string: 42.2ms +/- 0.4% (base64 4.0ms +/- 0.0%, fasta 5.1ms +/- 2.5%, tagcloud 10.0ms +/- 0.0%, unpack-code 18.1ms +/- 0.5%, validate-input 5.0ms +/- 0.0%)
Yusuke Suzuki
Comment 14
2012-10-07 12:49:17 PDT
Comment on
attachment 167475
[details]
Patch
View in context:
https://bugs.webkit.org/attachment.cgi?id=167475&action=review
Thanks for your review. I'll fix test and attach revised patch.
>> LayoutTests/fast/js/script-tests/53bits-random.js:5
>> +var MAX = 1000000;
>
> toString(2) will be 55 in length if the last bit is 1. The odds of that are 1/2. If we want the odds of test failure to be 1/1000000, we should only loop about 20 times. Looping 1000000 times will hide errors in the random distribution of the last bit.
Thanks for pointing that out. You're right, so I'll change it to 20.
>> LayoutTests/fast/js/script-tests/53bits-random.js:10
>> + found = true;
>
> You should look *both* for a case where the last bit is 1, *and* the case where the last bit is 0. Otherwise, putting all 1's in the non-random bits will pass this test.
You're right. I'll make sure that a 53 + 1 length pattern appears (that is, the last bit is 0).
Yusuke Suzuki
Comment 15
2012-10-07 12:50:40 PDT
> You're right. I'll make sure that 53 + 1 length pattern comes (that is, last bit is 0).
Ah, that's not good enough. I'll make sure that a pattern shorter than 55 appears.
Yusuke Suzuki
Comment 16
2012-10-07 12:59:08 PDT
Created
attachment 167484
[details]
Patch: attached rev3 patch
Gavin Barraclough
Comment 17
2012-10-07 14:45:23 PDT
> I believe the case where 53 bits of randomness was a performance regression was back when we used Windows super crypto random, which boils a pot of tea and measures the molecular temperature before returning.
As an Englishman, I am offended by perfectly innocent pots of tea being unfairly defamed and slandered like this. ;-)
Yusuke Suzuki
Comment 18
2012-10-12 09:42:59 PDT
(In reply to
comment #12
) Oliver, is that (
https://bugs.webkit.org/show_bug.cgi?id=98586#c13
) an acceptable result for you? Personally I think the difference is within the error margins, and no performance regression is observed.
Brent Fulgham
Comment 19
2012-11-25 23:51:12 PST
So, is this patch ready to land?
Yusuke Suzuki
Comment 20
2012-11-26 06:53:26 PST
(In reply to
comment #19
)
> So, is this patch ready to land?
The rev3 patch is not reviewed yet, but it fixes the issues pointed out in comment #11. And in comment #13, I confirmed that this patch causes no performance regression. So I think this patch is ready to land. I would appreciate it if anyone would review the rev3 patch and land it :)
Brent Fulgham
Comment 21
2012-11-26 10:41:21 PST
(In reply to
comment #20
)
Sorry :-) That message was aimed at barraclough, ggaren, or olliej.
Oliver Hunt
Comment 22
2012-11-26 11:38:49 PST
Comment on
attachment 167484
[details]
Patch
The more I look at this, the less happy I am. In principle I am in favour of having more random bits in the double, but the reality is that this method of getting that randomness produces a result where the low 32 bits will be extremely highly correlated with the high 21 bits.

I'd almost be tempted to pull double get() out of WeakRandom, and have a WeakRandomDouble class that has two separately initialized WeakRandom members, and use those two separate generators for the double. This will remove the interdependency between the two. As it is, you've halved the period of the generator (something I may look at entirely independently later anyway), and you have not really gained an increase in the randomness of the low bits.
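The two-generator idea can be sketched roughly as below. This is a minimal illustration only: TinyRandom is a stand-in xorshift32 generator (JSC's actual WeakRandom uses its own recurrence), and all class and member names here are hypothetical, not the patch's.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for JSC's WeakRandom: a simple xorshift32 generator.
class TinyRandom {
public:
    explicit TinyRandom(uint32_t seed) : m_state(seed ? seed : 1) { }
    uint32_t getUint32()
    {
        m_state ^= m_state << 13;
        m_state ^= m_state >> 17;
        m_state ^= m_state << 5;
        return m_state;
    }
private:
    uint32_t m_state; // xorshift state must stay nonzero
};

// Two independently seeded generators: one supplies the high 21 bits,
// the other the low 32 bits, so the two halves of the double are not
// correlated through shared generator state.
class WeakRandomDoubleSketch {
public:
    WeakRandomDoubleSketch(uint32_t lowSeed, uint32_t highSeed)
        : m_low(lowSeed), m_high(highSeed) { }
    double get()
    {
        uint64_t high = m_high.getUint32() >> 11; // 21 bits
        uint64_t low = m_low.getUint32();         // 32 bits
        return ((high << 32) | low) / static_cast<double>(1ULL << 53);
    }
private:
    TinyRandom m_low;
    TinyRandom m_high;
};
```

Since each generator advances independently, drawing a double does not consume state from the 32-bit path, and neither generator's period is halved.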
Yusuke Suzuki
Comment 23
2012-11-27 06:36:31 PST
(In reply to
comment #22
) Thanks for your review.
Right, thanks. So I'll create a new class, WeakDoubleRandom, and add it to the JSGlobalObject members. WeakDoubleRandom has 2 WeakRandom objects and generates a random double using them.

JSGlobalObject::weakRandomInteger needs a uint32_t random value. But I think it is better to generate the uint32_t value with a WeakRandom object instead of with the WeakDoubleRandom object. If we generated a 32-bit value with WeakDoubleRandom, we could take 3 approaches:

1. Generate a 32-bit value using one WeakRandom object and return it. The other WeakRandom object is not used.
2. Generate a 32-bit value using one WeakRandom object and return it. The other WeakRandom object is advanced, but its value is discarded.
3. Generate two 32-bit values using the two WeakRandom objects, discard 16 bits from each, and combine them into 32 bits.

I think 1 is not good for WeakDoubleRandom's randomness, and 2 and 3 are not good for weakRandomInteger performance. So I suggest adding a WeakDoubleRandom to JSGlobalObject to generate double numbers, and leaving JSGlobalObject's WeakRandom object for 32-bit random numbers. I'll create a revised patch soon.
Yusuke Suzuki
Comment 24
2012-11-27 07:52:04 PST
Created
attachment 176267
[details]
Patch: attached rev4 patch
Yusuke Suzuki
Comment 25
2012-11-27 08:44:33 PST
Created
attachment 176277
[details]
Patch
Oops. Rebased the topic branch.
Anders Carlsson
Comment 26
2014-02-05 11:04:24 PST
Comment on
attachment 176277
[details]
Patch
Clearing review flag on patches from before 2014. If this patch is still relevant, please reset the r? flag.
Yusuke Suzuki
Comment 27
2014-02-06 01:01:37 PST
(In reply to
comment #26
)
> (From update of
attachment 176277
[details]
) > Clearing review flag on patches from before 2014. If this patch is still relevant, please reset the r? flag.
Thank you. I'll rebase this patch and reset r? flag.
Yusuke Suzuki
Comment 28
2014-08-01 08:12:16 PDT
Created
attachment 235888
[details]
Patch
Yusuke Suzuki
Comment 29
2014-08-01 08:14:17 PDT
Comment on
attachment 235888
[details]
Patch View in context:
https://bugs.webkit.org/attachment.cgi?id=235888&action=review
Added comments.
> Source/JavaScriptCore/runtime/WeakDoubleRandom.h:61
> + m_high.getUint32();
In this patch, when generating a 32-bit value, the value drawn from the random generator for the high bits is discarded.
Yusuke Suzuki
Comment 30
2014-08-01 08:20:20 PDT
Comment on
attachment 235888
[details]
Patch View in context:
https://bugs.webkit.org/attachment.cgi?id=235888&action=review
Added comments, part 2.
> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:175
> + , m_weakDoubleRandom(generateSeed(), generateSeed())
One concern is that, if these 2 seed values become the same, the states of the 2 random generators used in WeakDoubleRandom become the same. If that is not acceptable, I think there are 2 ways to prevent it:

1. Generate 2 seed values from 1 seed value, perhaps by initializing a WeakRandom with the provided seed and generating 2 seeds from it.
2. Use 1 WeakRandom in WeakDoubleRandom and generate 2 values in WeakDoubleRandom::get. This is the typical (I think) and old solution from the previous patch, but it has the problems pointed out in comment #22.

What do you think about this?
Yusuke Suzuki
Comment 31
2014-08-01 09:03:33 PDT
Created
attachment 235890
[details]
Patch
Oliver Hunt
Comment 32
2014-08-01 10:32:34 PDT
(In reply to
comment #31
)
> Created an attachment (id=235890) [details]
> Patch
I would simply do:

    double WeakRandom::get()
    {
        uint64_t result = advance();
        result <<= 32;
        result += advance();
        result &= (1ULL << 53) - 1;
        return result / (double)(1ULL << 53);
    }

which should provide sufficient entropy for Math.random(). That said, I am unconvinced that Math.random() really requires this; it's not expected to be a good source of entropy.
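A self-contained version of this suggestion might look like the sketch below. The advance() here is a hypothetical xorshift stand-in, since the real WeakRandom::advance() is internal to JSC; note the 64-bit literals in the shifts, as a plain (1 << 53) would overflow a 32-bit int.

```cpp
#include <cassert>
#include <cstdint>

// Placeholder for WeakRandom::advance(): a xorshift32 step stands in
// for JSC's actual generator.
static uint32_t advance(uint32_t& state)
{
    state ^= state << 13;
    state ^= state >> 17;
    state ^= state << 5;
    return state;
}

// Concatenate two 32-bit draws into 64 bits, mask down to the 53 bits
// a double can represent exactly, and scale into [0, 1).
double get53BitRandom(uint32_t& state)
{
    uint64_t result = advance(state);
    result <<= 32;
    result += advance(state);
    result &= (1ULL << 53) - 1;
    return result / static_cast<double>(1ULL << 53);
}
```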
Yusuke Suzuki
Comment 33
2014-08-01 10:56:54 PDT
Comment on
attachment 235890
[details]
Patch View in context:
https://bugs.webkit.org/attachment.cgi?id=235890&action=review
Moved comments to the latest patch.
> Source/JavaScriptCore/runtime/JSGlobalObject.cpp:175
> + , m_weakDoubleRandom(generateSeed(), generateSeed())
One concern is that, if these 2 seed values become the same, the states of the 2 random generators used in WeakDoubleRandom become the same. If that is not acceptable, I think there are 2 ways to prevent it:

1. Generate 2 seed values from 1 seed value, perhaps by initializing a WeakRandom with the provided seed and generating 2 seeds from it.
2. Use 1 WeakRandom in WeakDoubleRandom and generate 2 values in WeakDoubleRandom::get. This is the typical (I think) and old solution from the previous patch, but it has the problems pointed out in comment #22.

What do you think about this?
> Source/JavaScriptCore/runtime/WeakDoubleRandom.h:60
> + // Discard a generated value from the random generator for high bit.
In this patch, when generating a 32-bit value, the value drawn from the random generator for the high bits is discarded.
Yusuke Suzuki
Comment 34
2014-08-01 11:51:43 PDT
(In reply to
comment #32
)
Ah, OK. Investigating the other engines: SpiderMonkey has 53-bit entropy, but V8 has 32-bit entropy. So larger entropy is nice, I think, but 32-bit is acceptable.
Yusuke Suzuki
Comment 35
2014-08-01 13:24:44 PDT
Closing the bug. Thanks for your reviews :)