Bug 80797 - Argument length limited to 65536
Status: RESOLVED INVALID
Product: WebKit
Classification: Unclassified
Component: JavaScriptCore
Version: 528+ (Nightly build)
Hardware: Macintosh Intel Mac OS X 10.6
Importance: P3 Minor
Assigned To: Nobody
Reported: 2012-03-11 16:45 PDT by Matthew Caruana Galizia
Modified: 2014-06-18 10:08 PDT

Description Matthew Caruana Galizia 2012-03-11 16:45:05 PDT
Attempting to call a function with more than 65536 arguments results in a RangeError being thrown.

How to reproduce:

var i, l, a, s;
for (i = 0, l = 65537, a = []; i < l; i++) a.push(60); // 65537 copies of char code 60 ('<')
s = String.fromCharCode.apply(String, a); // throws once the argument count exceeds 65536

Expected result:

The arguments are applied to the method.

Actual result:

The following error is thrown: "RangeError: Maximum call stack size exceeded."
Comment 1 Gavin Barraclough 2012-03-11 22:13:48 PDT
This is a restriction we place on code to enforce reasonable resource allocation, and to avoid the need for otherwise unnecessary timeout checking in argument copying loops in the VM.  This isn't something that's likely to change any time soon.

The maximum number of arguments you can pass to a function is always going to be physically limited by the size of the stack.  We do artificially cap the argument count below this right now, and we could reasonably raise the hard limit to around 2^31 or 2^32, but (1) this would still be an arbitrary limit and (2) the stack size limit would never let you get there anyway.

Stack size is finite, and 0xFFFF seems as good an arbitrary limit as any other would be. :-)

Any patch to change this will have to be careful not to introduce integer overflow. I think we may steal a couple of bits from the argument count in either CodeBlock or Executable, and I think we may mix use of uint32_t & int32_t in our handling of argument counts.

Do you have a specific web compatibility concern here?
Comment 2 Matthew Caruana Galizia 2012-03-12 05:37:36 PDT
Thank you for explaining. There is a web compatibility concern: many developers have written implementations of `pack` and other utilities that perform binary operations and rely on `String.fromCharCode.apply` for the final transformation, because in some test cases it's faster than calling `String.fromCharCode` on every iteration of a per-character loop.

See for example:

http://stackoverflow.com/questions/3195865/javascript-html-converting-byte-array-to-string
http://codereview.stackexchange.com/questions/3569/pack-and-unpack-bytes-to-strings
http://stackoverflow.com/questions/5636812/sorting-strings-in-reverse-order-with-backbone-js/5639070

It seems like this won't be trivial to fix in JavaScriptCore, and I understand why it's actually better to have a hard limit than allow developers to write code that performs badly. So perhaps it would be better to warn web developers of this issue at least for now. A warning has been added to Mozilla's docs at https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/apply.
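The chunking workaround those utilities typically end up adopting can be sketched as follows; the helper name and chunk size here are illustrative assumptions, not code from this bug:

```javascript
// Hypothetical helper: convert an array of char codes to a string in
// chunks, so each apply() call stays below the engine's argument limit.
function fromCharCodes(codes) {
  var CHUNK_SIZE = 0x8000; // 32768 -- comfortably under the 65536 cap
  var parts = [];
  for (var i = 0; i < codes.length; i += CHUNK_SIZE) {
    parts.push(String.fromCharCode.apply(null, codes.slice(i, i + CHUNK_SIZE)));
  }
  return parts.join('');
}

// The original repro now succeeds:
var a = [];
for (var i = 0; i < 65537; i++) a.push(60);
var s = fromCharCodes(a); // 65537 '<' characters, no RangeError
```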
Comment 3 Gavin Barraclough 2012-03-12 10:03:25 PDT
Ah, thanks for the examples, & the comment in the docs looks good!

cheers, G.
Comment 4 Elijah Lynn 2014-02-20 04:44:55 PST
The original code sample works in Chrome Version 32.0.1700.107 on Ubuntu. I have 10 GB of memory, if that makes any difference.

This: 
var i, l, a, s;
for (i = 0, l = 65537, a = []; i < l; i++) a.push(60);
s = String.fromCharCode.apply(String, a);

Produces: 
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<...
Comment 5 Mathias Bynens 2014-02-20 04:47:58 PST
(In reply to comment #4)
> The original code sample works in Chrome Version 32.0.1700.107 on Ubuntu. I have 10GB memory if it makes any difference.

This is the WebKit bug tracker, not Chromium’s. The snippet you posted still throws an error in WebKit/JavaScriptCore.
Comment 6 Mathias Bynens 2014-02-20 04:49:21 PST
There are different argument count limits, depending on how you test: http://mathiasbynens.be/demo/javascript-argument-count
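One rough way to measure the limit described above is a binary search over the array length passed to apply(); the helper name here is an assumption, and the value it reports varies by engine, stack size, and call mechanism:

```javascript
// Hypothetical probe: binary-search for the largest array length that
// Function.prototype.apply accepts without throwing a RangeError.
function maxApplyLength(upperBound) {
  function ok(n) {
    try {
      Math.max.apply(null, new Array(n)); // result unused; we only care whether it throws
      return true;
    } catch (e) {
      return false; // e.g. "RangeError: Maximum call stack size exceeded"
    }
  }
  if (ok(upperBound)) return upperBound; // limit lies above our search window
  var lo = 0, hi = upperBound; // invariant: ok(lo) is true
  while (lo < hi) {
    var mid = lo + Math.ceil((hi - lo) / 2); // upper midpoint, always > lo
    if (ok(mid)) lo = mid; else hi = mid - 1;
  }
  return lo;
}

var limit = maxApplyLength(1 << 20); // engine- and stack-dependent
```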
Comment 7 Oliver Hunt 2014-02-20 10:23:15 PST
Not a WebKit bug, but also the semantics of [[Call]] allow an implementation to throw a stack overflow at any call it feels like.
Comment 8 Kyle Simpson 2014-06-18 06:05:04 PDT
Just noting another use-case (specifically around performance) where this limitation is... unfortunate.

If I have two arrays and I want to merge them together, there are these options:

  A = A.concat(B)

vs

  A.push.apply(A,B)


The former is obviously more idiomatic, but it has the unfortunate side effect of creating a new merged array rather than appending to the existing one. So, if you have a "big" array A, you end up duplicating A, and the GC then has to throw away the previous one.

In those cases, `A.push.apply(A,B)` is preferable since it modifies A in place, which avoids the memory duplication and the extra GC work.

But now, obviously, the size of B is limited to ~65k items.

That still is sorta OK if A is "big" but B is, relatively speaking, "small". But it is still highly unfortunate that code would have to know implementation-dependent limits on such things.

I wonder if it would be possible for an implementation to detect such a push.apply(..) case and handle it more gracefully to work around the limitation on how many params can be passed. It could see "wow, B is really big, we can't pass it in all at once, but we can rewrite it internally to the rough equivalent of..."

A.push.apply(A,B) -->

for (var len=B.length, s = 0, m; s<len; ) {
   m = Math.min(s+65000,len);
   A.push.apply(A,B.slice(s,m));
   s = m;
}



------

I see this as similar to the restriction on call-stack size when using recursion. If I write a recursive algorithm that should be tail-call optimized (TCO), but it runs in a browser that doesn't have that capability, it can fail. It's unfortunate that I have to know about and guard against such things.

That's why seeing ES6 mandate TCO (they are still doing that, right!?) was so nice, because it signals a time in the future when there's a very valid programming technique which will no longer be susceptible to arbitrary, implementation-dependent limitations.
Comment 9 Oliver Hunt 2014-06-18 10:08:28 PDT
You can't expect arbitrarily large argument lists in any implementation - generally that's not the purpose of .apply (you could use spread as well).  We also can't detect the call to push in advance as we don't know we're in push until after we've already called it, and that means we have to have already copied your argument array onto the stack.
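Until then, callers have to do the chunking themselves; a minimal runnable sketch of the in-place merge, under an assumed safe chunk size (spread, `A.push(...B)`, copies the arguments onto the stack too and so hits the same kind of limit):

```javascript
// Hypothetical helper: append B onto A in place, in chunks small enough
// that each apply() call stays under the engine's argument-count limit.
function appendInPlace(A, B) {
  var CHUNK = 32768; // assumed safe chunk size, well under 65536
  for (var s = 0; s < B.length; s += CHUNK) {
    Array.prototype.push.apply(A, B.slice(s, s + CHUNK));
  }
  return A;
}

var big = new Array(100000);
for (var i = 0; i < big.length; i++) big[i] = i;
var A = [-1];
appendInPlace(A, big); // A now has 100001 elements, no RangeError
```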
