Bug 43390 - Do not CRASH if we run out of room for jit code.
Summary: Do not CRASH if we run out of room for jit code.
Status: RESOLVED FIXED
Alias: None
Product: WebKit
Classification: Unclassified
Component: JavaScriptCore
Version: 528+ (Nightly build)
Hardware: PC OS X 10.5
Importance: P2 Normal
Assignee: Gavin Barraclough
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2010-08-02 16:26 PDT by Gavin Barraclough
Modified: 2010-08-03 18:18 PDT
CC List: 2 users

See Also:


Attachments
The patch (74.94 KB, patch)
2010-08-02 16:38 PDT, Gavin Barraclough
oliver: review-
v2 (72.85 KB, patch)
2010-08-02 16:48 PDT, Gavin Barraclough
oliver: review+
v3! (80.41 KB, patch)
2010-08-03 17:07 PDT, Gavin Barraclough
oliver: review+

Description Gavin Barraclough 2010-08-02 16:26:07 PDT
In most cases we can throw a JS exception, or just choose not to apply an optimization.
Comment 1 Gavin Barraclough 2010-08-02 16:38:44 PDT
Created attachment 63276 [details]
The patch

Just working on testing that this fails correctly, if forced to run out of memory!
Comment 2 Oliver Hunt 2010-08-02 16:45:47 PDT
Comment on attachment 63276 [details]
The patch


JavaScriptCore/assembler/ARMAssembler.cpp:356
 +          for (Jumps::Iterator iter = m_jumps.begin(); iter != m_jumps.end(); ++iter) {
Check "if (!data) return 0;" before this loop.

JavaScriptCore/assembler/ARMv7Assembler.h:1631
 +          if (copy) {
This should bail out early with "if (!copy)" instead.
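
(For context, the shape of the change both comments are asking for is roughly the following. This is a self-contained sketch rather than the actual ARMAssembler/ARMv7Assembler code, and the names are illustrative.)

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    // Illustrative sketch only, not the real JSC assembler: the point of both
    // review comments is that a failed executable-pool allocation has to be
    // propagated as a null result before the buffer is used, rather than
    // letting the jump-linking code run over a null pointer.
    void* executableCopySketch(const unsigned char* code, size_t size)
    {
        // Stand-in for the executable pool allocation, which may now return
        // null when the pool is exhausted.
        unsigned char* data = static_cast<unsigned char*>(std::malloc(size));
        if (!data)
            return 0; // out of executable memory: let the caller fall back or throw

        std::memcpy(data, code, size);
        // ... jump relocation would run here, with a guaranteed-valid buffer ...
        return data;
    }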
Comment 3 Gavin Barraclough 2010-08-02 16:48:58 PDT
Created attachment 63277 [details]
v2
Comment 4 Oliver Hunt 2010-08-02 16:58:18 PDT
Comment on attachment 63277 [details]
v2

r=me
Comment 5 Gabor Loki 2010-08-03 00:17:07 PDT
Does this patch help in any out-of-memory situation? Or is it only a preliminary patch to handle out of memory?

I do not see why this change is good for WebKit. At first glance, after the check of the allocationSuccessful function there are only a few cases where we do not call CRASH. In those cases only very little memory is needed. If WebKit is not able to allocate that, how will we be able to fall back to any other function (which can also allocate local variables or structures to check something)? So, what is the difference between crashing in the JIT or anywhere else? I do not think WebKit can step further with the fall-back code if there is not enough memory. This situation should be handled in a different way.

In some other systems there is a protected memory range where the most important functions reside in (like memory, exception and/or critical gui handling modules). I do not know if it is possible to implement such a thing in WebKit, but this approach would be better than the current one. 

In my opinion this patch only increases the complexity of the code in its current form. It is very easy to forget to check allocationSuccessful after creating a LinkBuffer object. :(

(Additionally, moving the checks to a higher abstraction level always increases the size of the binary.)
Comment 6 Gavin Barraclough 2010-08-03 00:47:08 PDT
Hi Gabor,

> "So, what is the difference to crash in jit or anywhere else?"

No difference – we try to handle memory exhaustion from JS code by throwing an exception throughout JSC – e.g. in the case of string concatenation, stack overflow, array object growing too large.  The reason we do so is to try to prevent semi-malicious code, or code that simply has a very large dataset, from crashing the browser and taking down all your tabs.  Running with the Fixed VM pool would currently mean that code that simply constructed ever larger strings of code & attempted to eval them would take down the browser – and this is not the user experience we want in WebKit.  The cases handled by the patch are the cases that can be triggered by simply executing a large footprint of JS code or regexes.

> "after the check of the allocationSuccessful function there are only several cases when we do not call the CRASH"

That's a little unfair, Loki – if you actually look at the patch it's quite the opposite – there are far fewer cases where we will now CRASH – just 3 (given the 32_64/other versions).  (1) The CRASHes in stub generation should not be reachable, since this happens only once on startup.  (2) The same would be true for the stringGetByVal stub if that was generated at startup; I should change that.  (3) It would be nice to also be able to fall back in the case of native calls, and I think this should be possible by falling back to the generic native call stub that dynamically loads the function pointer, but I chose not to look at this in this patch.  I should be able to sort these cases out in a subsequent patch, but I think they're much less likely to be hit than simply evaling too large a string.

> "It is very likely to forget checking allocationSuccessful after creating a LinkBuffer object."

You're absolutely right.  I meant to add an ASSERT to guard against this, and I forgot to do so.  It would be much better if I did; I'll put up a new copy of the patch with this.
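
(To make the pattern concrete, here is a minimal sketch of a LinkBuffer-style class with the debug-only guard being described. It is simplified and illustrative, not the patch itself.)

    #include <cassert>
    #include <cstddef>
    #include <cstdlib>

    // Simplified stand-in for LinkBuffer: it records whether the underlying
    // executable allocation succeeded, and in debug builds asserts that the
    // caller consulted allocationSuccessful() before the buffer is torn down.
    class LinkBufferSketch {
    public:
        explicit LinkBufferSketch(size_t size)
            : m_code(std::malloc(size)) // stand-in for the executable pool allocation
            , m_checked(false)
        {
        }

        ~LinkBufferSketch()
        {
            assert(m_checked); // forgetting the check is itself treated as a bug
            std::free(m_code);
        }

        bool allocationSuccessful()
        {
            m_checked = true;
            return m_code != 0;
        }

    private:
        void* m_code;
        bool m_checked;
    };

    // A call site can then bail out gracefully instead of CRASHing:
    //
    //     LinkBufferSketch patchBuffer(codeSize);
    //     if (!patchBuffer.allocationSuccessful())
    //         return false; // the caller throws a JS out-of-memory exception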

Suggestions for improvements are welcome, but we need to have a better experience for our users than their browser crashing if they visit a page with too much JS code.

cheers,
G.
Comment 7 Gabor Loki 2010-08-03 01:45:18 PDT
Now, I see the main idea. Thanks.

> the patch it's quite the opposite – there are far fewer cases where we will now CRASH

I was speaking about the individual cases in the patch, not the possible occurrences of these cases at runtime, but you are right. The stub generation - which allocates a larger amount of memory than the others - is only executed once. The stub generation fits well into the Fixed VM pool theory, but I feel the others - the ones which fall back - do not take care of out-of-memory. What happens if there is a fall-back (because of out-of-memory) and the native function then creates a local structure or variable which has to be allocated from memory? Is this allocation done in the same Fixed VM pool or in the main heap?

>  we need to have a better experience for our users than their browser crashing if they visit a page with too much JS code.

I was wondering exactly that. Do you have a case or an example which shows the effect of this patch?
Comment 8 Geoffrey Garen 2010-08-03 11:20:04 PDT
> I was wondering exactly that. Do you have a case or an example which shows the effect of this patch?

I believe run-javascriptcore-tests will crash if you set the size of the executable pool to ~32MB.
Comment 9 Gavin Barraclough 2010-08-03 17:07:23 PDT
Created attachment 63392 [details]
v3!

Add an ASSERT that allocationSuccessful is called, and make sure nulls are passed back correctly through poolForSize & alloc.
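
(An illustrative reading of the poolForSize & alloc part, as a toy sketch rather than the real fixed VM pool allocator: the inner pool lookup returns null when nothing can satisfy the request, and alloc passes that null straight back up instead of CRASHing.)

    #include <cstddef>

    // Toy allocator with the failure contract described above; the names and
    // structure are illustrative, not the real JSC allocator.
    struct ToyPool {
        void* allocate(size_t) { return 0; } // pretend this pool is exhausted
    };

    class ToyExecutableAllocator {
    public:
        void* alloc(size_t size)
        {
            ToyPool* pool = poolForSize(size);
            if (!pool)
                return 0; // propagated to the JIT, which throws instead of CRASHing
            return pool->allocate(size);
        }

    private:
        ToyPool* poolForSize(size_t size)
        {
            // The real implementation consults its free lists / spare pool; the
            // key point is that it returns 0 rather than aborting when nothing fits.
            (void)size;
            return 0;
        }
    };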
Comment 10 Oliver Hunt 2010-08-03 17:11:09 PDT
Comment on attachment 63392 [details]
v3!

r=me
Comment 11 Gavin Barraclough 2010-08-03 18:17:18 PDT
(In reply to comment #7)
> Now, I see the main idea. Thanks.
> 
> > the patch it's quite the opposite – there are far fewer cases where we will now CRASH
> 
> I was speaking about the individual cases in the patch, not the possible occurrences of these cases at runtime, but you are right. The stub generation - which allocates a larger amount of memory than the others - is only executed once. The stub generation fits well into the Fixed VM pool theory, but I feel the others - the ones which fall back - do not take care of out-of-memory. What happens if there is a fall-back (because of out-of-memory) and the native function then creates a local structure or variable which has to be allocated from memory? Is this allocation done in the same Fixed VM pool or in the main heap?

This patch doesn't help at all if you've actually run out of memory.  As such, it doesn't really help the on-demand allocator, since that should only fail to allocate JIT code if you've genuinely exhausted VM / committable memory; so yes, if you're out of memory you're likely to fail at the point you next try to malloc an internal datatype anyway.

This also doesn't help the Fixed VM pool in cases where you are actually out of memory.  Only JIT buffers are allocated out of the fixed pool, so if you're actually out of memory then the next malloc will likely be a CRASH.

But switching from the on-demand allocator to the fixed one does introduce a new resource cap: you could run out of space in the fixed VM pool long before you are actually unable to allocate more memory.  As such, switching to the fixed allocator would increase the chance of the browser CRASHing.  The change mitigates this problem somewhat by making it a JS exception and not a CRASH.  It only really helps this specific case (which we have hit) of the fixed VM buffer being exhausted while memory is still available.

> >  we need to have a better experience for our users than their browser crashing if they visit a page with too much JS code.
> 
> I was wondering exactly that. Do you have a case or an example which shows the effect of this patch?

As Geoff says, the JavaScriptCore tests will CRASH in some cases without this patch.  That said, the only platform currently using the fixed allocator is x86-64, and this has plenty of space!  So I've tested with a VM pool capped to an artificially low size, to check that it threw an exception as expected, but I haven't written a test case that attempts to exhaust x86-64's 2GB JIT buffer. :-)
Comment 12 Gavin Barraclough 2010-08-03 18:18:04 PDT
Transmitting file data ........................
Committed revision 64608.

OOI, I had missed a couple of checks; adding the ASSERT caught this!