WebKit Bugzilla
Bug 89111: Heap::shouldCollect() should return true when system memory is low
Status: RESOLVED WONTFIX
https://bugs.webkit.org/show_bug.cgi?id=89111
Yong Li
Reported
2012-06-14 10:10:45 PDT
Currently the heap size is affected by the system RAM size, but the RAM size is calculated only once, upon the first call. So JSC will always allow a large heap even when the system is low on memory. If there is a fast way to query recent memory status, Heap::shouldCollect() should check it.
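A minimal sketch of the idea in this report (all names are hypothetical illustrations, not JSC's real Heap API): shouldCollect() returns true either when the allocation threshold is crossed or when a platform hook reports that system memory is low.

```cpp
#include <cstddef>

// Hypothetical sketch of a memory-pressure-aware shouldCollect().
struct HeapSketch {
    size_t bytesAllocated = 0;
    size_t bytesAllocatedLimit = 32 * 1024 * 1024; // the 32MB threshold discussed below
    bool (*systemMemoryIsLow)() = nullptr;         // fast platform-specific probe (assumed)

    bool shouldCollect() const
    {
        if (bytesAllocated > bytesAllocatedLimit)
            return true;
        // The proposed addition: also collect early under system memory pressure.
        return systemMemoryIsLow && systemMemoryIsLow();
    }
};
```

The rest of the thread debates whether such a probe can be cheap and meaningful enough to consult on the allocation path.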
Attachments
the patch (3.23 KB, patch), 2012-06-14 11:01 PDT, Yong Li, ggaren: review-
use a small limit rather than collecting on every allocation (3.36 KB, patch), 2012-06-15 08:01 PDT, Yong Li, no flags
updated WTF changelog (3.41 KB, patch), 2012-06-15 08:03 PDT, Yong Li, no flags
try again (3.41 KB, patch), 2012-06-15 08:04 PDT, Yong Li, no flags
patch (3.41 KB, patch), 2012-06-15 08:06 PDT, Yong Li, no flags
Last try (3.40 KB, patch), 2012-06-15 08:12 PDT, Yong Li, ggaren: review-
Yong Li
Comment 1
2012-06-14 11:01:58 PDT
Created attachment 147617 [details]: the patch
Yong Li
Comment 2
2012-06-14 11:12:18 PDT
Geoffrey, Filip, any objection? An alternative is to add the #if PLATFORM() check directly to Heap::shouldCollect().
Geoffrey Garen
Comment 3
2012-06-14 13:14:41 PDT
Is there a performance test that demonstrates an advantage to this change?

Just knowing that a system is low on physical memory is not sufficient cause to garbage collect. At the limit, you would garbage collect after every byte allocated, which is O(n^2). I don't think that's a good idea.

Perhaps we could incorporate system load when calculating m_bytesAllocatedLimit. I could make a clearer suggestion if I knew more about the problem you were trying to solve.
Yong Li
Comment 4
2012-06-14 13:37:19 PDT
(In reply to comment #3)
> Is there a performance test that demonstrates an advantage to this change?
>
> Just knowing that a system is low on physical memory is not sufficient cause to garbage collect. At the limit, you would garbage collect after every byte allocated, which is O(n^2). I don't think that's a good idea.
>
> Perhaps we could incorporate system load when calculating m_bytesAllocatedLimit. I could make a clearer suggestion if I knew more about the problem you were trying to solve.
When the system is under memory pressure, WebKit should shrink its memory usage as quickly as possible to avoid a crash due to malloc failure. I know there is a memory pressure handler in WebCore/platform. However, if the WebKit thread is busy executing JS (very often the case), it probably won't get a chance to do anything before the crash. So I'm trying to find a way to make GC happen as early as possible, especially now that the GC threshold can be as big as 32MB. Yes, it would hurt performance. So how about using min(m_bytesAllocatedLimit, smallHeapSize) when system memory is low?
> Perhaps we could incorporate system load when calculating m_bytesAllocatedLimit
This is a good idea. That will require ramSize() to dynamically report currently available system memory.
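The compromise floated here can be sketched as follows (names hypothetical, including the specific smallHeapSize value): under memory pressure, clamp the GC trigger to a small fixed heap size instead of collecting on every allocation, so collection cost still amortizes over many allocations.

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical sketch: a pressure-aware allocation limit that never drops
// below a small fixed heap size, avoiding the O(n^2) collect-per-allocation
// degenerate case.
size_t effectiveAllocationLimit(size_t bytesAllocatedLimit, bool systemMemoryIsLow)
{
    const size_t smallHeapSize = 1 * 1024 * 1024; // assumed small-heap threshold
    return systemMemoryIsLow ? std::min(bytesAllocatedLimit, smallHeapSize)
                             : bytesAllocatedLimit;
}
```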
Yong Li
Comment 5
2012-06-14 13:43:42 PDT
I know low physical memory doesn't necessarily mean allocation failure on an OS that supports paging (disk swapping). So this is a very platform-specific thing. I just don't know the neatest way to handle it.
Geoffrey Garen
Comment 6
2012-06-14 15:32:45 PDT
> Especially now the GC threshold can be as big as 32MB.
If the 32MB isn't appropriate for your platform, don't use it.
> However if webkit thread is busy in executing JS (very often), it probably cannot get a chance to do anything before the crash.
Is this a real problem or a theoretical problem that you're trying to solve?
Yong Li
Comment 7
2012-06-15 07:30:14 PDT
(In reply to comment #6)
> > Especially now the GC threshold can be as big as 32MB.
>
> If the 32MB isn't appropriate for your platform, don't use it.
>
> > However if webkit thread is busy in executing JS (very often), it probably cannot get a chance to do anything before the crash.
>
> Is this a real problem or a theoretical problem that you're trying to solve?
Honestly it is still a theoretical one. I've seen a real problem where JS execution kept running until OOM (Bug 79555). That was fixed by making JSString report its actual memory cost, so the JS engine collects garbage in time before exhausting all memory. When the system is under memory pressure, though, a 32MB waste in the heap can lead to a similar situation: JSString does report its memory cost, but the total allocated bytes stay below the limit, so no GC happens.
> If the 32MB isn't appropriate for your platform, don't use it.

That means another #if PLATFORM() in Heap.cpp, right? I've noticed a large heap size speeds up page load significantly. However, system memory status can change quickly on a multi-tasking OS, so a dynamic threshold would be better. If there's no objection, I can work on a patch for that.
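The reporting mechanism referenced from bug 79555 can be sketched roughly like this (types and names are simplified illustrations, not JSC's real interface): objects holding large out-of-line buffers report that cost to the heap, so the allocation total crosses the GC threshold sooner and collection happens in time.

```cpp
#include <cstddef>

// Hypothetical sketch of extra-memory-cost reporting driving earlier GC.
struct ReportingHeap {
    size_t bytesAllocated = 0;
    size_t bytesAllocatedLimit;

    explicit ReportingHeap(size_t limit) : bytesAllocatedLimit(limit) {}

    // Returns true when the caller should trigger a collection.
    bool reportExtraMemoryCost(size_t cost)
    {
        bytesAllocated += cost;
        return bytesAllocated > bytesAllocatedLimit;
    }
};
```

The failure mode described above is exactly the case where these reports arrive but the total stays under a large (e.g. 32MB) limit, so no collection fires.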
Yong Li
Comment 8
2012-06-15 08:01:13 PDT
Created attachment 147815 [details]: use a small limit rather than collecting on every allocation
Yong Li
Comment 9
2012-06-15 08:03:24 PDT
Created attachment 147817 [details]: updated WTF changelog
Yong Li
Comment 10
2012-06-15 08:04:35 PDT
Created attachment 147818 [details]: try again
Yong Li
Comment 11
2012-06-15 08:06:05 PDT
Created attachment 147820 [details]: patch
Yong Li
Comment 12
2012-06-15 08:12:06 PDT
Created attachment 147821 [details]: Last try. Argh... kept uploading the same patch... sorry, everyone.
Geoffrey Garen
Comment 13
2012-06-15 10:44:47 PDT
Comment on attachment 147821 [details]: Last try

Thinking theoretically, I see how it's elegant to GC sooner if the system is running out of memory. However, in practice, this kind of policy has had very surprising negative consequences. Namely:

- Many systems always claim to be "running out of memory" because they aggressively fill RAM with virtual memory caches. On these systems, the "GC sooner" policy collects way too often, and O(n^2) at the limit. This has happened before.

- Some systems dynamically free up memory by killing unneeded processes, evicting VM caches, or paging -- but only if we request new VM pages first. On these systems, the "GC sooner" policy substantially reduces GC performance for no practical gain.

- Some systems respond to memory pressure by swapping portions of the heap into compressed or paged storage. On these systems, the "GC sooner" policy makes low memory conditions orders of magnitude worse by constantly swapping the GC heap back into cached/uncompressed/unpaged storage, evicting everything else. This has happened before.

- Some systems can only tell you if you're "running out of memory" by running an expensive operation. On these systems, the "GC sooner" policy is very slow because it performs an expensive operation very frequently.

These data points make me think that this patch would be a serious performance regression. On the meta level, these data points make me very suspicious of any GC policy change that isn't accompanied by a measurable benefit in some benchmark or known user scenario on a reasonable variety of systems. The outcome is too unpredictable for us to operate on theory alone.

Since this patch is Blackberry-only, and doesn't have any data to prove its benefit, I'm marking it r-.

I would reconsider if I saw clear evidence of a performance or usability advantage on a variety of systems.
Yong Li
Comment 14
2012-06-15 11:16:41 PDT
(In reply to comment #13)
> (From update of attachment 147821 [details])
> Thinking theoretically, I see how it's elegant to GC sooner if the system is running out of memory.
>
> However, in practice, this kind of policy has had very surprising negative consequences. Namely:
>
> - Many systems always claim to be "running out of memory" because they aggressively fill RAM with virtual memory caches. On these systems, the "GC sooner" policy collects way too often, and O(n^2) at the limit. This has happened before.
>
> - Some systems dynamically free up memory by killing unneeded processes, evicting VM caches, or paging -- but only if we request new VM pages first. On these systems, the "GC sooner" policy substantially reduces GC performance for no practical gain.
>
> - Some systems respond to memory pressure by swapping portions of the heap into compressed or paged storage. On these systems, the "GC sooner" policy makes low memory conditions orders of magnitude worse by constantly swapping the GC heap back into cached/uncompressed/unpaged storage, evicting everything else. This has happened before.
>
> - Some systems can only tell you if you're "running out of memory" by running an expensive operation. On these systems, the "GC sooner" policy is very slow because it performs an expensive operation very frequently.
>
> These data points make me think that this patch would be a serious performance regression. On the meta level, these data points make me very suspicious of any GC policy change that isn't accompanied by a measurable benefit in some benchmark or known user scenario on a reasonable variety of systems. The outcome is too unpredictable for us to operate on theory alone.
>
> Since this patch is Blackberry-only, and doesn't have any data to prove its benefit, I'm marking it r-.
>
> I would reconsider if I saw clear evidence of a performance or usability advantage on a variety of systems.
Thanks for listing these scenarios where "GC sooner" isn't good. I understand this is a very system/platform-specific thing. So my new plan is to do it in GCActivityCallback.
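The GCActivityCallback direction mentioned here might look roughly like this (hypothetical types, not WebKit's actual classes): the platform's memory pressure handler schedules a collection through a callback, keeping the policy out of the per-allocation shouldCollect() path that comment 13 objects to.

```cpp
#include <functional>

// Hypothetical sketch: a pressure notification drives GC via a callback,
// so the platform decides when memory pressure is real.
struct GCActivitySketch {
    std::function<void()> requestCollection;

    void didReceiveMemoryPressureWarning()
    {
        if (requestCollection)
            requestCollection(); // no-op until a collection request is wired up
    }
};
```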