WebKit Bugzilla
Bug 195895: [BMalloc] Scavenger should react to recent memory activity
https://bugs.webkit.org/show_bug.cgi?id=195895
Status: RESOLVED FIXED

Summary

Reported by Michael Saboff, 2019-03-18 10:53:12 PDT
The current bmalloc scavenging code uses fixed timeouts to schedule scavenging. It has two scavenging modes: a full scavenge and a partial scavenge. The partial scavenge only frees LargeRanges, while the full scavenge frees objects of various sizes. We have discussed changing the scavenger so that it adapts to the current sizes of the various freelists and does not free items on a freelist that have been used recently.
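In rough terms, the "recently used" idea is a two-pass hysteresis. A minimal, self-contained sketch of it follows (the FreeRegion/scavenge names are illustrative, not bmalloc's actual API): allocation marks a region as used, and the scavenger only decommits regions whose mark has stayed clear for a full pass, so recently active freelists get a grace period.

    #include <vector>

    // Each free region carries a bit recording whether it was touched since the
    // previous scavenger pass (a stand-in for bmalloc's m_usedSinceLastScavenge).
    struct FreeRegion {
        bool usedSinceLastScavenge { false };
        bool decommitted { false };
    };

    // The allocator would set the bit whenever it hands out memory from a region.
    void noteUsed(FreeRegion& region)
    {
        region.usedSinceLastScavenge = true;
        region.decommitted = false;
    }

    // One scavenger pass: regions touched since the last pass are skipped (their
    // bit is cleared but the memory stays committed); regions untouched across
    // two consecutive passes are returned to the OS.
    void scavenge(std::vector<FreeRegion>& regions)
    {
        for (auto& region : regions) {
            if (region.usedSinceLastScavenge) {
                region.usedSinceLastScavenge = false;
                continue;
            }
            region.decommitted = true; // stand-in for madvise/decommit
        }
    }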
Attachments
Patch (23.03 KB, patch), 2019-03-18 11:31 PDT, Michael Saboff (ggaren: review+)
Patch for landing (26.02 KB, patch), 2019-03-19 10:30 PDT, Michael Saboff (no flags)

Michael Saboff, Comment 1, 2019-03-18 10:53:22 PDT
<rdar://problem/48517629>

Michael Saboff, Comment 2, 2019-03-18 11:31:14 PDT
Created attachment 365037 [details]: Patch

Geoffrey Garen, Comment 3, 2019-03-18 14:54:25 PDT
Comment on attachment 365037 [details]: Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=365037&action=review

r=me

Please make sure this is unrelated:

** The following JSC stress test failures have been introduced:
    stress/symbol-is-destructed-before-refing-underlying-symbol-impl.js.default
> Source/bmalloc/bmalloc/Chunk.h:66
> +    unsigned char m_usedSinceLastScavenge: 1;
Might as well just say bool here, since we aren't saving any bits.
> Source/bmalloc/bmalloc/Heap.cpp:532
> +    range.setUsedSinceLastScavenge();
We don't need this explicit set because 'range' is a temporary.
> Source/bmalloc/bmalloc/LargeRange.h:60
> +        , m_usedSinceLastScavenge(true)
I think m_used should default to false, and be set to true inside LargeMap::add. That's a little clearer. In the future, we can consider whether it's really optimal for any allocation from a large range to prevent reclamation of the unused portion(s) of the large range.
> Source/bmalloc/bmalloc/LargeRange.h:101
> +    unsigned m_isEligible: 1;
I think m_isEligible is a leftover from the previous partial strategy. You should remove it in a follow-up patch.
> Source/bmalloc/bmalloc/LargeRange.h:102
> +    unsigned m_usedSinceLastScavenge: 1;
You should update the merge function to OR together the m_used bits. I don't think that changes behavior for now, but it's good hygiene.
> Source/bmalloc/bmalloc/Scavenger.cpp:340
> +    if (m_isInMiniMode)
> +        m_waitTime = std::min(std::max(static_cast<long>(timeSpentScavenging) * 50L / 1000L, 25L), 500L);
> +    else
> +        m_waitTime = std::min(std::max(static_cast<long>(timeSpentScavenging) * 150L / 1000L, 100L), 10000L);
Let's do the conversion to milliseconds at the end, so that it's not embedded inside the min/max logic. Also, can we do the conversion using duration_cast rather than manual multiplication?
> Source/bmalloc/bmalloc/Scavenger.h:95
> +    long m_waitTime;
It looks like we always interpret m_waitTime as std::chrono::milliseconds. Can you set the datatype to std::chrono::milliseconds?
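A hedged sketch of that direction (this is not the patch as landed; the 50x/150x factors and clamp bounds come from the quoted code, and the assumption that timeSpentScavenging is a steady_clock duration is an illustration): the wait time lives in std::chrono::milliseconds, duration_cast does the unit conversion, and the clamping happens at the end in milliseconds.

    #include <algorithm>
    #include <chrono>

    std::chrono::milliseconds computeWaitTime(std::chrono::steady_clock::duration timeSpentScavenging, bool isInMiniMode)
    {
        using std::chrono::milliseconds;

        // Scale in the clock's native units first...
        auto scaled = timeSpentScavenging * (isInMiniMode ? 50 : 150);
        // ...then let duration_cast handle the conversion to milliseconds.
        auto waitTime = std::chrono::duration_cast<milliseconds>(scaled);

        // Clamp once, at the end, in milliseconds.
        milliseconds lower = isInMiniMode ? milliseconds(25) : milliseconds(100);
        milliseconds upper = isInMiniMode ? milliseconds(500) : milliseconds(10000);
        return std::min(std::max(waitTime, lower), upper);
    }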

Geoffrey Garen, Comment 4, 2019-03-18 15:20:48 PDT
Comment on attachment 365037 [details]: Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=365037&action=review

r=me
> Source/bmalloc/bmalloc/Heap.cpp:188
> +        if (page->usedSinceLastScavenge()) {
> +            page->clearUsedSinceLastScavenge();
> +            continue;
> +        }
There's a tiny bug where we might not do another scavenge, and we might never free this memory (if no further freeing happens promptly). You can fix that by setting our state back to RunSoon here.
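Continuing the illustrative sketch from the summary above, the fix amounts to reporting that something was skipped and re-arming the scavenger (the ScavengerState enum and return-value plumbing are hypothetical, not bmalloc's actual state machine):

    enum class ScavengerState { Sleep, RunSoon };

    // Same pass as before, but it now reports whether anything was skipped
    // because of the "used since last scavenge" bit. The caller re-arms the
    // scavenger (RunSoon) so the skipped memory is still reclaimed even if no
    // further frees happen promptly.
    ScavengerState scavenge(std::vector<FreeRegion>& regions)
    {
        ScavengerState next = ScavengerState::Sleep;
        for (auto& region : regions) {
            if (region.usedSinceLastScavenge) {
                region.usedSinceLastScavenge = false;
                next = ScavengerState::RunSoon; // come back for this region later
                continue;
            }
            region.decommitted = true;
        }
        return next;
    }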
> Source/bmalloc/bmalloc/Heap.cpp:214
> +    for (auto iter = list.begin(); iter != list.end(); ) {
> +        Chunk* chunk = *iter;
> +        if (chunk->usedSinceLastScavenge()) {
> +            chunk->clearUsedSinceLastScavenge();
> +            ++iter;
> +            continue;
> +        }
> +        ++iter;
> +        list.remove(chunk);
> +        deallocateSmallChunk(chunk, &list - &m_chunkCache[0]);
> +    }
This code will wait for four scavenges before it decommits small chunk memory: the first scavenge clears the bit; the second puts the chunk in the large free list; the third clears the bit in the large free list; and the fourth frees it.

My suggestion (possibly for a follow-up patch): there's really no reason for a Chunk to track usedSinceLastScavenge(). When we free a Chunk, we don't madvise; we just put it in the large free list. So we can do that right away.
> Source/bmalloc/bmalloc/Heap.cpp:221
> +        if (range.usedSinceLastScavenge()) {
> +            range.clearUsedSinceLastScavenge();
> +            continue;
> +        }
Same comment about RunSoon.

Michael Saboff, Comment 5, 2019-03-19 10:30:25 PDT
Created attachment 365180 [details]: Patch for landing

Michael Saboff, Comment 6, 2019-03-19 10:31:03 PDT
Committed r243144: <https://trac.webkit.org/changeset/243144>

Michael Catanzaro, Comment 7, 2019-03-19 13:07:59 PDT
Committed r243165: <https://trac.webkit.org/changeset/243165>

Michael Saboff, Comment 8, 2019-03-21 17:44:40 PDT
Comment on attachment 365037 [details]: Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=365037&action=review

>> Source/bmalloc/bmalloc/Heap.cpp:214
>> +    }
>
> This code will wait for four scavenges before it decommits small chunk memory: The first scavenge clears the bit; the second puts the chunk in the large free list; the third clears the bit in the large free list; and the fourth frees it.
>
> My suggestion (possibly for a follow-up patch): There's really no reason for a Chunk to track usedSinceLastScavenge(). When we free a Chunk, we don't madvise; we just put it in the large free list. So we can do that right away.
This change is being handled in <https://bugs.webkit.org/show_bug.cgi?id=196121>.
>> Source/bmalloc/bmalloc/LargeRange.h:101
>> +    unsigned m_isEligible: 1;
>
> I think m_isEligible is a leftover from the previous partial strategy. You should remove it in a follow-up patch.
The isEligible code is needed as it keeps track of whether or not a range is scheduled for recommitting (scavenging). We can't merge a range that might be decommitted with one that already has been.

Geoffrey Garen, Comment 9, 2019-03-22 10:32:40 PDT
> >> Source/bmalloc/bmalloc/LargeRange.h:101
> >> +    unsigned m_isEligible: 1;
> >
> > I think m_isEligible is a leftover from the previous partial strategy. You should remove it in a follow-up patch.
>
> The isEligible code is needed as it keeps track of whether or not a range is scheduled for recommitting (scavenging). We can't merge a range that might be decommitted with one that already has been.
We don't schedule ranges for future scavenging anymore; we scavenge them synchronously while holding the malloc lock. Also, we can merge a decommitted range with a committed range. Our strategy, encoded in the merge() function, is that we merge the ranges and, as an optimization, remember the left-most committed portion of the range. This is correct because it is OK to commit an already committed range. And it is optimal because it is very important to avoid fragmentation in large free ranges.
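A simplified, self-contained sketch of that merge strategy may help readers following the thread (field names and layout are illustrative, not LargeRange's actual implementation); it also folds in the ORing of the usedSinceLastScavenge bits suggested in comment 3.

    #include <cstddef>

    // A free range that remembers how many bytes at its start are committed.
    // The merge below assumes "left" immediately precedes "right" in memory.
    struct Range {
        char* begin { nullptr };
        size_t size { 0 };
        size_t startPhysicalSize { 0 };   // committed bytes at the start of the range
        bool usedSinceLastScavenge { false };
    };

    Range merge(const Range& left, const Range& right)
    {
        Range merged;
        merged.begin = left.begin;
        merged.size = left.size + right.size;
        // The committed prefix only extends into the right range when the left
        // range is committed end to end; otherwise it is just the left's prefix.
        // Committing an already committed page is harmless, so this stays
        // correct, and merging aggressively keeps large free ranges from
        // fragmenting.
        merged.startPhysicalSize = (left.startPhysicalSize == left.size)
            ? left.size + right.startPhysicalSize
            : left.startPhysicalSize;
        // Good hygiene from comment 3: OR together the "used" bits.
        merged.usedSinceLastScavenge = left.usedSinceLastScavenge || right.usedSinceLastScavenge;
        return merged;
    }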