Gmail is almost always allocating some memory, so bmalloc::Scavenger always thinks the heap is growing, while the amount of actually freeable memory is small. So what happens is:

1. bmalloc::Scavenger repeatedly wakes up.
2. bmalloc::Scavenger performs a scavenge, but it only performs a small amount of madvise work.
3. So, (2) takes a short time.
4. bmalloc::Scavenger schedules the next scavenge after a really short delay, because (2) was very short.

On the Gmail site, even after reaching a steady state, we invoke the scavenge operation every 500-700ms.
Is this bad? If so, why?
Presumably power usage.
(In reply to Geoffrey Garen from comment #1)
> Is this bad? If so, why?

Two reasons:

1. Power usage. It frequently wakes up the scavenger, even in Gmail's steady state.
2. Scheduling the next scavenge based on the time used by the previous scavenge does not make sense to me. If the previous scavenge took only a short time, that means we did not have enough pages to free. Why should we perform a scavenge again very soon when we did not find enough pages last time?
> 1. Power usage. It frequently wakes up the scavenger, even in Gmail's
> steady state.

It may be beneficial to scavenge, even in steady state, if the steady state workload makes free memory available. The scavenge reduces memory pressure (even if only temporarily).

(Of course, you're right that we should consider power usage too.)

> 2. Scheduling the next scavenge based on the time used by the previous
> scavenge does not make sense to me. If the previous scavenge took only a
> short time, that means we did not have enough pages to free. Why should we
> perform a scavenge again very soon when we did not find enough pages last
> time?

The current backoff is designed to return memory to the OS as soon as possible without regressing throughput. That's the rationale for computing a delay based on how much time we spent scavenging.

You're right that the algorithm doesn't do anything explicit to avoid frequent wakeups if those wakeups are short. Maybe it should.

We do know for sure that at least one page is available to be returned to the OS, so we do want the scavenger to run.

One option that could reduce wakeups would be to do some scavenging synchronously on the main thread, and only schedule the scavenger after the main thread has scavenged too much. Or perhaps there are other ways to reduce wakeups. But we want to be careful to return memory to the OS when we can.
(In reply to Geoffrey Garen from comment #4) Another option is to include "wakeups per second" in the scavenger's computation of how long it should delay. That way, if the scavenger repeatedly wakes up to free just one page, it will correct itself and reduce wakeups.
(In reply to Geoffrey Garen from comment #4)
> > 1. Power usage. It frequently wakes up the scavenger, even in Gmail's
> > steady state.
>
> It may be beneficial to scavenge, even in steady state, if the steady state
> workload makes free memory available. The scavenge reduces memory pressure
> (even if only temporarily).
>
> (Of course, you're right that we should consider power usage too.)

Yeah, we should periodically scavenge freeable memory even in steady state as long as we have allocations. But, yeah, we can control its rate.

> > 2. Scheduling the next scavenge based on the time used by the previous
> > scavenge does not make sense to me. If the previous scavenge took only a
> > short time, that means we did not have enough pages to free. Why should we
> > perform a scavenge again very soon when we did not find enough pages last
> > time?
>
> The current backoff is designed to return memory to the OS as soon as
> possible without regressing throughput. That's the rationale for computing a
> delay based on how much time we spent scavenging.

Right. The current algorithm basically means, "if we do not hurt performance, we scavenge as frequently as possible".

> You're right that the algorithm doesn't do anything explicit to avoid
> frequent wakeups if those wakeups are short. Maybe it should.
>
> We do know for sure that at least one page is available to be returned to
> the OS, so we do want the scavenger to run.
>
> One option that could reduce wakeups would be to do some scavenging
> synchronously on the main thread, and only schedule the scavenger after the
> main thread has scavenged too much. Or perhaps there are other ways to
> reduce wakeups. But we want to be careful to return memory to the OS when
> we can.
>
> Another option is to include "wakeups per second" in the scavenger's
> computation of how long it should delay. That way, if the scavenger
> repeatedly wakes up to free just one page, it will correct itself and
> reduce wakeups.

Yeah!
I think encoding more of the information we care about into the Scavenger's next-delay calculation sounds really reasonable.