Bug 84270 - [chromium] Set contents texture manager preferred memory limit based on GpuMemoryManager suggestion.
Summary: [chromium] Set contents texture manager preferred memory limit based on GpuMe...
Status: RESOLVED FIXED
Alias: None
Product: WebKit
Classification: Unclassified
Component: New Bugs
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Importance: P2 Normal
Assignee: Michal Mocny
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2012-04-18 13:09 PDT by Michal Mocny
Modified: 2012-05-02 13:42 PDT
5 users

See Also:


Attachments
Patch (22.13 KB, patch)
2012-04-18 13:14 PDT, Michal Mocny
Patch (22.19 KB, patch)
2012-04-19 14:37 PDT, Michal Mocny
Patch (53.68 KB, patch)
2012-04-26 17:54 PDT, Michal Mocny
Patch (16.65 KB, patch)
2012-04-30 14:18 PDT, Michal Mocny
Patch (17.66 KB, patch)
2012-05-01 14:51 PDT, Michal Mocny
Patch (17.17 KB, patch)
2012-05-02 06:24 PDT, Michal Mocny

Description Michal Mocny 2012-04-18 13:09:25 PDT
[chromium] GpuMemoryManager suggests values for Contents Texture Managers' preferred memory limit.
Comment 1 Michal Mocny 2012-04-18 13:14:33 PDT
Created attachment 137751 [details]
Patch
Comment 2 WebKit Review Bot 2012-04-18 13:16:38 PDT
Attachment 137751 [details] did not pass style-queue:

Failed to run "['Tools/Scripts/check-webkit-style', '--diff-files', u'Source/WebCore/ChangeLog', u'Source/WebCor..." exit_code: 1
Source/WebCore/ChangeLog:8:  You should remove the 'No new tests' and either add and list tests, or explain why no new tests were possible.  [changelog/nonewtests] [5]
Total errors found: 1 in 17 files


If any of these errors are false positives, please file a bug against check-webkit-style.
Comment 3 Michal Mocny 2012-04-18 13:28:14 PDT
This is an early first version, not ready for formal review.  Putting the patch up early for any interested eyeballs.

Requires chromium patch https://chromiumcodereview.appspot.com/10083056.


TODO:
- With the single-threaded compositor, gpu memory allocations are processed by the main thread, not the impl thread, and we don't use a DebugScopedSetImplThread in LRC to change that.
- Tracing seems to be producing odd results.  Not sure if the bug is in my tracing instrumentation, in the memory allocations sent, or simply in my expectations.
- There seems to already be some texture memory tracing data, perhaps I should integrate with that?
- Remove some of the texture manager logic out of CCLTH::didBecomeInvisibleOnImplThread.
- Tests!
- Android?

Work for subsequent patches:
- LRC render layer texture manager assumes contents' reclaim texture limit is always the default.  Will need to plumb the actual value down.  It's not a regression to not do this, and I'm not 100% clear that we even want to, so I'll leave it for a subsequent patch.
Comment 4 Michal Mocny 2012-04-19 14:37:59 PDT
Created attachment 137976 [details]
Patch
Comment 5 Michal Mocny 2012-04-19 14:43:10 PDT
This patch fixes a serious oversight where MaxMemoryLimitBytes was not being updated along with the reclaim limit.

I have written a test to experiment with the effects of running out of texture memory and to confirm that this patch is now working.  I haven't uploaded it because I am not sure yet where browser tests like this should go, or whether they should go in at all.  Looking into it.
Comment 6 Michal Mocny 2012-04-20 12:18:19 PDT
I just got an assertion failure in TextureManager::setMaxMemoryLimitBytes (TextureManager.cpp:111).

Code is:

reduceMemoryToLimit(memoryLimitBytes);
ASSERT(currentMemoryUseBytes() <= memoryLimitBytes);

This seems to imply that either reduceMemoryToLimit is not working, or that currentMemoryUseBytes() increases between the two lines.  Perhaps I am reducing memory on the wrong thread.

Currently we change TextureManager memory limits on the impl thread, but I wonder if this should instead be posted to the main thread.  Investigating.
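For illustration, here is a minimal single-threaded sketch of the invariant that is failing. The class below is a toy stand-in, not the real TextureManager; the eviction policy and names are assumptions:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model: evict oldest textures until total use fits under the new
// limit. Single-threaded, the assert in setMaxMemoryLimitBytes() must
// hold; if another thread can allocate between reduceMemoryToLimit()
// and the assert, the invariant breaks -- the suspected bug above.
class ToyTextureManager {
public:
    void allocate(std::size_t bytes) { m_textures.push_back(bytes); }

    std::size_t currentMemoryUseBytes() const
    {
        std::size_t total = 0;
        for (std::size_t bytes : m_textures)
            total += bytes;
        return total;
    }

    void reduceMemoryToLimit(std::size_t limitBytes)
    {
        // Evict from the front (oldest first) until under the limit.
        while (currentMemoryUseBytes() > limitBytes && !m_textures.empty())
            m_textures.erase(m_textures.begin());
    }

    void setMaxMemoryLimitBytes(std::size_t limitBytes)
    {
        reduceMemoryToLimit(limitBytes);
        // Holds single-threaded; racy under concurrent allocation.
        assert(currentMemoryUseBytes() <= limitBytes);
        m_maxMemoryLimitBytes = limitBytes;
    }

private:
    std::vector<std::size_t> m_textures;
    std::size_t m_maxMemoryLimitBytes = 0;
};
```

In this toy model the assert can only fire if currentMemoryUseBytes() grows between the two statements, which is why the failure points at a cross-thread access.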
Comment 7 Dana Jansens 2012-04-20 12:21:45 PDT
(In reply to comment #6)
> Currently we change TextureManager memory limits on the impl thread, but I wonder if this should instead be posted to the main thread.  Investigating.

Main thread is blocked at CCThreadProxy.cpp:328 during this.
Comment 8 Dana Jansens 2012-04-20 12:22:56 PDT
(In reply to comment #7)
> (In reply to comment #6)
> > Currently we change TextureManager memory limits on the impl thread, but I wonder if this should instead be posted to the main thread.  Investigating.
> 
> Main thread is blocked at CCThreadProxy.cpp:328 during this.

Oh nevermind, I misread your comment.
Comment 9 Michal Mocny 2012-04-20 12:26:45 PDT
Dana does bring up a good point, though: while we do currently change limits on the impl thread, that happens while the main thread is blocked, and here it isn't.

I can either post to the main thread, or (very preferably) set a hint for the main thread to pick up and adjust later.
Comment 10 Nat Duca 2012-04-20 13:09:48 PDT
Comment on attachment 137976 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=137976&action=review

> Source/WebKit/chromium/ChangeLog:3
> +        [chromium] GpuMemoryManager suggests values for Contents Texture Managers' preferred memory limit.

Probably can get a better name by swapping the order of the words. Set contents texture manager preferred memory limit based on GpuMemoryManager suggestion? :)

> Source/WebCore/platform/graphics/chromium/cc/CCLayerTreeHost.cpp:358
> +#if defined(OS_ANDROID)

any reason for #if android?

> Source/WebCore/platform/graphics/chromium/cc/CCLayerTreeHost.cpp:360
> +        m_contentsTextureManager->setMaxMemoryLimitBytes(TextureManager::defaultHighLimitBytes(viewportSize));

Isn't this clobbering the value we got from the callback? Can we make things clearer by restructuring the texture manager to have a setViewportSize method and a setMaxMemoryLimitBytes(). But, not a setPreferredMemoryLimitBytes --- that should be sealed in the class.

Then, the accessor can say "if m_maxMemoryBytes == 0, use the "default" accessors"

Does this make sense?
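A minimal sketch of this restructuring, with all names and the default-limit formula assumed purely for illustration:

```cpp
#include <cassert>
#include <cstddef>

// Sketch: the manager stores a viewport size and an optional explicit
// max; the accessor falls back to a viewport-derived default when no
// explicit limit has been set (m_maxMemoryLimitBytes == 0).
class ToyTextureManager {
public:
    void setViewportSize(std::size_t widthPx, std::size_t heightPx)
    {
        m_viewportPixels = widthPx * heightPx;
    }

    void setMaxMemoryLimitBytes(std::size_t bytes) { m_maxMemoryLimitBytes = bytes; }

    std::size_t maxMemoryLimitBytes() const
    {
        if (!m_maxMemoryLimitBytes)
            return defaultHighLimitBytes();
        return m_maxMemoryLimitBytes;
    }

private:
    // Assumed default: a handful of full-viewport RGBA (4-byte) buffers.
    std::size_t defaultHighLimitBytes() const { return m_viewportPixels * 4 * 8; }

    std::size_t m_viewportPixels = 0;
    std::size_t m_maxMemoryLimitBytes = 0; // 0 means "use the viewport-derived default"
};
```

The preferred limit would stay an internal detail of the class, derived from the max, rather than a second externally settable knob.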

> Source/WebCore/platform/graphics/chromium/cc/CCThreadProxy.cpp:314
> +    m_layerTreeHost->setContentsPreferredMemoryLimitBytesOnImplThread(bytes);

This is thread UNSAFE. You need to postTask this command over.

The rules are:
- Variables suffixed with implThread are only safe to access from functions with  onImplThread suffixes.
- Variables with no suffix are only safe to access from functions with no suffix.
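The postTask pattern these rules imply can be sketched as follows. The toy types below are invented for illustration and are not the real CCThreadProxy machinery:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <queue>
#include <utility>

// Toy single-process stand-in for a thread's task queue.
struct ToyThread {
    std::queue<std::function<void()>> tasks;
    void postTask(std::function<void()> task) { tasks.push(std::move(task)); }
    void runPendingTasks()
    {
        while (!tasks.empty()) {
            tasks.front()();
            tasks.pop();
        }
    }
};

struct ToyLayerTreeHost {
    std::size_t contentsMemoryLimitBytes = 0; // main-thread-only state
};

// Called on the impl thread with the GPU memory manager's suggestion.
void setMemoryLimitOnImplThread(ToyThread& mainThread, ToyLayerTreeHost& host, std::size_t bytes)
{
    // Unsafe: writing host.contentsMemoryLimitBytes directly here would
    // touch main-thread state from the impl thread.
    // Safe: hand the value over as a task for the main thread to run.
    mainThread.postTask([&host, bytes] { host.contentsMemoryLimitBytes = bytes; });
}
```

The main-thread value only changes once the main thread drains its queue, which is exactly the hand-off the review comment asks for.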
Comment 11 Michal Mocny 2012-04-25 13:33:07 PDT
Comment on attachment 137976 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=137976&action=review

>> Source/WebCore/platform/graphics/chromium/cc/CCLayerTreeHost.cpp:358
>> +#if defined(OS_ANDROID)
> 
> any reason for #if android?

The reason is to only clobber allocation values on android (not sure if viewport size ever changes?..).
There is another #if guard inside the memory allocation changed callback so we only adjust the allocation limit on !android.

This is because GpuMemoryManager has a hardcoded large upper allocation limit, and I don't think that number is appropriate for android.
This will need to be fixed, but I wanted to delay that to another patch and leave android unharmed for now.

>> Source/WebCore/platform/graphics/chromium/cc/CCLayerTreeHost.cpp:360
>> +        m_contentsTextureManager->setMaxMemoryLimitBytes(TextureManager::defaultHighLimitBytes(viewportSize));
> 
> Isn't this clobbering the value we got from the callback? Can we make things clearer by restructuring the texture manager to have a setViewportSize method and a setMaxMemoryLimitBytes(). But, not a setPreferredMemoryLimitBytes --- that should be sealed in the class.
> 
> Then, the accessor can say "if m_maxMemoryBytes == 0, use the "default" accessors"
> 
> Does this make sense?

(see note above about only clobbering on android)

I am working on restructuring TextureManager, but a few things are a little nasty.

I wanted to confirm: should preferred memory limit also adjust when given a new allocation, or leave preferred as the default and just adjust max?

If former, should preferred == max (at which point, do we even bother distinguishing the two numbers?) or should preferred == 1/2max (this has always been the case until now, and it may make the code path more in line with android).
Comment 12 Nat Duca 2012-04-26 10:30:54 PDT
(In reply to comment #11)
> (From update of attachment 137976 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=137976&action=review
> 
> >> Source/WebCore/platform/graphics/chromium/cc/CCLayerTreeHost.cpp:358
> >> +#if defined(OS_ANDROID)
> > 
> > any reason for #if android?
> 
> The reason is to only clobber allocation values on android (not sure if viewport size ever changes?..).
> There is another #if guard inside the memory allocation changed callback so we only adjust the allocation limit on !android.
> 
> This is because GpuMemoryManager has a hardcoded large upper allocation limit, and I don't think that number is appropriate for android.
> This will need to be fixed, but I wanted to delay that to another patch and leave android unharmed for now.

Just have the GPU memory manager give out less #if android. I think this makes sense.

> 
> >> Source/WebCore/platform/graphics/chromium/cc/CCLayerTreeHost.cpp:360
> >> +        m_contentsTextureManager->setMaxMemoryLimitBytes(TextureManager::defaultHighLimitBytes(viewportSize));
> > 
> > Isn't this clobbering the value we got from the callback? Can we make things clearer by restructuring the texture manager to have a setViewportSize method and a setMaxMemoryLimitBytes(). But, not a setPreferredMemoryLimitBytes --- that should be sealed in the class.
> > 
> > Then, the accessor can say "if m_maxMemoryBytes == 0, use the "default" accessors"
> > 
> > Does this make sense?
> 
> (see note above about only clobbering on android)
> 
> I am working on restructuring TextureManager, but a few things are a little nasty.
> 
> I wanted to confirm: should preferred memory limit also adjust when given a new allocation, or leave preferred as the default and just adjust max?
> 
> If former, should preferred == max (at which point, do we even bother distinguishing the two numbers?) or should preferred == 1/2max (this has always been the case until now, and it may make the code path more in line with android).

I don't follow
Comment 13 Dana Jansens 2012-04-26 10:36:15 PDT
(In reply to comment #12)
> (In reply to comment #11)
> > (From update of attachment 137976 [details] [details])
> > View in context: https://bugs.webkit.org/attachment.cgi?id=137976&action=review
> > 
> > >> Source/WebCore/platform/graphics/chromium/cc/CCLayerTreeHost.cpp:358
> > >> +#if defined(OS_ANDROID)
> > > 
> > > any reason for #if android?
> > 
> > The reason is to only clobber allocation values on android (not sure if viewport size ever changes?..).
> > There is another #if guard inside the memory allocation changed callback so we only adjust the allocation limit on !android.
> > 
> > This is because GpuMemoryManager has a hardcoded large upper allocation limit, and I don't think that number is appropriate for android.
> > This will need to be fixed, but I wanted to delay that to another patch and leave android unharmed for now.
> 
> Just have the GPU memory manager give out less #if android. I think this makes sense.

+1

> > 
> > >> Source/WebCore/platform/graphics/chromium/cc/CCLayerTreeHost.cpp:360
> > >> +        m_contentsTextureManager->setMaxMemoryLimitBytes(TextureManager::defaultHighLimitBytes(viewportSize));
> > > 
> > > Isn't this clobbering the value we got from the callback? Can we make things clearer by restructuring the texture manager to have a setViewportSize method and a setMaxMemoryLimitBytes(). But, not a setPreferredMemoryLimitBytes --- that should be sealed in the class.
> > > 
> > > Then, the accessor can say "if m_maxMemoryBytes == 0, use the "default" accessors"
> > > 
> > > Does this make sense?
> > 
> > (see note above about only clobbering on android)
> > 
> > I am working on restructuring TextureManager, but a few things are a little nasty.
> > 
> > I wanted to confirm: should preferred memory limit also adjust when given a new allocation, or leave preferred as the default and just adjust max?
> > 
> > If former, should preferred == max (at which point, do we even bother distinguishing the two numbers?) or should preferred == 1/2max (this has always been the case until now, and it may make the code path more in line with android).
> 
> I don't follow

Our goal is to prepaint more aggressively. Make it == to max for now I think.
Comment 14 Michal Mocny 2012-04-26 17:54:07 PDT
Created attachment 139110 [details]
Patch
Comment 15 Michal Mocny 2012-04-30 14:18:49 PDT
Created attachment 139514 [details]
Patch
Comment 16 Michal Mocny 2012-04-30 14:30:21 PDT
I've simplified this patch to only those parts that are necessary to update the memory allocation limit received from the GpuMemoryManager.

I had started to refactor TextureManager to remove the concept of a "preferred" lower reclaim limit, however:
- On android, we still want a lower reclaim limit
- Viewport size is still needed to set the default size on android
- There are many unit tests which expect to have both values, and expect them to be respected (it's not just a matter of renaming).

For now, this patch uses setMemoryAllocationLimitBytes as a wrapper around the two existing setMax/setPreferred functions, and the use of those explicit functions is limited to the few places where exact control is still desired (limiting pre-painting on android).

On android, setPreferred will be 1/2 of max (to match current behavior).  In the future we could move this logic inside GpuMemoryManager, or better yet, remove the need for a preferred limit on android once we have TextureManager giving allocation size hints.
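The wrapper described above might look roughly like this. The class shape here is invented for the sketch; the real patch operates on TextureManager via CCLayerTreeHost, and the Android check is a compile-time #if rather than a constructor flag:

```cpp
#include <cassert>
#include <cstddef>

// Sketch: one entry point for the GpuMemoryManager's allocation, mapped
// onto the existing max/preferred pair. On Android the preferred
// (reclaim) limit stays at half of max to match current behavior;
// elsewhere preferred == max, so we prepaint more aggressively.
class ToyTextureManager {
public:
    explicit ToyTextureManager(bool isAndroid) : m_isAndroid(isAndroid) { }

    void setMemoryAllocationLimitBytes(std::size_t bytes)
    {
        setMaxMemoryLimitBytes(bytes);
        setPreferredMemoryLimitBytes(m_isAndroid ? bytes / 2 : bytes);
    }

    std::size_t maxMemoryLimitBytes() const { return m_maxMemoryLimitBytes; }
    std::size_t preferredMemoryLimitBytes() const { return m_preferredMemoryLimitBytes; }

private:
    // Explicit setters stay private: callers go through the wrapper.
    void setMaxMemoryLimitBytes(std::size_t bytes) { m_maxMemoryLimitBytes = bytes; }
    void setPreferredMemoryLimitBytes(std::size_t bytes) { m_preferredMemoryLimitBytes = bytes; }

    bool m_isAndroid;
    std::size_t m_maxMemoryLimitBytes = 0;
    std::size_t m_preferredMemoryLimitBytes = 0;
};
```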
Comment 17 Nat Duca 2012-04-30 17:31:54 PDT
Ok. I'm backed up on code reviews, but I think overall it makes sense to not touch the texture manager now because @epenner is working on a complete replacement. It's just wasted work to adjust TMv1 for now.
Comment 18 Nat Duca 2012-04-30 17:41:38 PDT
Comment on attachment 139514 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=139514&action=review

Also note, Android can regress slightly while we bring the memory manager online. They're not releasing continuously off of the upstream compositor tree. This enables us to do the right thing.

> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:397
> +    // FIXME: Read the actual TextureManager maxMemoryLimitBytes value, do not assume it is the default.

i don't get this fixme

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:111
> +#if defined(OS_ANDROID)

Put a comment explaining why this is here.

> Source/WebCore/platform/graphics/chromium/cc/CCSingleThreadProxy.cpp:286
> +    // DebugScopedSetMainThread main;

I agree this isn't easy to fix. Just assert main thread and put a comment saying this is called via a graphics context callback and thus hard to affect any other way.
Comment 19 Michal Mocny 2012-05-01 11:33:44 PDT
Comment on attachment 139514 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=139514&action=review

>> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:397
>> +    // FIXME: Read the actual TextureManager maxMemoryLimitBytes value, do not assume it is the default.
> 
> i don't get this fixme

The following line assumes the content texture manager max limit is the "default" value for a given viewport size.  Since the GpuMemoryManager may have upped the limit, it is a very conservative assumption.  I am not sure if we want to be giving render surface texture manager more memory here, but I suspect so.
At the moment, LRC only has access to contentsTextureAllocator which does not have that information -- I am not sure how to solve the problem, or if it is worth solving before TMv2 at all.
Comment 20 Michal Mocny 2012-05-01 14:51:13 PDT
Created attachment 139686 [details]
Patch
Comment 21 Dana Jansens 2012-05-01 15:01:44 PDT
Comment on attachment 139514 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=139514&action=review

>>> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:397
>>> +    // FIXME: Read the actual TextureManager maxMemoryLimitBytes value, do not assume it is the default.
>> 
>> i don't get this fixme
> 
> The following line assumes the content texture manager max limit is the "default" value for a given viewport size.  Since the GpuMemoryManager may have upped the limit, it is a very conservative assumption.  I am not sure if we want to be giving render surface texture manager more memory here, but I suspect so.
> At the moment, LRC only has access to contentsTextureAllocator which does not have that information -- I am not sure how to solve the problem, or if it is worth solving before TMv2 at all.

The render surface texture manager should not use the same limit as the contents texture manager once this lands, and I don't think the FIXME is exactly clear/correct about that.

Keeping it at some fixed value seems fine for now IMO. It's a bit of a separate issue to refactor how render surface memory is managed than how contents memory is.
Comment 22 Nat Duca 2012-05-02 00:01:42 PDT
Comment on attachment 139514 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=139514&action=review

Are you going to post an updated patch so we can get this landed?

>>>> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:397
>>>> +    // FIXME: Read the actual TextureManager maxMemoryLimitBytes value, do not assume it is the default.
>>> 
>>> i don't get this fixme
>> 
>> The following line assumes the content texture manager max limit is the "default" value for a given viewport size.  Since the GpuMemoryManager may have upped the limit, it is a very conservative assumption.  I am not sure if we want to be giving render surface texture manager more memory here, but I suspect so.
>> At the moment, LRC only has access to contentsTextureAllocator which does not have that information -- I am not sure how to solve the problem, or if it is worth solving before TMv2 at all.
> 
> The render surface texture manager should not use the same limit as the contents texture manager once this lands, and I don't think the FIXME is exactly clear/correct about that.
> 
> Keeping it at some fixed value seems fine for now IMO. It's a bit of a separate issue to refactor how render surface memory is managed than how contents memory is.

Remove the comment, then, and file a bug for "figure out how to manage render surface memory with gpu memory manager."

>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:111
>> +#if defined(OS_ANDROID)
> 
> Put a comment explaining why this is here.

Not done yet?
Comment 23 Nat Duca 2012-05-02 00:04:10 PDT
Comment on attachment 139686 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=139686&action=review

Oh, look. Sorry, didn't see this patch. Nice. LGTM with nit.

> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:397
> +    // FIXME: Read the actual TextureManager maxMemoryLimitBytes value, do not assume it is the default.

Just update this to point at a bug I suggested filing, or remove the comment completely. This comment really doesn't make sense.
Comment 24 Michal Mocny 2012-05-02 06:24:21 PDT
Created attachment 139798 [details]
Patch
Comment 25 Michal Mocny 2012-05-02 06:25:10 PDT
Done.

I know that many reviewers are out of town this week.  Is there anyone you can suggest for this?  I was told @kbr may be around.
Comment 26 Kenneth Russell 2012-05-02 13:25:26 PDT
Comment on attachment 139798 [details]
Patch

rs=me based on Nat's review.
Comment 27 WebKit Review Bot 2012-05-02 13:42:11 PDT
Comment on attachment 139798 [details]
Patch

Clearing flags on attachment: 139798

Committed r115881: <http://trac.webkit.org/changeset/115881>
Comment 28 WebKit Review Bot 2012-05-02 13:42:27 PDT
All reviewed patches have been landed.  Closing bug.