Bug 72202 - [chromium] Set texture limits as multiples of viewport size instead of hardcoded values
Status: RESOLVED FIXED
Alias: None
Product: WebKit
Classification: Unclassified
Component: New Bugs
Version: 528+ (Nightly build)
Hardware: Unspecified
Importance: P2 Normal
Assignee: James Robinson
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2011-11-11 18:58 PST by James Robinson
Modified: 2011-12-08 17:05 PST
CC List: 8 users

See Also:


Attachments
Patch (8.84 KB, patch)
2011-11-11 18:59 PST, James Robinson
Patch (8.68 KB, patch)
2011-11-30 20:51 PST, Eric Penner
Patch (9.71 KB, patch)
2011-12-02 14:48 PST, Eric Penner
Patch (9.44 KB, patch)
2011-12-02 14:58 PST, Eric Penner
Patch (9.43 KB, patch)
2011-12-02 15:41 PST, Eric Penner
Patch (13.83 KB, patch)
2011-12-02 17:41 PST, Eric Penner
Patch (14.12 KB, patch)
2011-12-02 18:50 PST, Eric Penner
Patch (15.58 KB, patch)
2011-12-05 15:07 PST, Eric Penner
Patch (15.22 KB, patch)
2011-12-05 22:45 PST, Eric Penner
Patch (33.79 KB, patch)
2011-12-08 10:46 PST, Eric Penner

Description James Robinson 2011-11-11 18:58:48 PST
[chromium] Set texture limits as multiples of viewport size instead of hardcoded values
Comment 1 James Robinson 2011-11-11 18:59:09 PST
Created attachment 114813 [details]
Patch
Comment 2 James Robinson 2011-11-11 18:59:26 PST
Thought for food, not fully baked.
Comment 3 Adrienne Walker 2011-11-14 17:48:29 PST
This is intriguing, but I'll admit I'm kind of dubious.  I'd worry about making one of our worst cases even worse (bad hardware, large display).  What use case in particular were you thinking about solving here?
Comment 4 James Robinson 2011-11-14 18:01:24 PST
Really small displays/windows and really big ones.  We have tiny ~ 600x400 popup windows and 2560x1600 fullscreen 30" tabs using the same memory cap, which seems pretty suboptimal.  For, say, scrolling it makes sense to express the cap in terms of how many screenfuls of tiles we want to keep around.  The other thought is that displays that can handle really large viewport sizes are probably associated with graphics cards that can handle more memory, whereas tiny viewports might mean tiny displays and tiny memories.

We could add a floor or a ceiling to these values as well - I'm not sure what the ideal shape of the curve is, but I think it should be a curve.
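A minimal sketch of the "curve" being discussed here, as an editor illustration only (the function name and the specific constants are invented for this example, not part of the patch): a budget that scales with viewport area, clamped between a floor and a ceiling.

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical helper: scale the texture budget with viewport area
// (in "screenfuls"), clamped to a floor and a ceiling.
size_t viewportScaledLimitBytes(int viewportWidth, int viewportHeight,
                                size_t screenfuls,
                                size_t floorBytes, size_t ceilingBytes)
{
    const size_t bytesPerPixel = 4; // RGBA
    size_t viewportBytes =
        static_cast<size_t>(viewportWidth) * viewportHeight * bytesPerPixel;
    size_t limit = screenfuls * viewportBytes;
    return std::min(ceilingBytes, std::max(floorBytes, limit));
}
```

With, say, a 32MB floor and a 64MB ceiling, a 600x400 popup lands on the floor while a 2560x1600 fullscreen tab hits the ceiling, matching the shape of curve described above.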
Comment 5 Eric Penner 2011-11-30 20:51:40 PST
Created attachment 117320 [details]
Patch
Comment 6 James Robinson 2011-11-30 21:04:47 PST
Comment on attachment 117320 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=117320&action=review

I like this idea a lot.  I'm really curious what Vangelis and Enne think about these limits.

Left a few comments inline for you as well, Eric.

> Source/WebCore/ChangeLog:8
> +        No new tests. (OOPS!)

You need to do something with this line or an svn presubmit hook will reject your change.  If you've tested this change manually, saying so here would be useful; otherwise, describe the state of testing for this code. Perhaps it's just a link to a bug saying that we should add some tests :)

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:38
> +    if (viewportSize.width() * viewportSize.height() <= 0)

with IntSize you can spell this "viewportSize.isEmpty()", which I think is a bit clearer

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:41
> +    // To avoid wasting memory round up assuming the viewport straddles 256x256 tiles on both sides.
> +    // Since most textures will need alpha, this assumes 32-bit textures (4 bytes-per-pixel).

The tile assumption makes a lot of sense for the contents texture manager since it manages mostly tiled layers, but it doesn't really make much sense for the render surface manager since it never deals with tiles. Do you think we should have different limits for these cases?

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:42
> +    return viewportMultiplier * 4 * 256 * 256 * ((viewportSize.width() + 255) / 256 + 1) * ((viewportSize.height() + 255) / 256 + 1);

I think rather than hardcoding 4 bytes/pixel here we should re-use the memoryUseBytes() function. That function currently also hardcodes 4 bytes/pixel, but at least that way we only hardcode it once and can update it if we ever start using other texture formats.
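For reference, a standalone sketch of what the rounding expression under review computes (the helper name is an editor invention; the patch inlines the arithmetic): round the viewport up to whole 256x256 tiles, plus one extra tile per dimension for straddling.

```cpp
#include <cstddef>

// Sketch of the patch's rounding: assume the viewport straddles 256x256
// tiles on both sides, so add one extra tile per dimension, at 4 bytes/pixel.
size_t roundedLimitBytes(size_t viewportMultiplier, int width, int height)
{
    const size_t tile = 256;
    const size_t bytesPerPixel = 4;
    size_t tilesWide = (width + tile - 1) / tile + 1;  // round up, +1 straddle
    size_t tilesHigh = (height + tile - 1) / tile + 1;
    return viewportMultiplier * bytesPerPixel * tile * tile * tilesWide * tilesHigh;
}
```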

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:49
> +#if OS(ANDROID)
> +    return 32 * 1024 * 1024;
> +#else

rather than hardcoding this, should memoryLimitBytes() take a max value? I'm assuming that you expect this to be lower than 8*viewport, correct?

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:70
> +    return memoryLimitBytes(1, viewportSize, 3);

I think we want to use a memory cap here as well even on desktop.

> Source/WebCore/platform/graphics/chromium/TextureManager.h:57
>      // Absolute maximum limit for texture allocations for this instance.

at this point having these all be statics isn't quite so useful. what about having the viewport size be the thing we set on the TextureManager and it can internally set these limits?
Comment 7 Adrienne Walker 2011-12-01 09:30:40 PST
Comment on attachment 117320 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=117320&action=review

>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:42
>> +    return viewportMultiplier * 4 * 256 * 256 * ((viewportSize.width() + 255) / 256 + 1) * ((viewportSize.height() + 255) / 256 + 1);
> 
> I think rather than hardcoding 4 bytes/pixel here we should re-use the memoryUseBytes() function. That function currently also hardcodes 4 bytes/pixel, but at least that way we only hardcode it once and can update it if we ever start using other texture formats.

No, no.  The TextureManager definitely supports different formats these days, so we should use the memoryUseBytes function.

Also, please don't hardcode the tile size.  If you're going to use it, then expose it from TiledLayerChromium (if it's not already).

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:60
> +#if OS(ANDROID)
> +    return memoryLimitBytes(3, viewportSize, 6);
> +#else
> +    return memoryLimitBytes(4, viewportSize, 64);
> +#endif

I don't really like the #ifdef warts and implicit limit setting.  If we want to have settings for the texture manager, maybe we should push them up to CCSettings, where a particular platform can set all of its custom settings at once, in one place, and then pass them in to the texture manager as explicit multipliers and limits. I'm sure other platforms might have similar needs in the future.

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:67
> +    // Hidden tabs should release all their textures on Android.

Don't we have a CCSetting for this?

>> Source/WebCore/platform/graphics/chromium/TextureManager.h:57
>>      // Absolute maximum limit for texture allocations for this instance.
> 
> at this point having these all be statics isn't quite so useful. what about having the viewport size be the thing we set on the TextureManager and it can internally set these limits?

Very much agreed.
Comment 8 Eric Penner 2011-12-01 13:04:11 PST
Comment on attachment 117320 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=117320&action=review

>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:38
>> +    if (viewportSize.width() * viewportSize.height() <= 0)
> 
> with IntSize you can spell this "viewportSize.isEmpty()", which I think is a bit clearer

Sounds good.

>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:41
>> +    // Since most textures will need alpha, this assumes 32-bit textures (4 bytes-per-pixel).
> 
> The tile assumption makes a lot of sense for the contents texture manager since it manages mostly tiled layers, but it doesn't really make much sense for the render surface manager since it never deals with tiles. Do you think we should have different limits for these cases?

This function doesn't have to be used for all TextureManagers. Doesn't that manager set its limits based on the remaining memory from another manager?

The way this rounds up definitely helps most in the tiled case, but the goal is to simply round up the limit to something reasonable based on the viewport (when the viewport is relevant).

>>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:42
>>> +    return viewportMultiplier * 4 * 256 * 256 * ((viewportSize.width() + 255) / 256 + 1) * ((viewportSize.height() + 255) / 256 + 1);
>> 
>> I think rather than hardcoding 4 bytes/pixel here we should re-use the memoryUseBytes() function. That function currently also hardcodes 4 bytes/pixel, but at least that way we only hardcode it once and can update it if we ever start using other texture formats.
> 
> No, no.  The TextureManager definitely supports different formats these days, so we should use the memoryUseBytes function.
> 
> Also, please don't hardcode the tile size.  If you're going to use it, then expose it from TiledLayerChromium (if it's not already).

Since the TextureManager supports different formats (even at the same time), I considered memoryUseBytes. But what is the best size when multiple formats are being used at once? I would also say that changes to memoryUseBytes shouldn't implicitly change this function unless we explicitly decide we want to also change the memory limit.

Regarding tile size, I got a bit carried away given the observation that 256x256 tiles are used in practice, but the goal here wasn't to use the tile size, but rather the viewport size. I don't think it's really useful to use the tile size in the general case. Perhaps just rounding to the nearest megabyte would be better if you want fewer magic numbers?

>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:49
>> +#else
> 
> rather than hardcoding this, should memoryLimitBytes() take a max value? I'm assuming that you expect this to be lower than 8*viewport, correct?

Do you mean the hard-coding of 32MB above?  In any case, having a max value is probably a good idea.

>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:60
>> +#endif
> 
> I don't really like the #ifdef warts and implicit limit setting.  If we want to have settings for the texture manager, maybe we should push them up to CCSettings, where a particular platform can set all of its custom settings at once, in one place, and then pass them in to the texture manager as explicit multipliers and limits. I'm sure other platforms might have similar needs in the future.

I'm not against moving it to CCSettings. This was just where the current hard-coded value was.

>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:67
>> +    // Hidden tabs should release all their textures on Android.
> 
> Don't we have a CCSetting for this?

Do you mean discardAllTextures? I think it's relatively new, but it looks like it does what we need.

>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:70
>> +    return memoryLimitBytes(1, viewportSize, 3);
> 
> I think we want to use a memory cap here as well even on desktop.

Do you mean a maximum in addition to the current curve, or just a straight constant?

>>> Source/WebCore/platform/graphics/chromium/TextureManager.h:57
>>>      // Absolute maximum limit for texture allocations for this instance.
>> 
>> at this point having these all be statics isn't quite so useful. what about having the viewport size be the thing we set on the TextureManager and it can internally set these limits?
> 
> Very much agreed.

Having these be non-static makes sense, especially since the TextureManager will eventually need to know these values internally. However, I don't really like the texture manager having a viewport member (what if we use this class for a different purpose that has no relationship to the viewport?).

Perhaps this class should just have setters/getters and the logic can be applied by the caller if that is what is desired.
Comment 9 Nat Duca 2011-12-01 21:05:44 PST
A few uncharacteristically non-grumpy thoughts from me on this patch...

a) I don't really think it's a problem to have #ifdef ANDROID stuff up in cc for now. I'd vastly prefer that to dealing with merge conflicts. That is to say, yes, long term these should go away, but as long as the ifdef is guarding a special case or a corner, I don't think we need to force the Android-specific feature out as a setting. Personally, I'd suggest this though:
int x;
#if OS_ANDROID
x = blah;
#else
x = unblah;
#endif


b) I'm not sure that putting texture limits on ccsettings makes sense. Again, this bakes into our settings api the idea that there is a per-CCLayerTreeHost memory budget. If we ever go to a more global memory mgmt strategy, which I think makes a TON of sense, then we're going to have to unwind this spaghetti.

When you add a+b, I think you end up in a place where this is by-and-large tolerable to me. Again, I think the overarching goals here are to avoid genuine spaghetti and prevent divergence.

The "release textures on going invisible" should definitely land where it is, and not behind a CCSetting. Again, over time this should just automagically work. A CCSetting is something that, from my pov, is not a way to hide device-specific configs but rather overall features of our compositor. "Small Footprint Mode" I'm fine with, for example.
Comment 10 James Robinson 2011-12-01 21:19:33 PST
Keep in mind that the only reason the various limits are in TextureManager.cpp at all is for namespacing and the fact that it needed to go somewhere.  All of the logic in TextureManager itself depends on the initial memory limit set in the constructor and on the setMemoryLimitBytes() API calls.  With that in mind...

(In reply to comment #8)
> >> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:41
> >> +    // Since most textures will need alpha, this assumes 32-bit textures (4 bytes-per-pixel).
> > 
> > The tile assumption makes a lot of sense for the contents texture manager since it manages mostly tiled layers, but it doesn't really make much sense for the render surface manager since it never deals with tiles. Do you think we should have different limits for these cases?
> 
> This function doesn't have to be used for all TextureManagers. Doesn't that manager set its limits based on the remaining memory from another manager?

TextureManagers do not set their limits.  CCLayerTreeHostImpl adjusts the limits for the render surface TextureManager based on the memory used for content textures:
http://codesearch.google.com/codesearch#OAMlx_jo-ck/src/third_party/WebKit/Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp&exact_package=chromium&q=m_renderSurfaceTextureManager&type=cs&l=307
 
> >>> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:42
> >>> +    return viewportMultiplier * 4 * 256 * 256 * ((viewportSize.width() + 255) / 256 + 1) * ((viewportSize.height() + 255) / 256 + 1);
> >> 
> >> I think rather than hardcoding 4 bytes/pixel here we should re-use the memoryUseBytes() function. That function currently also hardcodes 4 bytes/pixel, but at least that way we only hardcode it once and can update it if we ever start using other texture formats.
> > 
> > No, no.  The TextureManager definitely supports different formats these days, so we should use the memoryUseBytes function.
> > 
> > Also, please don't hardcode the tile size.  If you're going to use it, then expose it from TiledLayerChromium (if it's not already).
> 
> Since the TextureManager supports different formats (even at the same time), I considered memoryUseBytes. But what is the best size when multiple formats are being used at once? I would also say that changes to memoryUseBytes shouldn't implicitly change this function unless we explicitly decide we want to also change the memory limit.
> 
> Regarding tile size, I got a bit carried away given the observation that 256x256 tiles are used in practice, but the goal here wasn't to use the tile size, but rather the viewport size. I don't think it's really useful to use the tile size in the general case. Perhaps just rounding to the nearest megabyte would be better if you want fewer magic numbers?

I was thinking that the multiplier on the viewport size would be plenty of 'magic'. Keep in mind that we will pretty much always have some textures that aren't 256x256 tiles (scrollbars, sublayers, etc) so rounding to a tile size multiple will in practice pretty much never result in the memory limit fitting actual texture use exactly.

> > 
> > rather than hardcoding this, should memoryLimitBytes() take a max value? I'm assuming that you expect this to be lower than 8*viewport, correct?
> 
> Do you mean the hard-coding of 32MB above?  In any case, having a max value is probably a good idea.
> > 
> > I think we want to use a memory cap here as well even on desktop.
> 
> Do you mean a maximum in addition to the current curve, or just a straight constant?

I mean having a max value for these calculations.

> 
> >>> Source/WebCore/platform/graphics/chromium/TextureManager.h:57
> >>>      // Absolute maximum limit for texture allocations for this instance.
> >> 
> >> at this point having these all be statics isn't quite so useful. what about having the viewport size be the thing we set on the TextureManager and it can internally set these limits?
> > 
> > Very much agreed.
> 
> Having these be non-static makes sense, especially since the TextureManager will eventually need to know these values internally. However, I don't really like the texture manager having a viewport member (what if we use this class for a different purpose that has no relationship to the viewport?).
> 
> Perhaps this class should just have setters/getters and the logic can be applied by the caller if that is what is desired.

We have a setter on TextureManager for the memory limit.  We don't need more setters on TextureManager.

I think the current situation is that we want the TextureManager's limits to scale with the viewport size, modulo some floors and ceilings.  We also want to be able to adjust the scaling factor and possibly limits for some platforms.  We also need a way to evict all textures when going invisible but we already have that.

So with those goals in mind I think allowing for setting a viewport size multiplier and absolute memory cap on CCSettings should suffice.  We can set the reclaim limit to be 50% of the viewport and absolute caps, and the low limit to be 5% of the absolute cap.  This will match current behavior pretty closely for desktop platforms and provide enough flexibility for other platforms.
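One way to read this proposal as code, purely as an editor sketch (the struct and function are illustrative, not the actual CCSettings or TextureManager API): derive the three limits from a viewport multiplier and an absolute cap.

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical derivation of the limits proposed in comment 10: a viewport
// multiplier plus an absolute cap; reclaim limit at 50% of both the scaled
// viewport budget and the absolute cap; low limit at 5% of the absolute cap.
struct Limits { size_t high; size_t reclaim; size_t low; };

Limits limitsFor(size_t viewportBytes, size_t multiplier, size_t absoluteCapBytes)
{
    Limits l;
    l.high = std::min(multiplier * viewportBytes, absoluteCapBytes);
    l.reclaim = std::min(multiplier * viewportBytes / 2, absoluteCapBytes / 2);
    l.low = absoluteCapBytes / 20; // 5% of the absolute cap
    return l;
}
```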
Comment 11 James Robinson 2011-12-01 21:20:30 PST
(In reply to comment #9)
> b) I'm not sure that putting texture limits on ccsettings makes sense. Again, this bakes into our settings api the idea that there is a per-CCLayerTreeHost memory budget. If we ever go to a more global memory mgmt strategy, which I think makes a TON of sense, then we're going to have to unwind this spaghetti.
> 

What do you think about putting the limit stuff on CCLayerTreeHost instead of CCSettings?
Comment 12 Nat Duca 2011-12-01 21:26:41 PST
Actually, there's a better way to do the 'drop tiles on going invisible':
https://bugs.webkit.org/show_bug.cgi?id=72956

If we just commit an upstream #if OS(ANDROID) to the bit in chrome's feature_info.cc where it decides to advertise having a preserving frontbuffer, then you should get the benefits of that patch.

Or my memory serves me wrong. :)
Comment 13 Eric Penner 2011-12-02 14:48:42 PST
Created attachment 117699 [details]
Patch
Comment 14 James Robinson 2011-12-02 14:56:26 PST
Comment on attachment 117699 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=117699&action=review

> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:311
> +    reclaimLimit = std::max<size_t>(0, reclaimLimit - contentsMemoryUseBytes);

silly webkit style nit time - the WebKit style guide says for this you are supposed to instead add a

using namespace std;

declaration at the top of the .cpp and call max() without any std:: prefix. LayerRendererChromium.cpp already has the using declaration so all you need to do here is drop the std:: prefix

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:41
> +    IntSize resolution = viewportSize;

something seems amiss here - this local is created, and it's set in the #if OS(ANDROID) block, but it's never actually used. Did you mean to pass this to TextureMemoryUseBytes() on line 47?

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:47
> +    return std::min(maxLimitMegabytes * 1024 * 1024, viewportMultiplier * TextureManager::memoryUseBytes(viewportSize, GraphicsContext3D::RGBA));

same comment as above - drop the std:: and add a 'using namespace std' declaration at the top of the .cpp if this needs one

> Source/WebCore/platform/graphics/chromium/TextureManager.cpp:85
> +    // FIXME: Use non-zero once discardAllTextures or something similar is working on Android.

I think this is working today - if not could you find/file the appropriate bugs.webkit.org bug and put a link to it in this comment?
Comment 15 James Robinson 2011-12-02 14:56:55 PST
This approach works for me! I don't think some of the OS(ANDROID) logic will work as you intend, though.
Comment 16 Eric Penner 2011-12-02 14:58:39 PST
Created attachment 117701 [details]
Patch
Comment 17 James Robinson 2011-12-02 14:59:54 PST
(In reply to comment #16)
> Created an attachment (id=117701) [details]
> Patch

this patch still has std::'s
Comment 18 Eric Penner 2011-12-02 15:36:06 PST
(In reply to comment #17)
> (In reply to comment #16)
> > Created an attachment (id=117701) [details] [details]
> > Patch
> 
> this patch still has std::'s

The other patch was already en route when I got your comment. Uploading now.
Comment 19 Eric Penner 2011-12-02 15:41:13 PST
Created attachment 117705 [details]
Patch
Comment 20 WebKit Review Bot 2011-12-02 15:48:28 PST
Comment on attachment 117705 [details]
Patch

Attachment 117705 [details] did not pass chromium-ews (chromium-xvfb):
Output: http://queues.webkit.org/results/10728179
Comment 21 Eric Penner 2011-12-02 16:05:43 PST
Rebasing

(In reply to comment #20)
> (From update of attachment 117705 [details])
> Attachment 117705 [details] did not pass chromium-ews (chromium-xvfb):
> Output: http://queues.webkit.org/results/10728179
Comment 22 Eric Penner 2011-12-02 16:46:57 PST
As expected we now need:

a.) The reclaim limit added as a member to TextureManager, or
b.) The viewport added to TextureManager (by which it can calculate reclaim limit) in requestTexture.

I'm in favor of a.) but happy to do b.) if that is what is desired. 

(In reply to comment #21)
> Rebasing
> 
> (In reply to comment #20)
> > (From update of attachment 117705 [details] [details])
> > Attachment 117705 [details] [details] did not pass chromium-ews (chromium-xvfb):
> > Output: http://queues.webkit.org/results/10728179
Comment 23 Eric Penner 2011-12-02 17:41:07 PST
Created attachment 117718 [details]
Patch
Comment 24 James Robinson 2011-12-02 17:50:12 PST
Comment on attachment 117718 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=117718&action=review

Sorry, this has a bad math error that I should have caught sooner.

> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:308
> +    size_t maxLimit = max<size_t>(0, TextureManager::highLimitBytes(viewportSize()) - contentsMemoryUseBytes);
> +    size_t reclaimLimit = max<size_t>(0, TextureManager::reclaimLimitBytes(viewportSize()) - contentsMemoryUseBytes);

wait a second, are you sure this does what you expect it does? TextureManager::reclaimLimitBytes() returns a size_t, which is an unsigned type. contentsMemoryUseBytes is an unsigned type. The binary "-" operator is thus run as an unsigned operation. So if we had, for instance, a reclaimLimitBytes of 5 and a contentsMemoryUseBytes of 12, the output of this operation when size_t is 64 bits would be 2^64-7, and the max of 0 and 2^64-7 would be 2^64-7 - not zero.

The way this code was structured before was deliberate to avoid this overflow issue.  The fact that this bug is here also makes me wonder how much testing this logic has received.
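The wraparound described above is easy to reproduce; a minimal editor illustration of the unsafe pattern next to the branch-first form the code originally used:

```cpp
#include <algorithm>
#include <cstddef>

// Reproduces the pitfall: size_t subtraction wraps around before max() runs,
// so max<size_t>(0, limit - used) never actually clamps to zero.
size_t unsafeRemaining(size_t limit, size_t used)
{
    return std::max<size_t>(0, limit - used); // wraps when used > limit
}

// The safe form: branch before subtracting so the result can never wrap.
size_t safeRemaining(size_t limit, size_t used)
{
    return limit > used ? limit - used : 0;
}
```

For limit 5 and used 12, the unsafe form yields 2^N-7 (where N is the bit width of size_t) instead of 0.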
Comment 25 Eric Penner 2011-12-02 17:59:19 PST
(In reply to comment #24)
> (From update of attachment 117718 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=117718&action=review
> 
> Sorry, this has a bad math error that I should have caught sooner.
> 
> > Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:308
> > +    size_t maxLimit = max<size_t>(0, TextureManager::highLimitBytes(viewportSize()) - contentsMemoryUseBytes);
> > +    size_t reclaimLimit = max<size_t>(0, TextureManager::reclaimLimitBytes(viewportSize()) - contentsMemoryUseBytes);
> 
> wait a second, are you sure this does what you expect it does? TextureManager::reclaimLimitBytes() returns a size_t, which is an unsigned type. contentsMemoryUseBytes is an unsigned type. The binary "-" operator is thus run as an unsigned operation. So if we had, for instance, a reclaimLimitBytes of 5 and a contentsMemoryUseBytes of 12, the output of this operation when size_t is 64 bits would be 2^64-7, and the max of 0 and 2^64-7 would be 2^64-7 - not zero.
> 
> The way this code was structured before was deliberate to avoid this overflow issue.  The fact that this bug is here also makes me wonder how much testing this logic has received.

That's a great point. I think I had max<int> in a prior iteration. I still like the reduced line count, though I can change back to an if-statement.
Comment 26 James Robinson 2011-12-02 18:02:23 PST
It's tricky to do this sort of thing in C++. I originally wrote this with ternary statements but the lines got really long (especially since the variable names are long), which is why I went with the if/else blocks. If you find something that looks nicer I'm all for it.
Comment 27 James Robinson 2011-12-02 18:03:08 PST
Another possibility is to switch over to ssize_t everywhere (assuming that's supported - I assume it would be).
Comment 28 Eric Penner 2011-12-02 18:05:56 PST
(In reply to comment #27)
> Another possibility is to switch over to ssize_t everywhere (assuming that's supported - I assume it would be).

Wouldn't a simple cast to int work, assuming both are reasonably sized?
Comment 29 James Robinson 2011-12-02 18:08:54 PST
(In reply to comment #28)
> (In reply to comment #27)
> > Another possibility is to switch over to ssize_t everywhere (assuming that's supported - I assume it would be).
> 
> Wouldn't a simple cast to int work, assuming both are reasonably sized?

Yes, assuming we're careful about any subsequent comparisons or operations with unsigned variables (since C/C++ will not-so-usefully "promote" to unsigned in mixed expressions). How about we go back to the if/else or equivalent structure for now, and then we can do an examination of our types later on if it's deemed worthwhile?
Comment 30 Eric Penner 2011-12-02 18:50:23 PST
Created attachment 117733 [details]
Patch
Comment 31 WebKit Review Bot 2011-12-02 19:35:42 PST
Comment on attachment 117733 [details]
Patch

Attachment 117733 [details] did not pass chromium-ews (chromium-xvfb):
Output: http://queues.webkit.org/results/10733223
Comment 32 Eric Penner 2011-12-05 12:07:57 PST
I thought of a basic example to illustrate the single viewport multiplier causing problems:

For a small viewport of 270x270, we would need ~8.1X the viewport memory just to support one visible layer (9 256x256 tiles). If we set the memory limit large enough to support 3 layers (~24X), we would quickly hit the maximum limit for larger viewports. This problem diminishes as the viewport gets larger, but it is still biased even at more reasonable sizes. If anything I think the memory limit should be biased the opposite way.

The code that rounded up the viewport dimensions took care of this. Of course, other random sized textures could eat into the memory limit, but at least we start at the right amount of memory to support N viewports.

We could add tile size as an additional parameter to all the memory limit functions, but I'm wondering whether the plumbing required is justified.
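The ~8.1X figure above can be checked directly; an editor sketch of the arithmetic (270x270 viewport, 256x256 tiles straddled on both sides, 4 bytes/pixel, as in the discussion):

```cpp
// One visible layer on a small viewport straddling square tiles needs a
// (tilesPerSide x tilesPerSide) grid; compare that tile memory against the
// raw viewport memory to get "screenfuls of memory per visible layer".
double screenfulsPerLayer(int viewport, int tile)
{
    int tilesPerSide = (viewport + tile - 1) / tile + 1; // straddle both sides
    double tileBytes = 4.0 * tile * tile * tilesPerSide * tilesPerSide;
    double viewportBytes = 4.0 * viewport * viewport;
    return tileBytes / viewportBytes;
}
```

For a 270x270 viewport this gives 9 tiles (3x3), i.e. 2,359,296 bytes of tile memory against 291,600 bytes of viewport memory, or roughly 8.09 screenfuls for a single layer.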
Comment 33 James Robinson 2011-12-05 12:17:04 PST
What about just having fixed floors in addition to maximums?
Comment 34 Eric Penner 2011-12-05 12:24:37 PST
(In reply to comment #33)
> What about just having fixed floors in addition to maximums?

A fixed floor on the resolution would help, but only until we hit that floor size, at which point we will have the same bias (although diminished by the time we hit the floor).
Comment 35 James Robinson 2011-12-05 12:53:00 PST
Note that for tiles that are actually visible, the texture limit doesn't matter since they will be protected every frame.  It's only relevant for just-barely-offscreen tiles.

I don't feel we need to get this curve completely perfect. There are cases that will break pretty much any scheme we can come up with.
Comment 36 Eric Penner 2011-12-05 13:11:54 PST
TiledLayerChromium calls ManagedTexture::reserve() which can fail at the maximum memory limit. The small viewport case I described exceeds not only the reclaim limit but also the maximum limit, even though the viewport multiplier was 8X.

In any case, I'll change it to just clamp to min/max megabytes as it seems this is what you have in mind.

(In reply to comment #35)
> Note that for tiles that are actually visible, the texture limit doesn't matter since they will be protected every frame.  It's only relevant for just-barely-offscreen tiles.
> 
> I don't feel we need to get this curve completely perfect. There are cases that will break pretty much any scheme we can come up with.
Comment 37 Eric Penner 2011-12-05 15:07:21 PST
Created attachment 117945 [details]
Patch
Comment 38 James Robinson 2011-12-05 16:59:51 PST
Comment on attachment 117945 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=117945&action=review

> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:313
> +    if (maxLimit > contentsMemoryUseBytes)
> +        m_renderSurfaceTextureManager->setMaxMemoryLimitBytes(maxLimit - contentsMemoryUseBytes);
>      else
> +        m_renderSurfaceTextureManager->setMaxMemoryLimitBytes(0);

How could the contents memory use be bigger than the max memory limit? The high memory limit is called that because we should never exceed it. I think this should be an ASSERT() but not an actual code branch.

We do need this logic for the reclaim limit since the contents memory can (and will) exceed the reclaim limit if the currently visible contents take more memory than the reclaim limit.

> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:322
> +        m_renderSurfaceTextureManager->setPreferredMemoryLimitBytes(reclaimLimit - contentsMemoryUseBytes);
> +        m_renderSurfaceTextureManager->reduceMemoryToLimit(reclaimLimit - contentsMemoryUseBytes);
> +    } else {
> +        m_renderSurfaceTextureManager->setPreferredMemoryLimitBytes(0);
>          m_renderSurfaceTextureManager->reduceMemoryToLimit(0);

rather than having to make 2 calls here could we have reduceMemoryToLimit() set the preferred memory limit implicitly, or have setPreferredMemoryLimitBytes() implicitly reduce the memory limit to that size? it doesn't seem very useful to have two separate calls for this
Comment 39 Eric Penner 2011-12-05 17:26:03 PST
Comment on attachment 117945 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=117945&action=review

>> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:313
>> +        m_renderSurfaceTextureManager->setMaxMemoryLimitBytes(0);
> 
> How could the contents memory use be bigger than the max memory limit? The high memory limit is called that because we should never exceed it. I think this should be an ASSERT() but not an actual code branch.
> 
> We do need this logic for the reclaim limit since the contents memory can (and will) exceed the reclaim limit if the currently visible contents take more memory than the reclaim limit.

Hmm, TextureManager doesn't explicitly protect against going over the limit. However, after a search it looks like only one place calls protectTexture, and that checks the limit and fails. So it looks like an ASSERT is good. I'll change it.

>> Source/WebCore/platform/graphics/chromium/LayerRendererChromium.cpp:322
>>          m_renderSurfaceTextureManager->reduceMemoryToLimit(0);
> 
> rather than having to make 2 calls here could we have reduceMemoryToLimit() set the preferred memory limit implicitly, or have setPreferredMemoryLimitBytes() implicitly reduce the memory limit to that size? it doesn't seem very useful to have two separate calls for this

See CCLayerTreeHost::setViewport below.  At that point we know what the new preferred limit is, so we should update the member variable (so TextureManager::requestTexture will work correctly).  But at that time we do not want to reduce the memory to the preferred limit.
Comment 40 James Robinson 2011-12-05 17:33:04 PST
(In reply to comment #39)
> 
> See CCLayerTreeHost::setViewport below.  At that point we know what the new preferred limit is, so we should update the member variable (so TextureManager::requestTexture will work correctly).  But at that time we do not want to reduce the memory to the preferred limit.

Why don't we want to reduce memory use at that point?
Comment 41 James Robinson 2011-12-05 17:44:36 PST
(In reply to comment #39)
> 
> See CCLayerTreeHost::setViewport below.  At that point we know what the new preferred limit is, so we should update the member variable (so TextureManager::requestTexture will work correctly).  But at that time we do not want to reduce the memory to the preferred limit.

Why don't we want to reduce memory use at that point? That seems like a good thing to do.
Comment 42 Eric Penner 2011-12-05 18:25:28 PST
I'm assuming setViewport could happen at any time, so if it's okay to reduce to the reclaim limit at any time, then sure. I thought it was something we did at a fixed point in the frame.

(In reply to comment #41)
> (In reply to comment #39)
> > 
> > See CCLayerTreeHost::setViewport below.  At that point we know what the new preferred limit is, so we should update the member variable (so TextureManager::requestTexture will work correctly).  But at that time we do not want to reduce the memory to the preferred limit.
> 
> Why don't we want to reduce memory use at that point? That seems like a good thing to do.
Comment 43 James Robinson 2011-12-05 18:42:09 PST
(In reply to comment #42)
> I'm assuming SetViewport could happen at any time. So if it's okay to reduce to reclaim limit at any time then sure.

It is.
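[Editor's note: a sketch of the merged call agreed on above: setting the preferred limit also evicts down to it, since reclaiming at setViewport time is safe. The class and members here are illustrative stand-ins, not the actual TextureManager interface; real eviction would walk the LRU texture list rather than just overwriting a counter.]

```cpp
#include <cassert>
#include <cstddef>

class TextureManagerSketch {
public:
    // One call instead of two: record the new preferred limit (so later
    // texture requests see it) and immediately evict down to it.
    void setPreferredMemoryLimitBytes(size_t limit)
    {
        m_preferredLimitBytes = limit;
        reduceMemoryToLimit(limit);
    }
    void allocate(size_t bytes) { m_memoryUseBytes += bytes; }
    size_t memoryUseBytes() const { return m_memoryUseBytes; }
    size_t preferredMemoryLimitBytes() const { return m_preferredLimitBytes; }

private:
    void reduceMemoryToLimit(size_t limit)
    {
        // Stand-in for evicting least-recently-used textures until under limit.
        if (m_memoryUseBytes > limit)
            m_memoryUseBytes = limit;
    }
    size_t m_preferredLimitBytes = 0;
    size_t m_memoryUseBytes = 0;
};
```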
Comment 44 Eric Penner 2011-12-05 22:45:56 PST
Created attachment 117990 [details]
Patch
Comment 45 James Robinson 2011-12-06 00:37:43 PST
Comment on attachment 117990 [details]
Patch

Thanks for persisting. I think this looks great now - R=me.
Comment 46 WebKit Review Bot 2011-12-06 03:05:18 PST
Comment on attachment 117990 [details]
Patch

Clearing flags on attachment: 117990

Committed r102115: <http://trac.webkit.org/changeset/102115>
Comment 47 WebKit Review Bot 2011-12-06 03:05:24 PST
All reviewed patches have been landed.  Closing bug.
Comment 48 Eric Penner 2011-12-08 10:46:24 PST
Reopening to attach new patch.
Comment 49 Eric Penner 2011-12-08 10:46:26 PST
Created attachment 118420 [details]
Patch
Comment 50 Eric Penner 2011-12-08 10:48:18 PST
Ack! Ignore the last patch. Wrong bug number.