Bug 68479 - Hardware acceleration of W3C Filter Effects
Status: RESOLVED FIXED
Alias: None
Product: WebKit
Classification: Unclassified
Component: CSS
Version: 528+ (Nightly build)
Hardware: Unspecified
OS: Unspecified
Importance: P2 Normal
Assignee: Chris Marrin
Keywords: InRadar
Depends on: 68478 70099
Blocks: 68469 74786 74651
Reported: 2011-09-20 14:37 PDT by Dean Jackson
Modified: 2014-03-02 10:15 PST
CC: 20 users

Attachments
Patch to move FilterOperation(s) to platform/graphics (75.56 KB, patch)
2011-12-14 17:24 PST, Chris Marrin
Patch with implementation for Mac using CI (285.76 KB, patch)
2011-12-16 17:39 PST, Chris Marrin
Patch fixing builds (289.63 KB, patch)
2011-12-17 05:53 PST, Chris Marrin
Patch fixing builds (285.58 KB, patch)
2011-12-17 06:21 PST, Chris Marrin

Description Dean Jackson 2011-09-20 14:37:55 PDT
This is the tracking bug for hardware acceleration of Filter Effects. It's not clear yet how much will be platform specific. There might even be multiple implementations (e.g. using Core Image on OS X, or the WebGL/GraphicsLayer3D backend).
Comment 1 Radar WebKit Bug Importer 2011-09-20 14:38:06 PDT
<rdar://problem/10155927>
Comment 2 Dirk Schulze 2011-09-22 01:01:06 PDT
Hm, what about using OpenCL? I'm working on an OpenCL implementation right now. At the moment I'm testing it on SVG masking, and I plan to use it for SVG filters as well. This would be quite platform independent and is also usable on the next generation of mobile devices: many SoCs already support OpenCL, or at least their next generation will.

And we wouldn't end up with a totally unclear landscape of implementations, where it's hard to make global changes!
Comment 3 Simon Fraser (smfr) 2011-09-22 11:24:30 PDT
We can't expose OpenCL to arbitrary web content at this time. It has the same security issues as WebGL.
Comment 4 Dirk Schulze 2011-09-22 11:50:01 PDT
(In reply to comment #3)
> We can't expose OpenCL to arbitrary web content at this time. It has the same security issues as WebGL.

Hm, maybe this is a communication problem. I was not talking about WebCL or any other interaction between web applications and OpenCL. I really mean hardware-accelerating filter effects inside WebCore.
Comment 5 Simon Fraser (smfr) 2011-09-22 11:52:07 PDT
Sorry, I had my *CLs confused.
Comment 6 Vangelis Kokkevis 2011-09-22 17:46:29 PDT
Dean, will the presence of filters be taken into account in the RenderLayerCompositor and be propagated down to the GraphicsLayers? From a quick glance it appears that, at least for chromium, most of them can be implemented via shaders applied during compositing.
Comment 7 Dirk Schulze 2011-09-22 22:35:27 PDT
(In reply to comment #6)
> Dean, will the presence of filters be taken into account in the RenderLayerCompositor and be propagated down to the GraphicsLayers? From a quick glance it appears that, at least for chromium, most of them can be implemented via shaders applied during compositing.

This would only work for HTML. We don't have different layers in SVG right now. But that shouldn't block doing it in HTML, if possible and if it makes sense.
Comment 8 Simon Fraser (smfr) 2011-09-22 22:39:44 PDT
(In reply to comment #6)
> Dean, will the presence of filters be taken into account in the RenderLayerCompositor and be propagated down to the GraphicsLayers? From a quick glance it appears that, at least for chromium, most of them can be implemented via shaders applied during compositing.

We may use composited layers for filters and map the filters to something on a GraphicsLayer. We might only do this if the filter is being animated (like transforms). It's not obvious that a static filter should incur compositing overhead.
Comment 9 Dean Jackson 2011-09-29 18:41:31 PDT
(In reply to comment #8)
> We may use composited layers for filters and map the filters to something on a GraphicsLayer. We might only do this if the filter is being animated (like transforms). It's not obvious that a static filter should incur compositing overhead.

So I think the answer is yes to Vangelis. As Simon points out, it might not always be necessary to have a compositing layer, but I think we'll definitely have that option.
Comment 10 Chris Marrin 2011-11-04 07:31:16 PDT
(In reply to comment #9)

Vangelis, does your implementation use the PlatformCALayer interfaces? That's the level where I'm thinking of putting the filter interfaces.
Comment 11 Noam Rosenthal 2011-11-04 07:40:33 PDT
(In reply to comment #10)
> Vangelis, does your implementation use the PlatformCALayer interfaces? That's the level where I'm thinking of putting the filter interfaces.

Any reason not to put them directly in GraphicsLayer? It would be much more portable than PlatformCALayer (at least in Qt we don't use PlatformCALayer).
Comment 12 Chris Marrin 2011-11-09 11:14:28 PST
(In reply to comment #11)
> Any reason not to put them directly in GraphicsLayer? It would be much more portable than PlatformCALayer (at least in Qt we don't use PlatformCALayer).

So then you don't use GraphicsLayerCA? Currently there are two levels of abstraction. GraphicsLayer is very high level and lets you do any kind of implementation you want, but it's a lot of work. There will certainly be new Filter APIs exposed at this level. But they will be as generic as GraphicsLayer.

Then there's GraphicsLayerCA, which is still platform agnostic, but is geared toward "Core Animation-like" APIs. It expects a layer substructure under GraphicsLayer and allows for PlatformLayers to be passed in and parented to the tree. We use this, for instance, to do layer based video and WebGL. The details of the layer implementations are hidden in PlatformCALayer and PlatformCAAnimation. If you have an underlying API that looks like (or can be made to look like) Core Animation, then the implementation gets much simpler.

So my thinking right now is to add generic APIs for defining filters on GraphicsLayer, which are passed down to GraphicsLayerCA, with some sort of PlatformCAFilter to hide the actual implementation.
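[A minimal sketch of what such a generic GraphicsLayer-level filter API could look like. It matches the direction of the patch reviewed in comment 32, but the exact names here are illustrative, not the landed interface:]

    // GraphicsLayer.h (sketch). The boolean return lets a port report
    // whether it can apply the filters in the compositor; returning false
    // means the caller must fall back to rendering the filters in software.
    class GraphicsLayer {
    public:
        const FilterOperations& filters() const { return m_filters; }
        virtual bool setFilters(const FilterOperations& filters)
        {
            m_filters = filters;
            return true;
        }

    protected:
        FilterOperations m_filters;
    };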
Comment 13 Noam Rosenthal 2011-11-09 11:25:56 PST
(In reply to comment #12)
> Then there's GraphicsLayerCA, which is still platform agnostic, but is geared toward "Core Animation-like" APIs. It expects a layer substructure under GraphicsLayer and allows for PlatformLayers to be passed in and parented to the tree. We use this, for instance, to do layer based video and WebGL. The details of the layer implementations are hidden in PlatformCALayer and PlatformCAAnimation. If you have an underlying API that looks like (or can be made to look like) Core Animation, then the implementation gets much simpler.
> 
Right now we do all of that work ourselves, without PlatformCALayer. I believe parts of our port predate GraphicsLayerCA becoming platform agnostic.
We use TextureMapper, which is a "Core Animation-like" API that we have inside WebKit and is used by Qt and by some other small projects, so we don't need to do a lot of API translations/conversions (we basically iterate over the GraphicsLayer tree and render it).

> So my thinking right now is to add generic APIs for defining filters on GraphicsLayer, which are passed down to GraphicsLayerCA, with some sort of PlatformCAFilter to hide the actual implementation.

That sounds good. As long as we don't have to fiddle with RenderLayerCompositor and friends just to get the filter data we need, we'll be fine. That way we can later switch TextureMapper to reuse GraphicsLayerCA if it turns out to contain enough functionality that the duplication no longer makes sense.
Comment 14 Chris Marrin 2011-12-02 11:16:04 PST
(In reply to comment #13)
> Right now we do all of that work ourselves, without PlatformCALayer. I believe parts of our port predate GraphicsLayerCA becoming platform agnostic.
> We use TextureMapper, which is a "Core Animation-like" API that we have inside WebKit and is used by Qt and by some other small projects, so we don't need to do a lot of API translations/conversions (we basically iterate over the GraphicsLayer tree and render it).

> 
> > So my thinking right now is to add generic APIs for defining filters on GraphicsLayer, which are passed down to GraphicsLayerCA, with some sort of PlatformCAFilter to hide the actual implementation.
> 
> That sounds good. As long as we don't have to fiddle with RenderLayerCompositor and friends just to get the filter data we need, we'll be fine. That way we can later switch TextureMapper to reuse GraphicsLayerCA if it turns out to contain enough functionality that the duplication no longer makes sense.

Right now GraphicsLayerCA depends on PlatformCALayer and friends to abstract details about the underlying layer API. Maybe it's possible to interface TextureMapper at the PlatformCALayer level? But if you're using TextureMapper under GraphicsLayer, then maybe you should just leave it there. There will be new APIs coming into GraphicsLayer for filters. You can interface to them there. GraphicsLayerCA will take those APIs and construct PlatformCAFilter objects for filtering. The Mac implementation of that object will either use GLSL or CoreImage (or maybe a combination of the two).

So you can use whatever technique works best. If you want to move your TextureMapper logic into GraphicsLayerCA, you should really fit it into the PlatformCALayer interface. I'd really like to avoid a lot of ifdefs at that level to accommodate other mechanisms.

At the RenderLayer level, I will be adding logic that decides when to create a GraphicsLayer for filters. So whether you use GraphicsLayer or GraphicsLayerCA, you should be able to use all that logic.
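[A rough sketch of the PlatformCAFilter idea described above, with hypothetical names. The real abstraction would live next to PlatformCALayer/PlatformCAAnimation; on Mac it would build Core Image filters, other ports could emit GLSL:]

    // PlatformCAFilter.h (sketch). GraphicsLayerCA hands the generic
    // FilterOperations to this object, which hides how the platform
    // realizes them.
    class PlatformCAFilter {
    public:
        // False if any operation in the list cannot be expressed with the
        // platform's filter primitives, forcing the software path.
        static bool filtersCanBeComposited(const FilterOperations&);

        // Attach the filters to the underlying layer, e.g. by setting
        // CALayer.filters to an NSArray of CIFilters on Mac.
        virtual void setFiltersOnLayer(PlatformLayer*, const FilterOperations&) = 0;
    };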
Comment 15 Noam Rosenthal 2011-12-02 14:26:23 PST
(In reply to comment #14)
> Right now GraphicsLayerCA depends on PlatformCALayer and friends to abstract details about the underlying layer API. 

> Maybe it's possible to interface TextureMapper at the PlatformCALayer level?
Possible but painful; I'd have to look at PlatformCALayer more, but some of the structures it creates seem to fit well with CA yet aren't needed when we build our own custom tree, as in the case of TextureMapper.

> But if you're using TextureMapper under GraphicsLayer, then maybe you should just leave it there. There will be new APIs coming into GraphicsLayer for filters. You can interface to them there. GraphicsLayerCA will take those APIs and construct PlatformCAFilter objects for filtering. The Mac implementation of that object will either use GLSL or CoreImage (or maybe a combination of the two).
Cool. The more we can share those GLSL shaders, whether inside GraphicsLayer, in PlatformCAFilter, or as separate GLSL files, the better for us, since we're using GLSL in TextureMapper.

> 
> So you can use whatever technique works best. If you want to move your TextureMapper logic into GraphicsLayerCA, you should really fit it into the PlatformCALayer interface. I'd really like to avoid a lot of ifdefs at that level to accommodate other mechanisms.

Sure!
Though let's separate the TextureMapper / PlatformCALayer question from the CSS filters question... we can deal with them separately.
Once we see the proposed code for those APIs in GraphicsLayer and PlatformLayer, we'll know a lot better which path makes the most sense.
Comment 16 Stephen White 2011-12-02 14:40:02 PST
(In reply to comment #14)
> At the RenderLayer level, I will be adding logic that decides when to create a GraphicsLayer for filters. So whether you use GraphicsLayer or GraphicsLayerCA, you should be able to use all that logic.

One thing I don't like about this proposal is that it seems to require a compositing layer to be created any time filters are invoked (please correct me if I'm wrong).

Instead of tying filters to the compositing infrastructure, I'd like to propose another alternative (which I also proposed over on bug 68478): put the new image processing functions required for filters on GraphicsContext, or on a new class which GraphicsContext vends. Then each port can use its port-specific infrastructure for acceleration. All that's required beyond that is to create those ImageBuffers with the "Accelerated" flag set.

Here's what I said over on bug 68478:

I've noticed that some of the filters (e.g., FEOffset) use GraphicsContext and ImageBuffer directly.  Since several ports already have accelerated implementations of GraphicsContext, it's possible to trivially accelerate these filters today, simply by setting the "Accelerated" flag on ImageBuffer creation (I tried this for FEOffset, and it works).  Since ImageBuffer->ImageBuffer draws are also accelerated (at least by Chromium and possibly other ports), this sets up a direct GPU->GPU draw path for the filters.

Inspired by that, for the remaining filters, what do people think about placing the required new APIs (e.g., gaussianBlur) on GraphicsContext itself?  If that seems out of place on GraphicsContext (most of these operations would be essentially image-to-image, and would ignore most GraphicsContext state), an alternative would be to provide a platform-independent CoreImage-like API for other ports to implement.  For example, we could have a class, call it ImageContext, which wraps a CIContext as GraphicsContext wraps a CGContext.  This class could be vended by GraphicsContext, in the same way that CIContext can be vended by CGContext (at least, according to the docs; I've never tried it).

Among other things, I think this would allow the Mac port to do efficient CGImage -> CIImage conversions, and also allow other ports to implement the filters using other backends, with only platform-independent code in the platform/graphics/filters directory.  For example, the Skia backend has some image processing capabilities, but does not have a distinction between images used for image processing or drawing, so it would be nice to take advantage of that if possible.
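[Two small sketches of the ideas above. The first uses approximately the ImageBuffer::create() signature of this era, which already takes a RenderingMode; the second, ImageContext, is entirely hypothetical and never landed:]

    // (1) Ask for a GPU-backed buffer so filter draws stay on the GPU.
    OwnPtr<ImageBuffer> buffer = ImageBuffer::create(size, ColorSpaceDeviceRGB, Accelerated);

    // (2) A CoreImage-like class vended by GraphicsContext, wrapping a
    // CIContext the way GraphicsContext wraps a CGContext. Operations are
    // image-to-image and ignore most GraphicsContext state. Names are
    // illustrative only.
    class ImageContext {
    public:
        virtual PassRefPtr<Image> gaussianBlur(Image* input, float stdDeviation) = 0;
        virtual PassRefPtr<Image> colorMatrix(Image* input, const float matrix[20]) = 0;
    };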
Comment 17 Noam Rosenthal 2011-12-02 15:09:19 PST
(In reply to comment #16)
> One thing I don't like about this proposal is that it seems to require a compositing layer to be created any time filters are invoked (please correct me if I'm wrong).
> 
> Instead of tying filters to the compositing infrastructure, I'd like to propose another alternative (which I also proposed over on bug 68478): put the new image processing functions required for filters on GraphicsContext, or on a new class which GraphicsContext vends. Then each port can use its port-specific infrastructure for acceleration. All that's required beyond that is to create those ImageBuffers with the "Accelerated" flag set.
> 
For us this would be a mess with WebKit2... Since all the non-compositing content goes via a backing-store, it would be impossible to accelerate it.

But I don't see why we can't (theoretically) do both; have a compositing path and a non-compositing path (maybe only one of them supported in the beginning), and use the compositing settings flags to let the ports decide.

At least on mobile + WebKit2, I don't see how CSS shaders would work with any acceptable performance without being tied to accelerated compositing.
Comment 18 Dean Jackson 2011-12-02 15:23:07 PST
(In reply to comment #17)
> For us this would be a mess with WebKit2... Since all the non-compositing content goes via a backing-store, it would be impossible to accelerate it.
> 
> But I don't see why we can't (theoretically) do both; have a compositing path and a non-compositing path (maybe only one of them supported in the beginning), and use the compositing settings flags to let the ports decide.
> 
> At least on mobile + WebKit2, I don't see how CSS shaders would work with any acceptable performance without being tied to accelerated compositing.

I agree. I think we should try for both. What I've been calling the "software" path could maybe work the way Stephen suggests (software is a bad name, because even this approach can be done in hardware). The "hardware" path would use a compositing backend.

Even the compositing backend would be optional. You could imagine some effects only needing acceleration while animating.
Comment 19 Stephen White 2011-12-05 13:35:19 PST
(In reply to comment #18)
> I agree. I think we should try for both. What I've been calling the "software" path could maybe work the way Stephen suggests (software is a bad name, because even this approach can be done in hardware). The "hardware" path would use a compositing backend.
> 
> Even the compositing backend would be optional. You could imagine some effects only needing acceleration while animating.

That sounds good (and you're right, "software" is kind of a misnomer in that approach).  My main point here was to try to keep the implementation of filters agnostic of the compositor state, and if possible, simply hook up with it at the input and output of the filter graph.  I've put up a patch for a first step in this direction at https://bugs.webkit.org/show_bug.cgi?id=73842.
Comment 20 Chris Marrin 2011-12-14 17:24:11 PST
Created attachment 119345 [details]
Patch to move FilterOperation(s) to platform/graphics
Comment 21 Simon Fraser (smfr) 2011-12-14 17:28:08 PST
Comment on attachment 119345 [details]
Patch to move FilterOperation(s) to platform/graphics

View in context: https://bugs.webkit.org/attachment.cgi?id=119345&action=review

> Source/WebCore/ChangeLog:8
> +        Move FilterOperation(s) to platform/graphics/filters so it can be used to pass filter spec down

filter spec -> filter details?

> Source/WebCore/ChangeLog:12
> +        eventually need to be down in platform. But until we have a hardware accelerated version of 
> +        CSS Shaders that can wait.

I think you can remove the last sentence.

> Source/WebCore/platform/graphics/GraphicsLayer.h:313
> -
> +    

Unrelated change.
Comment 22 Early Warning System Bot 2011-12-14 17:57:41 PST
Comment on attachment 119345 [details]
Patch to move FilterOperation(s) to platform/graphics

Attachment 119345 [details] did not pass qt-ews (qt):
Output: http://queues.webkit.org/results/10900038
Comment 23 Chris Marrin 2011-12-15 08:43:58 PST
Landed the first patch in http://trac.webkit.org/changeset/102942. But that patch just moves FilterOperation(s), so the bug is still not finished.
Comment 24 Csaba Osztrogonác 2011-12-15 08:53:49 PST
Reopening, because it broke the Qt build.

Why did you land it when the Qt EWS was red? Did you want to break the Qt build deliberately?
Comment 25 Csaba Osztrogonác 2011-12-15 08:55:18 PST
Comment on attachment 119345 [details]
Patch to move FilterOperation(s) to platform/graphics

View in context: https://bugs.webkit.org/attachment.cgi?id=119345&action=review

Could you fix the build, please?

> Source/WebCore/Target.pri:3256
> +        platform/graphics/filters/FilterOperation.cpp

platform/graphics/filters/FilterOperation.cpp \
Comment 26 Chris Marrin 2011-12-15 08:59:45 PST
(In reply to comment #19)
> That sounds good (and you're right, "software" is kind of a misnomer in that approach).  My main point here was to try to keep the implementation of filters agnostic of the compositor state, and if possible, simply hook up with it at the input and output of the filter graph.  I've put up a patch for a first step in this direction at https://bugs.webkit.org/show_bug.cgi?id=73842.

In our discussions we have talked about 3 paths:

1) Today's pure-software model: modify a rendered image on the CPU.

2) An intermediate model where some lower-level code (like GraphicsContext) performs filter operations directly. In this model the content would have a RenderLayer, which gives it a separate "bitmap" into which GraphicsContext renders, but no corresponding GraphicsLayer. So GraphicsContext or something related would need some new filter API. In some models these bitmaps are simple CPU structures, so filtering would still be done on the CPU. In other models the bitmaps can be on the GPU, so hardware filtering can be used.

In the most painful case, the bitmaps are on the GPU but the filter is complex and has to be done on the CPU. So you'd need to copy the bits to the CPU, operate on them, and copy them back to the GPU. I don't relish the idea of this, and I'm not sure there are cases where we will need to do it. But with some of the complex SVG filter chains possible, I imagine we will.

Alternatively, some bitmaps could be marked as needing software filtering, and we could avoid putting those bitmaps onto the GPU in the first place. Then you'd have the slowdown of CPU rendering, but you'd avoid the copy. We'll probably have to support both paths and let the platforms decide, with some (possibly non-ideal) heuristics for when to do what.

3) The GraphicsLayer model. This is what I'm working on now. New API to GraphicsLayer and the logic to feed that down from the style.

We will probably switch between these at runtime depending on what's happening on the page. If you're animating the filters, we'll pop into layers and animate there. If we already have a GPU bitmap, we'll use an accelerated GraphicsContext operation. If the filtered element is static, maybe we'll do everything in software. It will all be up to the platforms. 

All this goodness is a theory, of course. For now I'm just getting the GraphicsLayer path going.
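[A toy sketch of the runtime switching described in the last two paragraphs. The real decision logic would live in RenderLayer/RenderLayerCompositor; none of these names are actual WebKit API:]

    enum FilterPath { SoftwarePath, AcceleratedContextPath, CompositedLayerPath };

    static FilterPath chooseFilterPath(bool filtersAreAnimating, bool contentIsOnGPU)
    {
        if (filtersAreAnimating)
            return CompositedLayerPath;    // path 3: animate in a GraphicsLayer
        if (contentIsOnGPU)
            return AcceleratedContextPath; // path 2: filter via accelerated GraphicsContext
        return SoftwarePath;               // path 1: static content, filter on the CPU
    }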
Comment 27 Chris Marrin 2011-12-15 09:10:55 PST
(In reply to comment #25)
> (From update of attachment 119345 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=119345&action=review
> 
> Could you fix the build, please?
> 
> > Source/WebCore/Target.pri:3256
> > +        platform/graphics/filters/FilterOperation.cpp
> 
> platform/graphics/filters/FilterOperation.cpp \

Yep, sorry, checked in as http://trac.webkit.org/changeset/102946
Comment 28 Chris Marrin 2011-12-16 17:39:34 PST
Created attachment 119705 [details]
Patch with implementation for Mac using CI
Comment 29 WebKit Review Bot 2011-12-16 17:41:55 PST
Attachment 119705 [details] did not pass style-queue:

Failed to run "['Tools/Scripts/check-webkit-style', '--diff-files', u'LayoutTests/ChangeLog', u'LayoutTests/css3..." exit_code: 1

Source/WebCore/ChangeLog:27:  Need whitespace between colon and description  [changelog/filechangedescriptionwhitespace] [5]
Source/WebCore/ChangeLog:30:  Need whitespace between colon and description  [changelog/filechangedescriptionwhitespace] [5]
Source/WebCore/ChangeLog:38:  Need whitespace between colon and description  [changelog/filechangedescriptionwhitespace] [5]
Source/WebCore/ChangeLog:48:  Need whitespace between colon and description  [changelog/filechangedescriptionwhitespace] [5]
Total errors found: 4 in 33 files


If any of these errors are false positives, please file a bug against check-webkit-style.
Comment 30 Gyuyoung Kim 2011-12-16 17:51:45 PST
Comment on attachment 119705 [details]
Patch with implementation for Mac using CI

Attachment 119705 [details] did not pass efl-ews (efl):
Output: http://queues.webkit.org/results/10903953
Comment 31 Early Warning System Bot 2011-12-16 17:54:37 PST
Comment on attachment 119705 [details]
Patch with implementation for Mac using CI

Attachment 119705 [details] did not pass qt-ews (qt):
Output: http://queues.webkit.org/results/10895968
Comment 32 Simon Fraser (smfr) 2011-12-16 18:05:25 PST
Comment on attachment 119705 [details]
Patch with implementation for Mac using CI

View in context: https://bugs.webkit.org/attachment.cgi?id=119705&action=review

> Source/WebCore/platform/graphics/GraphicsLayer.h:316
> +    const FilterOperations& filter() const { return m_filter; }
> +    virtual bool setFilter(const FilterOperations& filter) { m_filter = filter; return true; }

We could use "filters" plural everywhere.

A comment should say what the boolean return value to setFilter() means.

> Source/WebCore/platform/graphics/GraphicsLayer.h:452
> +    FilterOperations m_filter;

m_filters

> Source/WebCore/platform/graphics/ca/GraphicsLayerCA.cpp:1256
> +        
> +    }

Extra blank line.

> Source/WebCore/platform/graphics/ca/GraphicsLayerCA.cpp:1287
> +            updateFilter();

updateFilters()

> Source/WebCore/platform/graphics/ca/GraphicsLayerCA.h:348
> +        FilterChanged = 1 << 25,

FiltersChanged

> Source/WebCore/platform/graphics/ca/mac/PlatformCALayerMac.mm:707
> +    // Assume filterCanBeComposited was called and it returned true

Can you ASSERT that here too?

> Source/WebCore/platform/graphics/ca/mac/PlatformCALayerMac.mm:714
> +            // FIXME: For now assume drop shadow is the last filter, put it on the layer

Should file a bug to fix this.

> Source/WebCore/platform/graphics/ca/mac/PlatformCALayerMac.mm:769
> +            //for ( ; amount < 0; amount += 360)
> +            //    ;
> +            //for ( ; amount >= 360; amount -= 360)
> +            //    ;
> +            amount *= M_PI * 2 / 360;
> +            //amount -= M_PI;

Commented out code here!

> Source/WebCore/rendering/RenderLayer.cpp:272
> +bool RenderLayer::rendersFilter() const

'renders' is a bit vague here. Elsewhere we use the "paintsWith" terminology.
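[For reference, the flagged hue-rotate snippet appears to be converting the CSS angle from degrees to the radians Core Image expects; a cleaned-up reconstruction, not the committed fix:]

    // CIHueAdjust takes inputAngle in radians; hue-rotate() is specified in degrees.
    amount = amount * M_PI / 180; // same as amount *= M_PI * 2 / 360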
Comment 33 Gustavo Noronha (kov) 2011-12-17 01:15:36 PST
Comment on attachment 119705 [details]
Patch with implementation for Mac using CI

Attachment 119705 [details] did not pass gtk-ews (gtk):
Output: http://queues.webkit.org/results/10932026
Comment 34 Chris Marrin 2011-12-17 05:53:52 PST
Created attachment 119721 [details]
Patch fixing builds
Comment 35 Chris Marrin 2011-12-17 06:21:21 PST
Created attachment 119722 [details]
Patch fixing builds
Comment 36 Eric Seidel (no email) 2011-12-19 10:45:50 PST
Comment on attachment 119345 [details]
Patch to move FilterOperation(s) to platform/graphics

Cleared Simon Fraser's review+ from obsolete attachment 119345 [details] so that this bug does not appear in http://webkit.org/pending-commit.
Comment 37 Chris Marrin 2011-12-19 11:33:11 PST
Landed in http://trac.webkit.org/changeset/103148