Bug 213672

Summary: Infrastructure Work for Integrating CoreImage for Accelerated CSS/SVG Filter Rendering
Product: WebKit
Reporter: frankhome61
Component: CSS
Assignee: frankhome61
Status: RESOLVED FIXED
Severity: Normal
CC: changseok, dino, esprehn+autocc, ews-watchlist, fmalita, frankhome61, fred.wang, glenn, gyuyoung.kim, kondapallykalyan, mmaxfield, pdr, sabouhallawa, sam, schenney, sergio, simon.fraser, webkit-bug-importer
Priority: P2
Keywords: InRadar
Version: Other
Hardware: Unspecified
OS: Unspecified
See Also: https://bugs.webkit.org/show_bug.cgi?id=215958

frankhome61
Reported 2020-06-26 19:33:08 PDT
This is infrastructure work to integrate CoreImage into the current code in order to accelerate CSS and SVG filter rendering. The general idea is to have each FilterEffect own a CIImage object; at the output of the last filter effect, we render the CIImage into an IOSurface, use the IOSurface to create an ImageBuffer, and that ImageBuffer is eventually painted onto the screen.
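The intended data flow can be sketched as a framework-free toy model. All names below (PlatformImage, SurfaceBackedBuffer, runFilterChain) are hypothetical stand-ins for CIImage, an IOSurface-backed ImageBuffer, and the filter chain — not the actual WebKit or CoreImage types:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Toy stand-ins for CIImage / an IOSurface-backed ImageBuffer,
// used only to illustrate the data flow described above.
struct PlatformImage { std::string description; };
struct SurfaceBackedBuffer { std::string contents; };

// Each filter effect transforms its input image and owns the result,
// mirroring "each FilterEffect owns a CIImage object".
struct FilterEffect {
    std::string name;
    std::shared_ptr<PlatformImage> m_result;

    const PlatformImage& apply(const PlatformImage& input)
    {
        m_result = std::make_shared<PlatformImage>(PlatformImage { input.description + " -> " + name });
        return *m_result;
    }
};

// Only the *last* effect's image is rendered into a surface-backed buffer,
// which is what would eventually be painted onto the screen.
SurfaceBackedBuffer renderToSurface(const PlatformImage& image)
{
    return SurfaceBackedBuffer { image.description };
}

SurfaceBackedBuffer runFilterChain(std::vector<FilterEffect>& chain, const PlatformImage& source)
{
    PlatformImage current = source;
    for (auto& effect : chain)
        current = effect.apply(current);
    return renderToSurface(current);
}
```

The point of the sketch is that intermediate results stay in the platform-image domain; only the final result crosses back into a paintable buffer.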
Attachments
Patch (35.59 KB, patch)
2020-07-02 14:32 PDT, frankhome61
no flags
Patch (35.65 KB, patch)
2020-07-02 14:41 PDT, frankhome61
no flags
Patch (35.86 KB, patch)
2020-07-10 01:09 PDT, frankhome61
no flags
Patch (40.99 KB, patch)
2020-07-10 10:25 PDT, frankhome61
no flags
Patch (40.95 KB, patch)
2020-07-10 11:35 PDT, frankhome61
no flags
Patch (42.06 KB, patch)
2020-07-10 11:49 PDT, frankhome61
no flags
Patch (41.01 KB, patch)
2020-07-10 12:05 PDT, frankhome61
no flags
Patch (41.18 KB, patch)
2020-07-10 12:19 PDT, frankhome61
no flags
Patch (41.30 KB, patch)
2020-07-10 12:38 PDT, frankhome61
no flags
Patch (41.23 KB, patch)
2020-07-10 15:39 PDT, frankhome61
no flags
Patch (44.63 KB, patch)
2020-07-13 11:29 PDT, frankhome61
no flags
Patch (40.24 KB, patch)
2020-07-13 11:43 PDT, frankhome61
no flags
Patch (34.98 KB, patch)
2020-07-16 12:17 PDT, frankhome61
no flags
Patch (1.99 MB, patch)
2020-07-16 19:58 PDT, frankhome61
no flags
Patch (37.79 KB, patch)
2020-07-16 20:06 PDT, frankhome61
no flags
Patch (38.09 KB, patch)
2020-07-16 20:11 PDT, frankhome61
no flags
Patch (38.02 KB, patch)
2020-07-16 20:53 PDT, frankhome61
no flags
Patch (53.80 KB, patch)
2020-07-17 15:06 PDT, frankhome61
no flags
Patch - resolved wpe's merge conflict (52.79 KB, patch)
2020-07-17 15:26 PDT, frankhome61
no flags
Patch (52.82 KB, patch)
2020-07-17 16:35 PDT, frankhome61
no flags
Patch (55.44 KB, patch)
2020-07-21 12:38 PDT, frankhome61
no flags
Patch (55.45 KB, patch)
2020-07-21 12:46 PDT, frankhome61
no flags
Patch (55.44 KB, patch)
2020-07-21 12:55 PDT, frankhome61
simon.fraser: review+
Prepare to commit (60.31 KB, patch)
2020-07-22 12:46 PDT, frankhome61
no flags
Addressed new comments; ready to commit (60.27 KB, patch)
2020-07-23 10:41 PDT, frankhome61
ews-feeder: commit-queue-
Patch - Added reviewer name, ready for commit (60.27 KB, patch)
2020-07-23 15:48 PDT, frankhome61
no flags
Radar WebKit Bug Importer
Comment 1 2020-06-26 19:34:03 PDT
frankhome61
Comment 2 2020-07-02 14:32:04 PDT
frankhome61
Comment 3 2020-07-02 14:41:33 PDT
Myles C. Maxfield
Comment 4 2020-07-02 14:47:16 PDT
Comment on attachment 403396 [details] Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=403396&action=review

> Source/WebCore/ChangeLog:10
> +    1) adding a public API to create an ImageBuffer with an IOSurface

I think "public API" isn't quite what you mean - that phrase usually indicates that 3rd party iOS apps (from the App Store) linking with WebKit.framework can call it.
Simon Fraser (smfr)
Comment 5 2020-07-02 14:52:17 PDT
Comment on attachment 403396 [details] Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=403396&action=review

> Source/WebCore/platform/graphics/RenderingMode.h:46
> +    CoreImageAccelerated

I don't think this is the right place to plug in this mode. RenderingMode is used in lots of places that are not just about filters, and you don't consider the combinations of CoreImage/Remote.

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.h:2
> +* Copyright (C) 2014-2020 Apple Inc. All rights reserved.

Just 2020.

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.mm:37
> +    auto ciContext = std::unique_ptr<CIContextWrapper>(new CIContextWrapper());

make_unique()

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.mm:40
> +    if (!ciContext)
> +        return nullptr;
> +    return ciContext;

No need for the if (!ciContext); it doesn't affect the return value.

> Source/WebCore/platform/graphics/filters/FilterEffect.cpp:200
> +    // Rendering via CoreImage won't use the software path, therefore
> +    // We use a different CoreImage specific rendering path
> +    if (m_filter.renderingMode() == RenderingMode::CoreImageAccelerated) {
> +        platformApplyCoreImageAccelerated();
> +        return;
> +    }

Is apply() going to get called for every filter in the chain? With CoreImage, I would have expected us to build up the CI filter chain, and then just render the result. I don't think we'd need to call anything per-filter.

> Source/WebCore/rendering/CSSFilter.cpp:435
> +std::unique_ptr<IOSurface> CSSFilter::createIOSurface()
> +{
> +    auto lastEffect = m_effects.last();
> +    FloatSize clampedSize = ImageBuffer::clampedSize(lastEffect->absolutePaintRect().size());
> +    IntSize backendSize = ImageBufferIOSurfaceBackend::calculateBackendSize(clampedSize, lastEffect->filter().filterScale());
> +    if (backendSize.isEmpty())
> +        return nullptr;
> +    CGColorSpaceRef cgColorSpace = cachedCGColorSpace(lastEffect->resultColorSpace());
> +    return IOSurface::create(backendSize, backendSize, cgColorSpace);
> +}
> +
> +std::unique_ptr<ImageBuffer> CSSFilter::createIOSurfaceBackedImageBufferResult(std::unique_ptr<IOSurface>&& surface)
> +{
> +    auto lastEffect = m_effects.last();
> +    FloatSize clampedSize = ImageBuffer::clampedSize(lastEffect->absolutePaintRect().size());
> +    IntSize backendSize = ImageBufferIOSurfaceBackend::calculateBackendSize(clampedSize, lastEffect->filter().filterScale());
> +    if (backendSize.isEmpty())
> +        return nullptr;
> +
> +    return ImageBuffer::create(clampedSize, backendSize, lastEffect->filter().filterScale(), lastEffect->resultColorSpace(), WTFMove(surface));
> +}

These seem to be unused in this patch, but why do you need to create an ImageBuffer by passing in an IOSurface? An accelerated ImageBuffer already knows how to create its own IOSurface.

> Source/WebCore/rendering/RenderLayer.cpp:6963
> +#if USE(CORE_IMAGE)
> +    if (renderer().settings().coreImageAcceleratedFilterRenderEnabled()) {
> +        m_filters->buildFilter(renderer(), page().deviceScaleFactor(), RenderingMode::CoreImageAccelerated);
> +        return;
> +    }
> +#endif

There should be no changes in RenderLayer if you factor this code correctly. Your changes should live in RenderLayerFilters.
Said Abou-Hallawa
Comment 6 2020-07-02 16:21:20 PDT
Comment on attachment 403398 [details] Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=403398&action=review

> Source/WebCore/rendering/CSSFilter.cpp:434
> +    auto lastEffect = m_effects.last();
> +    FloatSize clampedSize = ImageBuffer::clampedSize(lastEffect->absolutePaintRect().size());
> +    IntSize backendSize = ImageBufferIOSurfaceBackend::calculateBackendSize(clampedSize, lastEffect->filter().filterScale());
> +    if (backendSize.isEmpty())
> +        return nullptr;
> +
> +    return ImageBuffer::create(clampedSize, backendSize, lastEffect->filter().filterScale(), lastEffect->resultColorSpace(), WTFMove(surface));

Why do we have to create the IOSurface outside of the ImageBuffer backend? Why can't this function be written like this?

return ImageBuffer::create(ImageBuffer::clampedSize(lastEffect->absolutePaintRect().size()), RenderingMode::Accelerated, lastEffect->filter().filterScale(), lastEffect->resultColorSpace());
Dean Jackson
Comment 7 2020-07-02 19:15:08 PDT
Comment on attachment 403398 [details] Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=403398&action=review

> Source/WebCore/ChangeLog:10
> +    1) adding a public API to create an ImageBuffer with an IOSurface

As others have suggested, I think "adding new API" is a better description.

> Source/WebCore/ChangeLog:21
> +    * SourcesCocoa.txt: added two files, CIContextWrapper.mm and CIImageWrapper.mm
> +    Wrapper classes for CIImage and CIContext

Generally we don't name our wrapper classes so similarly to the things being wrapped. For example, in WebGPU we eventually use a MTLDevice, but we call the wrapper class GPUDevice (with a GPUDevice.h, GPUDevice.cpp and GPUDeviceMetal.mm). So I would suggest coming up with a different prefix for CI stuff. Maybe ImageEffectContext and ImageEffectImage? Which would then have an ImageEffectContextCI.mm for the bits that are talking to CI. (The header file would still need a reference to the CIContext, but you'd guard that with #if USE(CoreImage) or something.)

> Source/WebCore/ChangeLog:26
> +    that creates an ImageBufferIOSurfaceBackend with an IOSurface

This isn't part of your patch, but the word "Backend" always confuses me for ImageBuffers. It came up in the GPU process work and I can never work out if it means the buffer is here or somewhere else. Anyway.... 🤷‍♀️

> Source/WebCore/ChangeLog:28
> +    (WebCore::ImageBuffer::create): Added a public API that could create an ImageBuffer

Remove "public".

> Source/WebCore/platform/graphics/cg/ImageBufferIOSurfaceBackend.cpp:106
> +    RetainPtr<CGContextRef> cgContext = surface->ensurePlatformContext(nullptr);
> +    if (!cgContext)
> +        return nullptr;

Why do we have to do this?

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.h:33
> +class CIContextWrapper final {

Nit: Add a blank line before this.

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.h:37
> +    WEBCORE_EXPORT static std::unique_ptr<CIContextWrapper> create();
> +    WEBCORE_EXPORT ~CIContextWrapper();

Why do these need to be exported? Also, if Darin is listening, I believe he prefers that static create methods are for constructing Ref or RefPtr objects, and that for unique_ptrs it's better to have a normal constructor and a makeUnique at the call site. He can correct me if that's wrong. Either way, it seems we are very inconsistent with that, so it's ok to leave it as a ::create.

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.mm:28
> +#include "config.h"

#import

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.mm:31
> +#import "CIContextWrapper.h"
> +
> +#import <CoreImage/CIContext.h>

Nit: Remove this blank line.

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.mm:40
> +    auto ciContext = std::unique_ptr<CIContextWrapper>(new CIContextWrapper());
> +    if (!ciContext)
> +        return nullptr;
> +    return ciContext;

I think you can replace all this with a single line: return makeUnique<CIContextWrapper>()

> Source/WebCore/platform/graphics/coreimage/CIContextWrapper.mm:47
> +    m_ciContext = [CIContext context];

Do this in the initialization list, i.e. m_ciContext([CIContext context])

> Source/WebCore/platform/graphics/coreimage/CIImageWrapper.h:40
> +    WEBCORE_EXPORT static std::unique_ptr<CIImageWrapper> create(CIImage*);
> +    WEBCORE_EXPORT ~CIImageWrapper();

Do these need to be exported?

> Source/WebCore/platform/graphics/coreimage/CIImageWrapper.h:42
> +    void renderToIOSurface(CIContext*, IOSurfaceRef, FloatRect, CGColorSpaceRef);

Why did you choose this method to exist on the Image class, rather than be a method on the Context class?

> Source/WebCore/platform/graphics/coreimage/CIImageWrapper.mm:25
> +*/
> +#if USE(CORE_IMAGE)

Nit: Needs a blank line between these.

> Source/WebCore/platform/graphics/coreimage/CIImageWrapper.mm:32
> +#include "config.h"
> +#include "CIImageWrapper.h"
> +
> +#include <CoreImage/CIContext.h>
> +#include <CoreImage/CoreImage.h>
> +#include <Foundation/Foundation.h>

#import

> Source/WebCore/platform/graphics/filters/FilterEffect.h:69
> +    CIImageWrapper* coreImageResult();

This is a good example of why I think the implementation/wrapper classes should be called ImageEffectFoo. A FilterEffect object doesn't ever need to know about CoreImage (other than the #USE). It should be almost opaque.

> Source/WebCore/platform/graphics/filters/FilterEffect.h:190
> +    // as of now we are using the software path because non of the filters have been implemented

Nit: "non" typo; start with an uppercase letter, and end with a period.

> Source/WebCore/rendering/CSSFilter.cpp:440
> +CIImageWrapper* CSSFilter::coreImageOutput() const
> +{
> +    return m_effects.last()->coreImageResult();
> +}

And then this would become ImageEffectImage CSSFilter::imageEffectOutput() or something.

> Source/WebCore/rendering/CSSFilter.h:75
> +    std::unique_ptr<IOSurface> createIOSurface();
> +    std::unique_ptr<ImageBuffer> createIOSurfaceBackedImageBufferResult(std::unique_ptr<IOSurface>&&);
> +    void setSourceGraphicUseCoreImage();
> +    CIImageWrapper* coreImageOutput() const;

All these methods seem very specific to the implementation. Is there a way they can be hidden from the CSSFilter.h interface? E.g. all these things seem like operations that would be done when the filter is processing, and would take the CSSFilter as a parameter. Also, I don't think you are using setSourceGraphicUseCoreImage().

> Source/WebCore/rendering/RenderLayerFilters.cpp:191
> +    // Applying filters will have CoreImage specific paths

Nit: End with a period.
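Two of the review points above — collapsing the null-checked create() into a single makeUnique call, and moving member setup into the constructor's initializer list — can be sketched with standard C++ (std::make_unique plays the role of WebKit's makeUnique; ContextWrapper and the Context member are hypothetical stand-ins):

```cpp
#include <cassert>
#include <memory>

// Toy stand-in for the wrapped CIContext.
struct Context { int handle = 1; };

class ContextWrapper {
public:
    // Member initialized in the initializer list, not in the constructor body.
    ContextWrapper()
        : m_context(Context { })
    {
    }

    // make_unique never returns null on success (it throws on allocation
    // failure), so the "if (!ptr) return nullptr;" dance adds nothing.
    static std::unique_ptr<ContextWrapper> create()
    {
        return std::make_unique<ContextWrapper>();
    }

    const Context& context() const { return m_context; }

private:
    Context m_context;
};
```

The same reasoning applies to WebKit's makeUnique: the null check after construction is dead code, which is why the reviewer suggests a one-line body.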
Sam Weinig
Comment 8 2020-07-03 07:53:45 PDT
The fact that the abstraction is leaking out of platform/ and into rendering/ tells me this likely isn't factored the right way. CoreImage usage should be an implementation detail of a platform abstraction, not something the higher layers of the stack need to consider directly.
frankhome61
Comment 9 2020-07-10 01:09:55 PDT
frankhome61
Comment 10 2020-07-10 10:25:14 PDT
frankhome61
Comment 11 2020-07-10 11:35:12 PDT
frankhome61
Comment 12 2020-07-10 11:49:12 PDT
frankhome61
Comment 13 2020-07-10 12:05:22 PDT
frankhome61
Comment 14 2020-07-10 12:19:51 PDT
frankhome61
Comment 15 2020-07-10 12:38:47 PDT
frankhome61
Comment 16 2020-07-10 15:39:11 PDT
Aakash Jain
Comment 17 2020-07-11 05:43:27 PDT
We just added tvOS and watchOS EWS bubbles. This patch seems to be failing on tvOS and watchOS. Please have a look.
Sam Weinig
Comment 18 2020-07-11 18:49:51 PDT
(In reply to Sam Weinig from comment #8)
> The fact that the abstraction is leaking out of platform/ and into
> rendering/ tells me this likely isn't factored the right way. CoreImage
> usage should be an implementation detail of a platform abstraction, not
> something the higher layers of the stack need to consider directly.

This remains an issue in the latest version of the change. Perhaps we need to step back here and write up the design / class hierarchy / constraints before putting this to code?
Simon Fraser (smfr)
Comment 19 2020-07-13 10:34:03 PDT
Comment on attachment 404011 [details] Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=404011&action=review

I think the main thing you need to think about is the CSSFilter stuff.

> Source/WebCore/platform/graphics/ConcreteImageBuffer.h:54
> +    ImageBufferBackend* backend() override { return m_backend.get(); }

This is breaking ImageBuffer encapsulation. I think it would be OK to allow someone to request an IOSurface from an ImageBuffer directly, though. Just don't expose the "backend".

> Source/WebCore/platform/graphics/ImageBuffer.cpp:134
> -
> +

Whitespace.

> Source/WebCore/platform/graphics/coreimage/AcceleratedImage.h:33
> +class AcceleratedImage final {

This name is too generic. "accelerated" can mean lots of different things.

> Source/WebCore/platform/graphics/coreimage/AcceleratedImageContext.h:37
> +class AcceleratedImageContext final {

Likewise.

> Source/WebCore/platform/graphics/coreimage/AcceleratedImageContext.mm:48
> +    backend->context().fillRect(FloatRect(1, 1, 0, 0));

This weird thing requires a comment. Is it really necessary?

> Source/WebCore/platform/graphics/coreimage/AcceleratedImageContext.mm:55
> +    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);

This leaks colorSpace.

> Source/WebCore/platform/graphics/coreimage/CoreImageRenderingMode.h:32
> +    DISABLED,
> +    ENABLED

We don't use uppercase enum values. Does this really need its own header file?

> Source/WebCore/platform/graphics/filters/FilterEffect.h:84
>     return m_imageBufferResult
> +#if USE(CORE_IMAGE)
> +        || m_acceleratedImage
> +#endif
>         || m_unmultipliedImageResult
>         || m_premultipliedImageResult;

This is now adding to the horrible complexity that is the "FilterEffect result buffers" problem.

> Source/WebCore/platform/graphics/filters/FilterEffect.h:164
> +#if USE(CORE_IMAGE)
> +    std::unique_ptr<AcceleratedImage> m_acceleratedImage;
> +#endif

Move this below the methods.

> Source/WebCore/platform/graphics/filters/FilterEffect.h:192
> +    virtual void platformApplyCoreImageAccelerated() { platformApplySoftware(); }

Seems wrong?

> Source/WebCore/rendering/CSSFilter.cpp:425
> +std::unique_ptr<ImageBuffer> CSSFilter::createIOSurfaceBackedImageBufferResult()
> +{
> +    auto lastEffect = m_effects.last();
> +    FloatSize clampedSize = ImageBuffer::clampedSize(lastEffect->absolutePaintRect().size());
> +    return ImageBuffer::create(clampedSize, RenderingMode::Accelerated, lastEffect->filter().filterScale(), lastEffect->resultColorSpace());
> +}

Who is going to call this? The output of the filter chain is already an ImageBuffer, so I don't see why we need this.

> Source/WebCore/rendering/CSSFilter.cpp:447
> +void CSSFilter::applyAccelerated()
> +{
> +    auto& effect = m_effects.last().get();
> +    effect.applyAccelerated();
> +    effect.transformResultColorSpace(ColorSpace::SRGB);
> +}
> +#endif // USE(CORE_IMAGE) && HAVE(IOSURFACE)

Why isn't this just inside CSSFilter::apply()?

> Source/WebCore/rendering/CSSFilter.h:72
> +#if HAVE(IOSURFACE) && USE(CORE_IMAGE)
> +    void applyAccelerated();
> +    std::unique_ptr<ImageBuffer> createIOSurfaceBackedImageBufferResult();
> +    AcceleratedImage* acceleratedImageOutput() const;
> +    FloatRect coreImageDestRect();
> +#endif

Do these all need to be public?

> Source/WebCore/rendering/RenderLayerFilters.cpp:131
> +    // If the CoreImage-Accelerated Filter Rendering feature switch is turned on
> +    // we will enable CoreImage rendering on Filters

Comment is not necessary.

> Source/WebCore/rendering/RenderLayerFilters.cpp:203
> +    // Applying filters will have CoreImage specific paths
> +#if USE(CORE_IMAGE)
> +    if (filter.coreImageRenderingMode() == CoreImageRenderingMode::ENABLED)
> +        LOG_WITH_STREAM(Filters, stream << "Rendering Filters Using CoreImage");
> +#endif

This doesn't seem necessary. Log lower down.
frankhome61
Comment 20 2020-07-13 11:29:45 PDT
frankhome61
Comment 21 2020-07-13 11:43:31 PDT
frankhome61
Comment 22 2020-07-13 12:04:27 PDT
(In reply to Sam Weinig from comment #18)
> This remains an issue in the latest version of the change. Perhaps we need
> to step back here and write up the design / class hierarchy / constraints
> before putting this to code?

Hi! Thanks for the comments. I would like some help from you to figure out which changes are considered leaking implementation details and which are not. I do need to make changes inside rendering/ to pass down the state of the feature switch that controls whether the CoreImage rendering path is used. I agree that things like creating ImageBuffers should not be inside rendering/ - ImageBuffer is an implementation detail that RenderLayerFilters doesn't necessarily need to know about. I would like to know more about which things you believe are breaking the abstraction barrier. Thanks!
frankhome61
Comment 23 2020-07-13 12:11:39 PDT
(In reply to Simon Fraser (smfr) from comment #19)
> > Source/WebCore/platform/graphics/ConcreteImageBuffer.h:54
> > +    ImageBufferBackend* backend() override { return m_backend.get(); }
>
> This is breaking ImageBuffer encapsulation. I think it would be OK to allow
> someone to request an IOSurface from an ImageBuffer directly, though. Just
> don't expose the "backend".

I have a question regarding getting an IOSurface from an ImageBuffer. The reason I added a backend() function is that the IOSurface can only be obtained from ImageBufferIOSurfaceBackend, which is NOT a subclass of ImageBuffer. I need to pass the backend information to ConcreteImageBuffer (which is an ImageBuffer subclass) so that I can get the backend pointer from the ImageBuffer, and then the IOSurface. The current abstraction of ImageBuffer seems to prevent me from obtaining an IOSurface given an ImageBuffer alone.
Said Abou-Hallawa
Comment 24 2020-07-13 13:02:42 PDT
My suggestion is the following. In PlatformImageBuffer.h, replace the definition of AcceleratedImageBuffer by:

#if HAVE(IOSURFACE)
class AcceleratedImageBuffer : public ConcreteImageBuffer<ImageBufferIOSurfaceBackend> {
    using Base = ConcreteImageBuffer<AcceleratedImageBufferBackend>;
    using Base::Base;
public:
    IOSurface* surface() { return m_backend->surface(); }
};
#else
using AcceleratedImageBuffer = ConcreteImageBuffer<AcceleratedImageBufferBackend>;
#endif

In your code, you can do this casting:

auto buffer = ImageBuffer::create(clampedSize, RenderingMode::Accelerated, lastEffect->filter().filterScale(), lastEffect->resultColorSpace());
auto surface = static_cast<AcceleratedImageBuffer&>(*buffer).surface();

If we end up using this approach, I think we should use downcast instead.
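The shape of Said's suggestion — a thin subclass that exposes surface() without exposing the backend, plus a type-checked cast rather than a blind static_cast — can be modeled with self-contained toy types (Surface, IOSurfaceLikeBackend, and surfaceIfAccelerated are hypothetical stand-ins, not WebCore classes):

```cpp
#include <cassert>
#include <memory>

// Toy stand-ins for IOSurface and the ImageBuffer backend hierarchy.
struct Surface { int id = 42; };

struct IOSurfaceLikeBackend { Surface m_surface; };

struct ImageBuffer {
    virtual ~ImageBuffer() = default;
    virtual bool isAccelerated() const = 0;
};

// Simplified ConcreteImageBuffer<Backend>: owns its backend privately.
template<typename Backend>
struct ConcreteImageBuffer : ImageBuffer {
    std::unique_ptr<Backend> m_backend = std::make_unique<Backend>();
    bool isAccelerated() const override { return false; }
};

// The suggested thin subclass: expose the surface, not the backend.
struct AcceleratedImageBuffer final : ConcreteImageBuffer<IOSurfaceLikeBackend> {
    bool isAccelerated() const override { return true; }
    Surface* surface() { return &m_backend->m_surface; }
};

// A checked downcast in the spirit of WebKit's downcast<>: guards against the
// pitfall raised later in the thread, where an unaccelerated buffer could be
// blindly cast to AcceleratedImageBuffer.
Surface* surfaceIfAccelerated(ImageBuffer& buffer)
{
    if (!buffer.isAccelerated())
        return nullptr;
    return static_cast<AcceleratedImageBuffer&>(buffer).surface();
}
```

The type check is the important part: it keeps callers from ever dereferencing a surface that does not exist.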
Sam Weinig
Comment 25 2020-07-13 13:52:19 PDT
(In reply to guowei_yang from comment #22)
> Hi! Thanks for the comments. I would like some help from you to figure out
> what changes are considered leaking the implementation details and what are
> not. So I do need to make changes inside rendering/ to pass down the state
> of the feature switch that controls whether CoreImage rendering path is
> used.

As a rule of thumb, if you find yourself using #ifdefs for platform specific functionality (like HAVE(IOSURFACE) and USE(CORE_IMAGE)) in rendering/ code, that's likely an issue (#ifdefs for web facing features, like #if ENABLE(CSS4_COLORS), are fine). In your latest version, for instance, rendering/CSSFilter.h has an #if HAVE(IOSURFACE) && USE(CORE_IMAGE).
frankhome61
Comment 26 2020-07-16 12:17:07 PDT
Simon Fraser (smfr)
Comment 27 2020-07-16 12:30:44 PDT
Comment on attachment 404472 [details] Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=404472&action=review

> Source/WebCore/platform/graphics/filters/FilterEffect.h:75
> +enum FilterEffectClass {
> +    FilterEffectGeneric,
> +    FilterEffectBlend,
> +    FilterEffectColorMatrix,
> +    FilterEffectComponentTransfer,
> +    FilterEffectComposite,
> +    FilterEffectConvolveMatrix,
> +    FilterEffectDiffuseLighting,
> +    FilterEffectDisplacementMap,
> +    FilterEffectDropShadow,
> +    FilterEffectFlood,
> +    FilterEffectGaussianBlur,
> +    FilterEffectLighting,
> +    FilterEffectMerge,
> +    FilterEffectMorphology,
> +    FilterEffectOffset,
> +    FilterEffectSpecularLighting,
> +    FilterEffectTile,
> +    FilterEffectTurbulence,
> +    FilterEffectSourceGraphic
> +};

This should be an enum class:

enum class FilterEffect : uint8_t {
    Generic,
    ...
}

I think this should be FilterEffect::Type and you rename FilterEffectType to something else in an earlier patch.

> Source/WebCore/platform/graphics/filters/FilterEffect.h:81
> +enum CoreImageImplementation {
> +    Unimplemented,
> +    Implemented
> +};

Weird. Why do you need this?

> Source/WebCore/platform/graphics/filters/FilterEffect.h:138
> +    virtual CoreImageImplementation coreImageImplementation() const { return Unimplemented; }

Why not just virtual bool supportsCoreImageRendering() const?

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:46
> +class FilterEffectRendererCoreImage {

It would make sense to have a generic base class FilterEffectRenderer, and have FilterEffectRendererCoreImage be derived from it. Then the rest of the code would just see FilterEffectRenderer, and another platform could plug in their own.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:50
> +    static std::unique_ptr<FilterEffectRendererCoreImage> create(RefPtr<FilterEffect>);

I don't think you should pass in a FilterEffect here; you should be able to keep FilterEffectRendererCoreImage around between renders, and the last FilterEffect can change.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:55
> +    bool hasResult() { return m_outputImage; }

const

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:56
> +    static bool allFiltersCoreImageImplemented(FilterEffect*);

allFiltersCoreImageImplemented -> canRenderFilterGraph() or something, virtual function on the base class.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:58
> +    ImageBuffer* output();

const?

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:59
> +    FloatRect destRect();

const

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:69
> +    HashMap<Ref<FilterEffect>, Vector<CIFilter*>> m_ciFilterStorageMap;

Do you need to own the FilterEffect? Should CIFilter be RetainPtr<CIFilter>?

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:130
> +    auto surface = static_cast<AcceleratedImageBuffer&>(*m_outputImageBuffer).surface();

Handle surface being null.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:131
> +    CGColorSpaceRef resultColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);

Leak.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:147
> +    m_context = [CIContext contextWithOptions: @{ kCIContextWorkingColorSpace: (__bridge id)colorSpace}];

Leak.

> Source/WebCore/rendering/CSSFilter.cpp:383
> +#if USE(CORE_IMAGE)
> +    if (coreImageRenderingMode() == CoreImageRenderingMode::Enabled
> +        && FilterEffectRendererCoreImage::allFiltersCoreImageImplemented(&effect)) {
> +
> +        m_filterRenderer = FilterEffectRendererCoreImage::create(makeRefPtr(effect));
> +        m_filterRenderer->applyEffects();
> +        effect.transformResultColorSpace(ColorSpace::SRGB);
> +        return;
> +    }
> +    setCoreImageRenderingMode(CoreImageRenderingMode::Disabled);
> +#endif

This should be written in terms of a generic FilterEffectRenderer and not #ifdeffed.
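The factoring Simon suggests — a generic FilterEffectRenderer base class whose create() hides the platform choice, so no USE(CORE_IMAGE) #ifdefs leak into rendering/ — looks roughly like the following self-contained sketch (the class bodies here are hypothetical simplifications, not the patch's actual interfaces):

```cpp
#include <cassert>
#include <cstring>
#include <memory>

// Generic renderer interface the rest of the code would see.
// rendering/ code only ever holds a FilterEffectRenderer pointer.
struct FilterEffectRenderer {
    virtual ~FilterEffectRenderer() = default;
    virtual bool canRenderFilterGraph() const = 0;
    virtual const char* name() const = 0;

    // Factory hides which concrete renderer exists on this platform.
    static std::unique_ptr<FilterEffectRenderer> create();
};

// Platform-specific implementation; on Cocoa platforms this is where the
// CoreImage details would live, behind the platform/ abstraction.
struct FilterEffectRendererCoreImage final : FilterEffectRenderer {
    bool canRenderFilterGraph() const override { return true; }
    const char* name() const override { return "CoreImage"; }
};

std::unique_ptr<FilterEffectRenderer> FilterEffectRenderer::create()
{
    // In the real patch this choice would be compiled per-platform;
    // callers never mention the derived type.
    return std::make_unique<FilterEffectRendererCoreImage>();
}
```

With this shape, CSSFilter::apply() can ask a FilterEffectRenderer whether it can handle the graph and fall back to the software path otherwise, all without platform #ifdefs at the call site.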
frankhome61
Comment 28 2020-07-16 19:58:19 PDT
frankhome61
Comment 29 2020-07-16 20:06:33 PDT
frankhome61
Comment 30 2020-07-16 20:11:58 PDT
frankhome61
Comment 31 2020-07-16 20:53:08 PDT
Said Abou-Hallawa
Comment 32 2020-07-17 02:49:45 PDT
Comment on attachment 404532 [details] Patch View in context: https://bugs.webkit.org/attachment.cgi?id=404532&action=review > Source/WebCore/platform/graphics/filters/FilterEffect.h:127 > + virtual Type filterEffectClassType() const { return FilterEffect::Type::Generic; } What is the use of this function in this patch? And what can be the use of it and the enum FilterEffect::Type in the future? Polymorphism will suffice in most of the cases. > Source/WebCore/platform/graphics/filters/FilterEffectRenderer.cpp:38 > + if (renderSettings.coreImageAcceleratedFilterRenderEnabled() > + && FilterEffectRendererCoreImage::canRenderUsingCIFilters(lastEffect)) > + return FilterEffectRendererCoreImage::create(lastEffect); I think calling canRenderUsingCIFilters() should be moved inside FilterEffectRendererCoreImage::create(). In this case FilterEffectRendererCoreImage::canRenderUsingCIFilters() can even be private. > Source/WebCore/platform/graphics/filters/FilterEffectRenderer.h:45 > + FilterEffectRenderer() = default; This constructor should be protected. > Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:32 > +#import "FilterEffectRenderer.h" > +#import <wtf/HashMap.h> > +#include <wtf/Vector.h> import or include but not both please. > Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:53 > + ImageBuffer* output() const override; > + FloatRect destRect(FilterEffect*) const override; > + > + void clearResult() override; I think these functions can be private because FilterEffectRendererCoreImage is not exposed in the code except in FilterEffectRenderer::create(). So all the overridden functions will be accessed through the base class FilterEffectRenderer. > Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:63 > + HashMap<Ref<FilterEffect>, Vector<RetainPtr<CIFilter>>> m_ciFilterStorageMap; Where is this HashMap used? If it will be used later, please remove it and add it only when it is needed. 
> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:59
> +    unsigned size = effect->inputEffects().size();
> +    for (unsigned i = 0; i < size; ++i) {

I think you can use a modern C++ range-based for-loop:

for (auto& in : effect->inputEffects()) {
}

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:64
> +    inputImages.append(connectCIFilters(in));

Did you mean to append inputImage? Do we need to call connectCIFilters(in) twice? Also inputImages is not used.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:95
> +    return nullptr;

This function always returns nullptr. Is this intentional, and are you planning to change it in the following patch? If yes, then please add a FIXME comment explaining your plan.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:98
> +bool FilterEffectRendererCoreImage::canRenderUsingCIFilters(FilterEffect* effect)

const FilterEffect&

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:102
> +    unsigned size = effect->inputEffects().size();
> +    for (unsigned i = 0; i < size; ++i) {
> +        FilterEffect* in = effect->inputEffects().at(i).get();

I think you can use a modern C++ range-based for-loop here as well. Or you can use findMatching:

return effect->inputEffects().findMatching([&](auto& in) {
    return !in->supportsCoreImageRendering() || !canRenderUsingCIFilters(in);
}) == notFound;

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:106
> +    return effect->supportsCoreImageRendering();

I think we should start by checking effect->supportsCoreImageRendering() first, then we loop through the input effects.
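The loop shape suggested above can be sketched in standalone form. This is a hedged sketch, not WebKit code: `Effect`, `inputs`, and `supportsCoreImage` are hypothetical stand-ins for `FilterEffect`, `inputEffects()`, and `supportsCoreImageRendering()`, and it also applies the "check the effect itself first, then recurse into the inputs" ordering the review asks for.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical stand-in for WebCore::FilterEffect; only the loop shape matters.
struct Effect {
    std::vector<std::shared_ptr<Effect>> inputs;
    bool supportsCoreImage = true;
};

// The index-based loop from the patch, rewritten as the suggested
// range-based for-loop, with the effect's own capability checked first.
bool canRenderUsingCIFilters(const Effect& effect)
{
    if (!effect.supportsCoreImage)
        return false;
    for (const auto& in : effect.inputs) {
        if (!canRenderUsingCIFilters(*in))
            return false;
    }
    return true;
}
```

Checking the effect itself before recursing lets an unsupported root bail out without walking the whole input graph.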
> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:113
> +    if (!m_outputImageBuffer)
> +        return nullptr;
> +    return m_outputImageBuffer.get();

This can be written like this:

return m_outputImageBuffer.get();

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:122
> +std::unique_ptr<ImageBuffer> FilterEffectRendererCoreImage::createAcceleratedImageBuffer(FilterEffect* lastEffect)
> +{
> +    FloatSize clampedSize = ImageBuffer::clampedSize(lastEffect->absolutePaintRect().size());
> +    auto buffer = ImageBuffer::create(clampedSize, RenderingMode::Accelerated, lastEffect->filter().filterScale(), lastEffect->resultColorSpace());
> +    buffer->flushContext();
> +    return buffer;
> +}

Creating an ImageBuffer with RenderingMode::Accelerated does not guarantee it will be backed by an IOSurface. We may fall back to the unaccelerated case if the size you pass is greater than IOSurface::maximumSize(). See ImageBufferIOSurfaceBackend::create(), ConcreteImageBuffer::create() and ImageBuffer::create(). It looks like your code assumes the ImageBuffer can always be created and the IOSurface is always available. But these are not correct assumptions. What is the backup plan if the ImageBuffer can't be created, or the unaccelerated ImageBuffer is the only choice you have?

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:127
> +    auto surface = static_cast<AcceleratedImageBuffer&>(*m_outputImageBuffer).surface();

This casting can be wrong and you may be casting an UnacceleratedImageBuffer to AcceleratedImageBuffer, which is a security bug.
> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:134
> +FloatRect FilterEffectRendererCoreImage::destRect(FilterEffect* lastEffect) const

const FilterEffect&

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:145
> +    if (m_outputImageBuffer)
> +        m_outputImageBuffer.reset();

I think it is okay to say m_outputImageBuffer.reset() without checking the value of m_outputImageBuffer.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:155
> +    auto colorSpace = adoptCF(cachedCGColorSpace(lastEffect->resultColorSpace()));

Why are you using adoptCF here? cachedCGColorSpace() returns the CGColorSpaceRef of a static variable. It will never be freed.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:159
> +FilterEffectRendererCoreImage::~FilterEffectRendererCoreImage() = default;

You do not have to define the destructor of this class if it is the default. You need to define it only for the base class FilterEffectRenderer, because it has virtual functions.
Simon Fraser (smfr)
Comment 33 2020-07-17 12:33:25 PDT
Comment on attachment 404532 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=404532&action=review

> Source/WebCore/platform/graphics/cg/ImageBufferIOSurfaceBackend.h:63
> +    IOSurface* surface();

const

> Source/WebCore/platform/graphics/filters/FilterEffect.h:58
> +        Generic,

Remove

>> Source/WebCore/platform/graphics/filters/FilterEffect.h:127
>> +    virtual Type filterEffectClassType() const { return FilterEffect::Type::Generic; }
>
> What is the use of this function in this patch? And what can be the use of it and the enum FilterEffect::Type in the future? Polymorphism will suffice in most of the cases.

Right, make this pure virtual. Or store Type as a member variable and make this non-virtual.

> Source/WebCore/platform/graphics/filters/FilterEffectRenderer.h:34
> +class FilterEffectRenderer {

This class needs a method like:

bool canRenderFilters(const FilterEffect& lastFilter) const

that is called to determine if it can handle a given filter chain.

> Source/WebCore/platform/graphics/filters/FilterEffectRenderer.h:38
> +    static std::unique_ptr<FilterEffectRenderer> create(Settings&, FilterEffect*);

I don't think this should take Settings as an argument. Code in platform/ should not see Settings.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:40
> +class FilterEffectRendererCoreImage : public FilterEffectRenderer {

This file should live in platform/graphics/cocoa or maybe a new platform/graphics/coreimage directory.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:46
> +    void applyEffects(FilterEffect*) override;

override -> final (on other methods too). Also FilterEffect& if it's always non-null.

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:49
> +    std::unique_ptr<ImageBuffer> createAcceleratedImageBuffer(FilterEffect*);

Just pass a size, not a FilterEffect*. In fact, why is this a public method?
> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:51
> +    FloatRect destRect(FilterEffect*) const override;

FilterEffect&

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:59
> +    CIImage* connectCIFilters(FilterEffect*);

FilterEffect&

> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.h:60
> +    void renderToImageBuffer(FilterEffect*) override;

FilterEffect&

> Source/WebCore/rendering/CSSFilter.cpp:371
> +    m_filterRenderer->applyEffects(&effect);

What if the filter renderer can't handle this chain? This needs a fallback.
Said Abou-Hallawa
Comment 34 2020-07-17 13:02:30 PDT
Comment on attachment 404532 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=404532&action=review

>> Source/WebCore/platform/graphics/filters/FilterEffectRendererCoreImage.mm:127
>> +    auto surface = static_cast<AcceleratedImageBuffer&>(*m_outputImageBuffer).surface();
>
> This casting can be wrong and you may be casting UnacceleratedImageBuffer to AcceleratedImageBuffer which is a security bug.

You can do the following. In PlatformImageBuffer.h, add the following SPECIALIZE_TYPE_TRAIT:

SPECIALIZE_TYPE_TRAITS_BEGIN(WebCore::AcceleratedImageBuffer)
static bool isType(const WebCore::ImageBuffer& buffer) { return buffer.isAccelerated(); }
SPECIALIZE_TYPE_TRAITS_END()

In ImageBuffer.h, add the following virtual function:

virtual bool isAccelerated() const = 0;

In ConcreteImageBuffer.h, override this function:

bool isAccelerated() const override
{
    if (auto* backend = ensureBackendCreated())
        return backend->isAccelerated();
    return false;
}

In ImageBufferBackend.h, add the following virtual function:

virtual bool isAccelerated() const { return false; }

In ImageBufferIOSurfaceBackend., override this function:

bool isAccelerated() const override { return true; }

In FilterEffectRendererCoreImage::createAcceleratedImageBuffer, add the following if-statement, which should cover the nullability test as well:

if (!is<AcceleratedImageBuffer>(*buffer))
    return nullptr;

And in this function change the casting to be:

auto surface = downcast<AcceleratedImageBuffer>(*m_outputImageBuffer).surface();

Maybe we need to override ImageBufferCairoGLSurfaceBackend::isAccelerated() also, just for the sake of correctness, even if it is not currently used.
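The checked-cast machinery Said describes can be sketched in a self-contained form. This is a hedged sketch only: the real `SPECIALIZE_TYPE_TRAITS_*` macros live in WebKit's wtf/TypeCasts.h and generate more than this, and `ImageBuffer`/`AcceleratedImageBuffer` here are simplified hypothetical versions of the real classes.

```cpp
#include <cassert>

// Simplified stand-in for WebCore::ImageBuffer.
struct ImageBuffer {
    virtual ~ImageBuffer() = default;
    virtual bool isAccelerated() const { return false; }
};

// Simplified stand-in for WebCore::AcceleratedImageBuffer, carrying the
// static isType() predicate the SPECIALIZE_TYPE_TRAITS macros would emit.
struct AcceleratedImageBuffer final : ImageBuffer {
    bool isAccelerated() const override { return true; }
    static bool isType(const ImageBuffer& buffer) { return buffer.isAccelerated(); }
};

// is<T>() asks the type-trait predicate instead of trusting the caller.
template<typename T> bool is(const ImageBuffer& buffer)
{
    return T::isType(buffer);
}

// downcast<T>() is a checked cast: unlike the raw static_cast the review
// flags, it asserts the dynamic type really matches before casting.
template<typename T> T& downcast(ImageBuffer& buffer)
{
    assert(is<T>(buffer));
    return static_cast<T&>(buffer);
}
```

The point of the pattern is that a mismatched cast now trips an assertion in debug builds instead of silently reinterpreting an UnacceleratedImageBuffer as an accelerated one.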
frankhome61
Comment 35 2020-07-17 15:06:58 PDT
frankhome61
Comment 36 2020-07-17 15:26:05 PDT
Created attachment 404607 [details] Patch - resolved wpe's merge conflict
frankhome61
Comment 37 2020-07-17 16:35:35 PDT
Simon Fraser (smfr)
Comment 38 2020-07-21 09:38:27 PDT
Comment on attachment 404618 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=404618&action=review

Let's do one more round. Would it be hard to get a single filter working here, so we can land this with some tests?

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:31
> +#if USE(CORE_IMAGE)
> +
> +#import "config.h"
> +#import "FilterEffectRendererCoreImage.h"
> +
> +#import "Filter.h"

The order here should be:

#import "config.h"
#import "FilterEffectRendererCoreImage.h"

#if USE(CORE_IMAGE)

#import "Filter.h"
...

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:59
> +    Vector<CIImage*> inputImages;

What's the ownership model for the CIImage*? Who holds the refs to them?

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:119
> +    buffer->flushContext();

Why do you need to flushContext() right after creating it?

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:126
> +    m_outputImageBuffer = createAcceleratedImageBuffer(clampedSize, lastEffect.filter().filterScale(), lastEffect.resultColorSpace());

createAcceleratedImageBuffer() is basically one line, so maybe just move its contents to here.

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:134
> +    auto surface = static_cast<AcceleratedImageBuffer&>(*m_outputImageBuffer).surface();

You shouldn't use static_cast here. You need to make it possible to ask "is this an accelerated image buffer", and use is<> and downcast<>.

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:161
> +    CGColorSpaceRef colorSpace = cachedCGColorSpace(lastEffect.resultColorSpace());
> +    m_context = [CIContext contextWithOptions: @{ kCIContextWorkingColorSpace: (__bridge id)colorSpace}];

It doesn't seem right to assume that the working colorspace is the result colorspace of the last effect.
> Source/WebCore/platform/graphics/filters/FilterEffect.h:128
> +    virtual Type filterEffectClassType() const = 0;

Rather than all the virtual functions, I'd do this by storing Type as a const member var (see FilterOperation for an example).
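The const-member pattern Simon points at (as used by FilterOperation) can be sketched like this. Hedged sketch: the names `filterEffectType()` and the enumerators are simplified stand-ins, not the actual WebKit declarations.

```cpp
#include <cassert>

// Simplified stand-in for WebCore::FilterEffect using a const data member
// set at construction instead of a virtual filterEffectClassType().
class FilterEffect {
public:
    enum class Type { Blend, Flood, Merge };

    // Non-virtual: reading a member avoids a vtable dispatch per query and
    // avoids one override in every subclass.
    Type filterEffectType() const { return m_type; }

protected:
    explicit FilterEffect(Type type)
        : m_type(type)
    {
    }

private:
    const Type m_type; // fixed for the lifetime of the object
};

// A concrete effect just passes its tag up to the base constructor.
class FEBlend final : public FilterEffect {
public:
    FEBlend()
        : FilterEffect(Type::Blend)
    {
    }
};
```

Compared with a pure-virtual `filterEffectClassType()`, this moves the per-class boilerplate from N overrides to N constructor arguments and makes the tag trivially inlinable.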
frankhome61
Comment 39 2020-07-21 10:36:25 PDT
(In reply to Simon Fraser (smfr) from comment #38)
> Comment on attachment 404618 [details]
> Patch
>
> View in context:
> https://bugs.webkit.org/attachment.cgi?id=404618&action=review
>
> Let's do one more round. Would it be hard to get a single filter working
> here, so we can land this with some tests?
>
> > Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:31
> > +#if USE(CORE_IMAGE)
> > +
> > +#import "config.h"
> > +#import "FilterEffectRendererCoreImage.h"
> > +
> > +#import "Filter.h"
>
> The order here should be:
>
> #import "config.h"
> #import "FilterEffectRendererCoreImage.h"
>
> #if USE(CORE_IMAGE)
>
> #import "Filter.h"
> ...
>
> > Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:59
> > +    Vector<CIImage*> inputImages;
>
> What's the ownership model for the CIImage*? Who holds the refs to them?

All CIImage* are temporary, and the last output image is owned by the renderer object. They all live in the autorelease pool, and they will be destroyed once we destroy the renderer.

> > Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:119
> > +    buffer->flushContext();
>
> Why do you need to flushContext right after creating it?

When creating an ImageBufferIOSurfaceBackend, ImageBufferIOSurfaceBackend::create() calls CGContextClearRect(cgContext.get(), FloatRect(FloatPoint::zero(), backendSize)). This just puts the command in the pipeline, but no execution happens. I need to flush the context to force this step to happen before I draw anything into it; otherwise whatever I draw into the surface will then be cleared.

> > Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:126
> > +    m_outputImageBuffer = createAcceleratedImageBuffer(clampedSize, lastEffect.filter().filterScale(), lastEffect.resultColorSpace());
>
> createAcceleratedImageBuffer() is basically one line, so maybe just move its
> contents to here.
> > Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:134
> > +    auto surface = static_cast<AcceleratedImageBuffer&>(*m_outputImageBuffer).surface();
>
> You shouldn't use static_cast here. You need to make it possible to ask "is
> this an accelerated image buffer", and use is<> and downcast<>.
>
> > Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:161
> > +    CGColorSpaceRef colorSpace = cachedCGColorSpace(lastEffect.resultColorSpace());
> > +    m_context = [CIContext contextWithOptions: @{ kCIContextWorkingColorSpace: (__bridge id)colorSpace}];
>
> It doesn't seem right to assume that the working colorspace is the result
> colorspace of the last effect.

This is just setting the color space of the CIContext. When creating a CIContext, you always need to settle on ONE color space and work within it. I can change it to always use sRGB.

> > Source/WebCore/platform/graphics/filters/FilterEffect.h:128
> > +    virtual Type filterEffectClassType() const = 0;
>
> Rather than all the virtual functions, I'd do this by storing Type as a
> const member var (see FilterOperation for an example).
frankhome61
Comment 40 2020-07-21 12:38:46 PDT
frankhome61
Comment 41 2020-07-21 12:46:09 PDT
frankhome61
Comment 42 2020-07-21 12:55:11 PDT
Simon Fraser (smfr)
Comment 43 2020-07-22 11:18:07 PDT
Comment on attachment 404853 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=404853&action=review

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:130
> +    // FIXME: In the situation where the Image is too large, an UnacceleratedImageBuffer will be created
> +    // in this case, special handling is needed to render to ImageBufferCGBackend instead
> +    if (!is<AcceleratedImageBuffer>(*m_outputImageBuffer))
> +        return;

This should have a test once you start adding them. We'll need to gracefully fall back to non-accelerated filters (or figure out another solution, like resolution reduction).

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:159
> +    CGColorSpaceRef colorSpace = cachedCGColorSpace(ColorSpace::SRGB);

OK for now, but when you start using this for SVG, you'll have to pass in the colorspace from outside.

> Source/WebCore/platform/graphics/filters/FEBlend.h:53
> +    Type filterEffectClassType() const override { return FilterEffect::Type::Blend; }

You didn't do the change I asked for before?

> Source/WebCore/platform/graphics/filters/FilterEffect.h:131
> +    virtual bool supportsCoreImageRendering() const { return false; }

Now that you have FilterEffectRenderer[CoreImage], which keeps all the CI code out of FilterEffect, I don't think you should have this virtual function. Instead, have:

virtual bool FilterEffectRenderer::supportsRendering(const FilterEffect&) const;
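The shape Simon suggests — the capability query living on the renderer rather than on FilterEffect — can be sketched as below. Hedged sketch only: every name here is a simplified hypothetical version of the WebKit class, and the "only blend is supported" rule is invented for illustration.

```cpp
#include <cassert>

enum class EffectKind { Blend, Flood };

// Stand-in FilterEffect: note it knows nothing about CoreImage.
struct FilterEffect {
    EffectKind kind;
};

struct FilterEffectRenderer {
    virtual ~FilterEffectRenderer() = default;
    // The base renderer supports nothing; callers fall back to software.
    virtual bool supportsRendering(const FilterEffect&) const { return false; }
};

// The platform renderer encapsulates its own knowledge of which effects
// have an accelerated mapping, instead of FilterEffect advertising it.
struct FilterEffectRendererCoreImage final : FilterEffectRenderer {
    bool supportsRendering(const FilterEffect& effect) const final
    {
        // Hypothetical rule: only blend is wired up so far.
        return effect.kind == EffectKind::Blend;
    }
};
```

This keeps per-backend capability knowledge out of the effect classes, so adding another accelerated backend later would not touch FilterEffect at all.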
Said Abou-Hallawa
Comment 44 2020-07-22 11:46:32 PDT
Comment on attachment 404853 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=404853&action=review

> Source/WebCore/platform/graphics/coreimage/FilterEffectRendererCoreImage.mm:122
> +    m_outputImageBuffer->flushContext();
> +
> +    if (!m_outputImageBuffer) {

You are calling m_outputImageBuffer->flushContext() and then checking whether m_outputImageBuffer is null. flushContext() should be moved even further down, after the is<AcceleratedImageBuffer> check. It might be even better to move it to ImageBufferIOSurfaceBackend::surface(); it is better to hide the IOSurface details from the caller side. It is confusing to see the creation of an ImageBuffer followed immediately by a call to flushContext(). Only a close look at ImageBufferIOSurfaceBackend::create() makes this understandable: the call to CGContextClearRect() there needs to be flushed before accessing the IOSurface bytes.
frankhome61
Comment 45 2020-07-22 12:46:20 PDT
Created attachment 404950 [details] Prepare to commit
Said Abou-Hallawa
Comment 46 2020-07-23 10:09:37 PDT
Comment on attachment 404950 [details]
Prepare to commit

View in context: https://bugs.webkit.org/attachment.cgi?id=404950&action=review

> Source/WebCore/platform/graphics/PlatformImageBuffer.h:70
> +    IOSurface* surface() { return m_backend->surface(); }

I think this can return a reference to an IOSurface. See the constructor of ImageBufferIOSurfaceBackend, where we ASSERT(m_surface).
frankhome61
Comment 47 2020-07-23 10:41:02 PDT
Created attachment 405057 [details] Addressed new comments; ready to commit
EWS
Comment 48 2020-07-23 15:44:22 PDT
ChangeLog entry in Source/WebCore/ChangeLog contains OOPS!.
frankhome61
Comment 49 2020-07-23 15:48:09 PDT
Created attachment 405087 [details] Patch - Added reviewer name, ready for commit
EWS
Comment 50 2020-07-23 16:42:47 PDT
Committed r264805: <https://trac.webkit.org/changeset/264805>

All reviewed patches have been landed. Closing bug and clearing flags on attachment 405087 [details].
Simon Fraser (smfr)
Comment 51 2020-07-23 21:47:50 PDT
Comment on attachment 405087 [details]
Patch - Added reviewer name, ready for commit

View in context: https://bugs.webkit.org/attachment.cgi?id=405087&action=review

> Source/WebCore/rendering/CSSFilter.cpp:308
> +#if USE(CORE_IMAGE)

You can (and should) remove this #ifdef. The implementation of FilterEffectRenderer::tryCreate() will just return null for non-CI platforms.
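The factory pattern Simon describes can be sketched as follows. Hedged sketch: `tryCreate()` and the `HAVE_ACCELERATED_FILTER_PATH` macro are hypothetical stand-ins (the real guard is `USE(CORE_IMAGE)`), and it deliberately ignores the feature-switch question frank raises in reply.

```cpp
#include <memory>

struct FilterEffect { };

struct FilterEffectRenderer {
    virtual ~FilterEffectRenderer() = default;

    // The #ifdef lives inside the factory, not at the call site: on
    // platforms without the accelerated path this simply returns null,
    // and the caller treats null as "use the software filter pipeline".
    static std::unique_ptr<FilterEffectRenderer> tryCreate(FilterEffect&)
    {
#if defined(HAVE_ACCELERATED_FILTER_PATH)
        return std::make_unique<FilterEffectRenderer>(); // platform renderer
#else
        return nullptr; // caller falls back to software filters
#endif
    }
};
```

The call site then becomes an unconditional `if (auto renderer = FilterEffectRenderer::tryCreate(effect))`, which is the shape that makes the `#if USE(CORE_IMAGE)` guard in CSSFilter.cpp removable.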
frankhome61
Comment 52 2020-07-24 11:37:55 PDT
(In reply to Simon Fraser (smfr) from comment #51)
> Comment on attachment 405087 [details]
> Patch - Added reviewer name, ready for commit
>
> View in context:
> https://bugs.webkit.org/attachment.cgi?id=405087&action=review
>
> > Source/WebCore/rendering/CSSFilter.cpp:308
> > +#if USE(CORE_IMAGE)
>
> You can (and should) remove this #ifdef. The implementation of
> FilterEffectRenderer::tryCreate() will just return null for non-CI platforms.

Well, I do think this ifdef is needed, because we need to query the state of the feature switch here, which is only defined on Cocoa platforms. While it's true we can return null for non-CI platforms, we still need the state of the feature switch to know whether we want to use the renderer.